multi-agent system’s output generated by the critic mechanism f, we perform prompt optimization for the participating agents inside the system. Specifically, we first use a locator L to identify the underperforming agents and generate explanations of their failures, utilizing the textual feedback I (§4.2.1). Then, an optimizer O optimizes the system prompts of the identified underperforming agents, utilizing the explanations of their failures (§4.2.2).

4.2.1 Locating

Natural language feedback I for the multi-agent system’s output provides guidance regarding which directions to improve agent components in the system. However, it is hard to improve individual agents solely based on global feedback on the final collaboration output. As a result, we design an LLM-based locator L that is better capable of generating fine-grained explanations to guide optimization in the next step.

[Figure 1: A high-level workflow of our two-step optimization pipeline. Please refer to §4.2 for more details.]

Our locator takes three major components as inputs: the software task description X, the global natural language feedback I, and the multi-agent collaboration details G. Collaboration details include two major components: role descriptions and communication trajectories of all participating agents. With a carefully designed prompt (see Figure 9 in Appendix), the locator focuses on the negative aspects of the global feedback and navigates to the underperforming agents N_f ⊆ N responsible for those negative aspects. The locator also generates specific failure explanations E = {E_n | n ∈ N_f} for the identified underperforming agents as fine-grained signals to better guide the later optimization step. Overall, the input and output workflow of the locator is N_f, E = L(X, I, G).
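As a sketch, the locating step N_f, E = L(X, I, G) amounts to a single structured LLM call over the task, feedback, and collaboration details. The `call_llm` stub and the JSON output schema below are illustrative assumptions, not the paper's actual locator prompt:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real GPT-4 call; returns canned locator output.
    return json.dumps({
        "underperforming_agents": ["reviewer_1"],
        "explanations": {"reviewer_1": "Missed an unhandled exception path."},
    })

def locate(task, feedback, collab_details):
    """N_f, E = L(X, I, G): find underperforming agents and explain their failures."""
    prompt = (
        "Software task description X:\n" + task
        + "\n\nGlobal natural language feedback I:\n" + feedback
        + "\n\nCollaboration details G (roles and communication trajectories):\n"
        + collab_details
        + "\n\nFocus on the negative aspects of the feedback. Return JSON with keys "
          "'underperforming_agents' and 'explanations'."
    )
    out = json.loads(call_llm(prompt))
    return out["underperforming_agents"], out["explanations"]
```

In a real run, `call_llm` would be replaced by an API call, and the parsed agent names and explanations would be passed on to the optimizing step.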
4.2.2 Optimizing

After identifying the underperforming agents and obtaining fine-grained explanations of their failures from the locator in the previous step, we utilize an LLM-based optimizer O to optimize the system prompts of the underperforming agents. For each underperforming agent n, the optimizer takes two critical components as input: the input-output pair M_n and the fine-grained failure explanations E_n of this agent. The input-output pair includes the system message P_n, all user messages, and the output of the agent. We carefully design the prompt (see Figure 10 in Appendix) to guide the optimizer O to optimize the system prompt of each underperforming agent in the direction where the fine-grained failure explanations can be mitigated: P_n = O(P_n, E_n, ...).

4.3 Optimization Settings

Based on our proposed two-step optimization pipeline, we proceed to investigate the impact of various optimization settings on the performance of the multi-agent system using two comparison groups: online against offline (§4.3.1) and individual against group (§4.3.2) optimization. For the group optimization setting, we further investigate two different prompting methods: one-pass against multi-pass optimization prompting.

4.3.1 Online against Offline Optimization

We are inspired by the main difference between online and offline reinforcement learning (Levine et al., 2020): under the online setting, data for learning is collected in real time by agents interacting with the environment, whereas under the offline setting it is collected beforehand. We apply this online-versus-offline distinction to our study. The difference between our online and offline settings lies in how the textual feedback is collected during each optimization step (each training instance in our case).
https://arxiv.org/abs/2505.16086v1
For the online setting, we use the optimized prompts at the current step to derive the solution code, so agents must interact with the environment (the critic) to retrieve real-time feedback. For the offline setting, in contrast, we use the default initial agent prompts to derive solution code and retrieve feedback for all training instances beforehand. This offline feedback-collection process fits open-ended tasks like software development, where high-quality human-annotated feedback can be collected beforehand.

4.3.2 Individual against Group Optimization

During the locating step, multiple agents are usually identified as underperforming. We aim to investigate which is more effective: optimizing all of them, or disentangling the optimization by updating each agent individually. One can view group optimization as a complete optimization process, as it optimizes all components in each step; however, it also potentially brings problems such as overfitting. The individual optimization setting in our study can mitigate this concern by reducing the optimization complexity and gradually optimizing one agent at a time. Concretely, we randomly sample one underperforming agent for optimization and leave the other underperforming agents untouched during each optimization step.

One-Pass versus Multi-Pass Prompting. For group optimization, we study two prompting methods. We can optimize each identified underperforming agent with a separate LLM inference call, which we call multi-pass prompting. We can also optimize all agents jointly with a single LLM inference call, which we call one-pass prompting. Multi-pass prompting could be more accurate than one-pass prompting, as one-pass prompting introduces irrelevant information about other agents. However, one-pass prompting can utilize the interconnection between agent components, and it is more efficient as it consumes fewer API calls.
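The three update strategies above differ only in how many agents and how many optimizer calls are involved per step. A minimal sketch, where `optimize_one` and `optimize_all` stand in for the LLM-based optimizer O of §4.2.2 (the function names are illustrative, not from the paper):

```python
import random

def individual_step(underperforming, optimize_one):
    # Individual setting: randomly sample one underperforming agent per step,
    # leaving the others untouched.
    target = random.choice(sorted(underperforming))
    return {target: optimize_one(target)}

def group_step_multi_pass(underperforming, optimize_one):
    # Group setting, multi-pass prompting: one optimizer call per agent.
    return {agent: optimize_one(agent) for agent in sorted(underperforming)}

def group_step_one_pass(underperforming, optimize_all):
    # Group setting, one-pass prompting: a single joint optimizer call
    # covering all underperforming agents.
    return optimize_all(sorted(underperforming))
```

For k underperforming agents, multi-pass prompting issues k optimizer calls per step while one-pass prompting issues exactly one, which is the efficiency difference noted above.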
5 Experiments

5.1 Dataset

We use SRDD (Software Requirement Description Dataset) (Qian et al., 2024) as the software requirement description dataset in our study. SRDD comprises 1,200 software task prompts extracted from ChatGPT, spanning 5 major categories: education, work, life, game, and creation. We randomly shuffle and split the entire dataset into train, development, and test splits with a ratio of 6:2:2.

5.2 Critic Mechanism Implementation

We explore various evaluation dimensions pertinent to software development tasks. For each dimension, a critic mechanism generates both a scalar utility score and natural language feedback on the solution, as described in §4.1. During implementation, scores and feedback are generated simultaneously or separately for different dimensions. Overall, they are generated using two high-level methodologies. The first is the rule-based method, where scores or feedback are generated based on heuristic rules or external tools. The second is the model-based method, following recent work on LLMs as judges (Zheng et al., 2023; Dubois et al., 2023; McAleese et al., 2024), where we guide LLMs to evaluate solution code with carefully designed prompts.

For the functionality, robustness, and test case coverage dimensions, we generate scores on a scale from 0 to 10 and natural language feedback at the same time, using GPT-4 (OpenAI, 2023) as the judge. For the documentation dimension, we use GPT-4 to generate natural language feedback; however, we directly use the number of lines of comments
and docstrings in the solution code as the scalar score. For the code style violation dimension, we utilize an external tool, pycodestyle², to check the software code against the style conventions in PEP 8. We define the total number of violations the checker identifies and the corresponding explanations as the score and feedback. Please refer to Appendix B for more details of the prompts for obtaining scores and feedback using GPT-4.

² https://pycodestyle.pycqa.org/en/latest/

5.3 Experiment Setting

We use the gpt-4-0613 version of GPT-4 as the LLM everywhere in our study, with a temperature of 0.1. We randomly sample 5 task descriptions³ from the training set. At each optimization step, we optimize the current agent system prompts into a new group of prompts, which will be optimized in the next step. Unless mentioned explicitly, we always randomly sample 100 task descriptions from the test set to report evaluation results, due to budget constraints.

5.4 Baselines

We consider the unoptimized system and two baselines for comparison. Unoptimized: we use the default agent system prompts to run the pipeline for code generation directly, without optimization. One-shot: we randomly sample one agent communication trajectory and feedback pair from the training set into the system prompts of all agents as a demonstration and ask them to avoid making mistakes similar to those presented in the feedback. Direct optimization (TextGrad): we consider another baseline that directly optimizes all system prompts given textual feedback. This aligns with TextGrad (Yuksekgonul et al., 2024), which backpropagates textual feedback to improve individual components of a compound AI system. Note that the key difference lies in that TextGrad does not identify the underperforming agents as the locator in our framework does, and directly utilizes the textual feedback for optimization.
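Returning to the rule-based critics of §5.2: the documentation score (lines of comments and docstrings in the solution code) can be approximated with the Python standard library alone. This is an illustrative sketch, not the paper's exact counting code:

```python
import ast
import io
import tokenize

def documentation_score(source: str) -> int:
    """Count comment lines plus docstring lines, as a documentation score."""
    # Comment lines via the tokenizer.
    comment_lines = sum(
        1 for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type == tokenize.COMMENT
    )
    # Docstring lines via the AST (module, class, and function docstrings).
    doc_lines = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Module, ast.FunctionDef,
                             ast.AsyncFunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node, clean=False)
            if doc is not None:
                doc_lines += doc.count("\n") + 1
    return comment_lines + doc_lines
```

The code style violation score, by contrast, comes directly from running pycodestyle on the solution file and counting the reported violations.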
Another related work is DSPy (Khattab et al., 2024), a programming model that abstracts LLM pipelines as text transformation graphs, i.e., imperative computational graphs where LLMs are invoked through declarative modules. However, its evaluation metric must output numerical values instead of textual feedback, which does not fit our setting. Therefore, we leave adapting it to our setting as future work.

5.5 Main Results

³ We found more steps of optimization unnecessary; please refer to §C in Appendix for more details.

                                 Functionality (↑)  Robustness (↑)  Coverage (↑)  Documentation (↑)  Violation (↓)
Unoptimized                            6.90             6.75            0.32            3.80             6.62
One-Shot                               6.66             7.47            7.00           15.33             3.80
Direct Optimization (TextGrad)         6.74             7.11            6.31           16.07             6.90
Our Optimization
  Offline Setting
    Individual                         6.81             7.20            7.27           19.46             5.18
    Group w/ Multi-Pass                7.22             7.63            7.64           17.92             4.35
    Group w/ One-Pass                  7.06             7.77            6.99           19.92             4.82
  Online Setting
    Individual                         7.02             7.55            6.95           20.14             5.34
    Group w/ Multi-Pass                7.26             7.60            6.48           20.15             3.03
    Group w/ One-Pass                  7.26             7.74            7.81           20.33             3.24

Table 1: Evaluation scores under all optimization settings across all evaluation dimensions for baselines and our proposed two-step optimization pipeline. Bold numbers indicate best-performing results, and underlined numbers indicate second-best results. Note that the score range for the first three dimensions is 0-10, and the last two are simply positive integers.

[Figure 2: Training and development evaluation score curves for all evaluation dimensions under online group optimization with one-pass prompting.]

We show the main results in Table 1. Neither the one-shot nor the direct optimization baseline can consistently outperform the unoptimized system across all evaluation dimensions. Both fail on the functionality dimension, which is probably the most important evaluation dimension for software development. Direct optimization also fails on the code violation dimension. However, our proposed two-step optimization pipeline effectively optimizes the multi-agent system, as evaluated by all optimization dimensions across all optimization settings, except for the offline individual setting on the functionality dimension, where we observe a gap of only 0.1 behind the unoptimized system. Our best-performing optimization setting, online group optimization with one-pass prompting, is always better than the two baselines. Our optimization method outperforms the one-shot baseline in 22 out of the total 30 cases across all evaluation dimensions and optimization settings, and always outperforms the direct optimization baseline.

Next, we compare the performance of our optimization pipeline under all optimization settings. First, the most effective strategy appears to be online group optimization with one-pass prompting, whose performance is consistently among the top two across all evaluation dimensions.
Secondly, individual optimization is worse than group optimization in most cases; however, it is still an effective optimization strategy compared with the unoptimized system in all cases except the offline setting on the functionality dimension. This means optimizing all underperforming agents at each step is a better practice than gradually optimizing a single agent component along the optimization steps in our case study, possibly because an LLM-based multi-agent system is still not complex enough to bring up issues like overfitting, compared with more complicated systems such as neural networks. Thirdly, although counterintuitive in our case study, the offline setting is still an effective optimization strategy. The offline setting is worse than the online setting on only 2 out of 5 evaluation dimensions. As discussed in Section 4.3.1, this enables human intervention by providing high-quality feedback annotation beforehand, which is even more beneficial when the required training data size is small, meaning less human effort is required for annotation. This matches our case, and we leave investigating whether human-annotated high-quality feedback leads to even better optimization for future work. Finally, we do not observe a consistent performance difference between one-pass and multi-pass optimization prompting for the group optimization setting across all evaluation dimensions. This suggests that although the single LLM call of one-pass prompting contains irrelevant information about other agents, it does not affect the overall optimization process under our current setup. As a result, one could
choose one-pass optimization for higher efficiency without sacrificing performance.

5.6 Analyses

5.6.1 Optimization Curve Analysis

In this section, we plot the optimization curves of the average evaluation scores on the training and development sets with respect to each optimization step. Due to budget limits, we are not able to plot curves for all development examples, so we randomly sample 30 examples from the development set. Figure 2 shows the optimization curves for all evaluation dimensions under the online group optimization with one-pass prompting setting. We chose this setting as it gives superior performance compared with other settings, as discussed in the previous section. First, we can tell that there is no overfitting happening during training, as the trends of both training and development curves are consistent across all evaluation dimensions. Second, we observe that fully model-based critic mechanisms (the first three) tend to show less stable training than rule-based critic mechanisms (the last two), as they oscillate more often. Finally, training curves are less stable than development curves, possibly due to the sparsity of training data compared with development data.

[Figure 3: Training curve analysis for the code style violation evaluation dimension under major optimization settings.]

Figure 3 shows the optimization training curves under major optimization settings (we use group optimization as the default) for the code style violation dimension, since it has a completely model-free score and feedback generation process, avoiding potential bias introduced by model-based evaluation. We observe that the online setting shows much better stability than the offline setting under both one-pass and multi-pass prompting, as its curves oscillate much less.

           Unoptimized   Default   Empty
Func. (↑)      6.90        7.26     7.06
Rob.  (↑)      6.75        7.74     7.77
Cov.  (↑)      0.32        7.81     7.31
Doc.  (↑)      3.80       20.33    19.8
Comp. (↓)      6.62        3.24     4.61

Table 2: The effect of starting optimization from an "empty" agent prompt instead of a default prompt.

5.6.2 Starting from "Empty" Prompts

Instead of starting optimization from a default prompt, we study whether our optimization pipeline is still effective when the prompt to optimize starts from empty. Note that the "empty" prompts are not completely empty, as they still contain very basic context (prompts in black), as shown in Figures 4 and 5. We slightly increase the number of optimization steps from the default of 5 to 8. We analyze online group optimization with one-pass prompting across all evaluation dimensions. As shown in Table 2, starting from an "empty" prompt, our pipeline is still able to optimize the multi-agent system to a level that is on par with, or only slightly worse than, the system optimized from an informative default starting prompt.

5.6.3 Case Study

We provide the optimized agent prompts at the final step for the functionality, robustness, and code style violation evaluation dimensions for 2 agent roles: the programmer agent in Table 4 and the software test engineer agent in Table 5 in Appendix. We generally observe agent prompts being optimized towards the evaluation dimension, as
shown in green text. However, we also notice a current problem: as shown in the red text, the optimized prompts might contain instance-specific content that does not apply to general software development tasks, even though we deliberately prompt the optimizer to think generally instead of focusing on the current task only. We leave mitigating this problem to future work.

6 Conclusion

In this work, we present a case study on group behavior optimization with multiple LLM-based agents utilizing natural language feedback on software development. We first propose a two-step optimization framework to effectively optimize a role-based multi-agent system under various user-defined evaluation dimensions. We then investigate the impacts of various optimization settings and provide valuable insights regarding group optimization behaviors under those settings.

Limitations

First, we conduct a case study on the group optimization of role-based LLM-based multi-agent systems on software development tasks; thus, the scope of the study is limited to software development. A natural next step is to work on other real-world domains, such as GitHub issue resolving or web task completion, to which role-based multi-agent systems can be applied. Second, it could be beneficial to explore other decision-making strategies in the multi-agent system. We currently use a vertical decision-making structure following previous work, as mentioned in the Appendix; however, we could explore other decision-making strategies, such as horizontal decision-making. Finally, we have only conducted experiments with OpenAI's GPT-series models. It would also be better to try another model family, such as Anthropic's Claude.
References

Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alexander Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario M Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil Jayant Joshi, Ryan C. Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego M Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, F. Xia, Ted Xiao, Peng Xu, Sichun Xu, and Mengyuan Yan. 2022. Do as I can, not as I say: Grounding language in robotic affordances. In Conference on Robot Learning.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.

Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. 2023. Large language models as tool makers. ArXiv, abs/2305.17126.

Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2024. Chateval: Towards better LLM-based evaluators
through multi-agent debate. In The Twelfth International Conference on Learning Representations.

Harrison Chase. 2022. LangChain.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé, Jared Kaplan, Harrison Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, David W. Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William H. Guss, Alex Nichol, Igor Babuschkin, Suchir Balaji, Shantanu Jain, Andrew Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew M. Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. ArXiv, abs/2107.03374.

Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2024. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. In The Twelfth International Conference on Learning Representations.

Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, and Zhiting Hu. 2022. RLPrompt: Optimizing discrete text prompts with reinforcement learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3369–3391, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. 2024. Self-collaboration code generation via ChatGPT. ACM Trans. Softw. Eng. Methodol. Just Accepted.

Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. 2024. Improving factuality and reasoning in language models through multiagent debate.

Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori Hashimoto. 2023. Alpacafarm: A simulation framework for methods that learn from human feedback. In Thirty-seventh Conference on Neural Information Processing Systems.

Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics.

Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. 2024. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. In The Twelfth International Conference on Learning Representations.

Rui Hao, Linmei Hu, Weijian Qi, Qingliu Wu, Yirui Zhang, and Liqiang Nie. 2023. ChatLLM network: More brains, more intelligence. ArXiv, abs/2304.12998.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021a. Measuring massive multitask language understanding. In International Conference on Learning Representations.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the MATH dataset. In Thirty-fifth
Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).

Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and Jürgen Schmidhuber. 2024. MetaGPT: Meta programming for a multi-agent collaborative framework. In The Twelfth International Conference on Learning Representations.

Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. 2023. LLM-blender: Ensembling large language models with pairwise ranking and generative fusion. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14165–14178, Toronto, Canada. Association for Computational Linguistics.

Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vardhamanan A, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller, Matei Zaharia, and Christopher Potts. 2024. DSPy: Compiling declarative language model calls into state-of-the-art pipelines. In The Twelfth International Conference on Learning Representations.

Sergey Levine, Aviral Kumar, G. Tucker, and Justin Fu. 2020. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. ArXiv, abs/2005.01643.

Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023a. CAMEL: Communicative agents for "mind" exploration of large language model society. In Thirty-seventh Conference on Neural Information Processing Systems.

Moxin Li, Wenjie Wang, Fuli Feng, Yixin Cao, Jizhi Zhang, and Tat-Seng Chua. 2023b. Robust prompt optimization for large language models against distribution shifts. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1539–1554, Singapore. Association for Computational Linguistics.

Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. 2023. Encouraging divergent thinking in large language models through multi-agent debate. ArXiv, abs/2305.19118.

Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv., 55(9).

Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, and Diyi Yang. 2024. A dynamic LLM-powered agent network for task-oriented agent collaboration. In First Conference on Language Modeling.

Ruotian Ma, Xiaolei Wang, Xin Zhou, Jian Li, Nan Du, Tao Gui, Qi Zhang, and Xuanjing Huang. 2024. Are large language models good prompt optimizers? ArXiv, abs/2402.02101.

Nat McAleese, Rai Michael Pokorny, Juan Felipe Cerón Uribe, Evgenia Nitishinskaya, Maja Trebacz, and Jan Leike. 2024. LLM critics help catch LLM bugs. ArXiv, abs/2407.00215.

OpenAI. 2023. GPT-4 technical report.

Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology.

Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. 2023. GrIPS: Gradient-free, edit-based instruction search for prompting large language models. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3845–3864, Dubrovnik, Croatia. Association for Computational Linguistics.
Reid Pryzant, Dan Iter, Jerry Li, Yin Lee, Chenguang Zhu, and Michael Zeng. 2023. Automatic prompt optimization with "gradient descent" and beam search. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7957–7968, Singapore. Association for Computational Linguistics.

Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2024. ChatDev: Communicative agents for software development. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15174–15186, Bangkok, Thailand. Association for Computational Linguistics.

Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, CHI EA '21, New York, NY, USA. Association for Computing Machinery.

Toran Bruce Richards. 2023. AutoGPT.

Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222–4235, Online. Association for Computational Linguistics.

Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik R Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. In Thirty-seventh Conference on Neural Information Processing Systems.

Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Lee Boyd-Graber, and Lijuan Wang. 2023. Prompting GPT-3 to be reliable. In The Eleventh International Conference on Learning Representations.

Hong Sun, Xue Li, Yi Xu, Youkow Homma, Qinhao Cao, Min man Wu, Jian Jiao, and Denis Xavier Charles. 2023. Autohint: Automatic prompt optimization with hint generation. ArXiv, abs/2307.07415.

Wei Tao, Yucheng Zhou, Wenqiang Zhang, and Yu-Xi Cheng. 2024. Magis: LLM-based multi-agent framework for GitHub issue resolution. ArXiv, abs/2403.17927.

Xinyuan Wang, Chenxi Li, Zhen Wang, Fan Bai, Haotian Luo, Jiayou Zhang, Nebojsa Jojic, Eric Xing, and Zhiting Hu. 2024a. PromptAgent: Strategic planning with language models enables expert-level prompt optimization. In The Twelfth International Conference on Learning Representations.

Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. 2024b. Unleashing the emergent cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 257–279, Mexico City, Mexico. Association for Computational Linguistics.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.

Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, and Chi Wang. 2024a. AutoGen: Enabling next-gen LLM applications via multi-agent conversation. In
ICLR 2024 Workshop on Large Language Model (LLM) Agents.

Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, and Chi Wang. 2024b. MathChat: Converse to tackle challenging math problems with LLM agents. In ICLR 2024 Workshop on Large Language Model (LLM) Agents.

Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. 2024. Large language models as optimizers. In The Twelfth International Conference on Learning Representations.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations.

Qinyuan Ye, Mohamed Ahmed, Reid Pryzant, and Fereshte Khani. 2024. Prompt engineering a prompt engineer. In Findings of the Association for Computational Linguistics ACL 2024, pages 355–385, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.

Mert Yuksekgonul, Federico Bianchi, Joseph Boen, Sheng Liu, Zhi Huang, Carlos Guestrin, and James Zou. 2024. TextGrad: Automatic "differentiation" via text. ArXiv, abs/2406.07496.

Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, and Chuang Gan. 2024a. Building cooperative embodied agents modularly with large language models. In The Twelfth International Conference on Learning Representations.

Yao Zhang, Zijian Ma, Yunpu Ma, Zhen Han, Yu Wu, and Volker Tresp. 2024b. WebPilot: A versatile and autonomous multi-agent system for web task execution with strategic exploration. ArXiv, abs/2408.15978.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-bench and chatbot arena.
In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2023. Large language models are human-level prompt engineers. In The Eleventh International Conference on Learning Representations.

Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R. Ashley, Róbert Csordás, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Abed Al Kader Hammoud, Vincent Herrmann, Kazuki Irie, Louis Kirsch, Bing Li, Guohao Li, Shuming Liu, Jinjie Mai, Piotr Piękos, Aditya Ramesh, Imanol Schlag, Weimin Shi, Aleksandar Stanić, Wenyi Wang, Yuhui Wang, Mengmeng Xu, Deng-Ping Fan, Bernard Ghanem, and Jürgen Schmidhuber. 2023. Mindstorms in natural language-based societies of mind. In R0-FoMo: Robustness of Few-shot and Zero-shot Learning in Large Foundation Models.

Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin, and Jürgen Schmidhuber. 2024. GPTSwarm: Language agents as optimizable graphs. In Forty-first International Conference on Machine Learning.

A Role-based Multi-Agent System

Various decision-making structures among agents (Chan et al., 2024; Wu et al., 2024a) have been investigated. Following AgentVerse (Chen et al., 2024), we adopt the vertical decision-making structure as it is a better fit for software development. Inside the vertical structure, given the software task description X, a solver agent S proposes a solution Y_t at iteration t. Other agents, as reviewer agents
R, provide feedback F = {r_t^i = R_i(Y_t) | R_i ∈ R} regarding solution Y_t to the solver agent. Finally, the solver agent refines its solution based on the feedback: Y_{t+1} = S(X, Y_t, F). Such a review iteration can go on for a few rounds. In the current case study, we use a total of 2 reviewer agents and a single review iteration. To determine the concrete role descriptions for the solver and reviewer agents, we utilize a recruiter agent A to select role descriptions for them from a pre-defined expert pool based on the current task description. The agent role description pool we use is adapted from ChatDev (Qian et al., 2024). In our study, we aim to optimize the system prompts of all agents in the vertical decision-making structure. The concrete prompt for the solver agent is shown in Figure 4, and the prompt for the reviewer agents is shown in Figure 5. The system prompts we aim to optimize are shown in red, and we wrap them with <TO IMPROVE> tags to instruct the LLM to optimize only the text between the tags, leaving all other text untouched. For the recruiting stage, the role description pool we use is presented directly in the prompt for the recruiter agent, as shown in Figure 6. Note that we restrict the solver agent to the role of "Programmer who can write/create computer software or applications with extensive computing and coding experience ..." since this is the only role appropriate for writing solution code.

B Prompt for Model-Based Evaluation

We show the evaluation prompts for generating scalar scores and feedback for the functionality, robustness, and test case coverage dimensions in Figure 7, and for generating feedback for the documentation dimension in Figure 8. Critical components in the evaluation prompt include the evaluation dimension's name and definition, the software task description, and the software solution code. We list concrete definitions of all evaluation dimensions we study in Table 3.
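The evaluation prompt in Figure 7 constrains the judge model to emit one line per dimension of the form "1. ${dimension}: a score (from 0 to 10) + <Reasons> ... </Reasons>". A minimal sketch of how such a response might be parsed downstream; the function name and regular expression are our own illustration, not part of the paper's released code:

```python
import re

# Matches one line of the Figure 7 output format, e.g.
# "1. Functionality: 8 <Reasons> covers all task goals </Reasons>"
LINE_RE = re.compile(
    r"\d+\.\s*(?P<dim>[^:]+?):\s*(?P<score>\d+)\s*"
    r"<Reasons>(?P<reasons>.*?)</Reasons>",
    re.DOTALL,
)

def parse_evaluation(text: str) -> dict:
    """Map each evaluated dimension to its (score, reasons) pair."""
    return {
        m["dim"].strip(): (int(m["score"]), m["reasons"].strip())
        for m in LINE_RE.finditer(text)
    }

reply = "1. Functionality: 8 <Reasons> covers all task goals </Reasons>"
print(parse_evaluation(reply))  # {'Functionality': (8, 'covers all task goals')}
```

Anchoring the score inside a fixed line format with an explicit <Reasons> tag keeps the scalar trivially machine-readable while preserving the free-text feedback used later by the locator.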
C Are More Optimization Steps Needed?

We investigate whether more optimization steps are beneficial compared with the current default of 5 steps. We choose the functionality and code style violation dimensions under the online group optimization with one-pass prompting setting and optimize for 10 steps. We plot the optimization curves on the training and development sets in Figure 11. We observe that for both dimensions, the training and development optimization curves tend towards either a stable or a declining trend. This shows that more optimization steps are not necessary in our case study.

System Prompt: You are ${role description}. You are in a multi-agent collaboration environment. You are given a software development task description: ${task description} You will also be given the chat history of you and other teammates (history could be empty). <TO IMPROVE> Your task is to provide a new solution code to the given software task. If the history is not empty, your new solution code must be based on your previous solution and teammates' feedback in the history. </TO IMPROVE> User Prompt: Start your response now and write your code step
by step in ```python markdown quotes.

Figure 4: System and user prompt for the solver agent, whose responsibility is to write the main solution code given a software task description.

System Prompt: You are ${role description}. You are in a multi-agent collaboration environment aiming to solve a given software development task: ${task description}. You are also given the chat history of you and other teammates below. <TO IMPROVE> Based only on your expertise, please provide your feedback on the most recent solution code to the software task given. Ensure your feedback is specific and detailed enough instead of just general opinions. </TO IMPROVE> User Prompt: Start your response now.

Figure 5: System and user prompt for the reviewer agent, whose responsibility is to review the solution code written by the solver agent and provide feedback to the solver agent given a software task description.

System Prompt: User Prompt: You are faced with a software engineer task: ${task description} You are also given a pool of experts:
[Experts pool]
1. Chief executive officer whose main responsibilities include being an active decision-maker on users' demands and other key policy issues, leader, manager, and executor.
2. Chief product officer who is responsible for all product-related matters, including product design, product strategy, product vision, product innovation, project management, and product marketing.
3. Counselor whose main responsibilities include asking what users and customers think and providing valuable suggestions.
4. Chief technology officer who is very familiar with information technology and will make high-level decisions for the overarching technology infrastructure that closely aligns with the organization's goals.
5. Chief human resource officer who oversees all aspects of human resource management and industrial relations policies, practices and operations for an organization.
6.
Programmer who can write/create computer software or applications with extensive computing and coding experience in many varieties of programming languages and platforms, such as Python, Java, C, C++, HTML, CSS, JavaScript, XML, SQL, PHP, etc.
7. Code reviewer who can help programmers assess source codes for software troubleshooting, fix bugs to increase code quality and robustness, and offer proposals to improve the source codes.
8. Software test engineer who can use the software as intended to analyze its functional properties, design manual and automated test procedures to evaluate each software product, build and implement software evaluation test programs, and run test programs to ensure that testing protocols evaluate the software correctly.
9. Chief creative officer who directs the company's creative software and develops the artistic design strategy that defines the company's brand.
[/Experts pool]
You need to recruit a total of ${number of agents} experts from the above expert pool to collaboratively solve the given task. The first expert member you recruit must always be the programmer (index 6) since the programmer is responsible for developing the software code. The remaining expert members are responsible for providing feedback on the software code developed by the programmer. You can only select each expert once. Please use a comma to separate selected expert member indices without space. Always put the programmer at the beginning. Don't provide any reason
for your selection. For example, if you recruit programmer, chief technology officer, and chief creative officer, you should output: 6,4,9

Figure 6: System and user prompt we use for the role selection agent for selecting concrete roles for solver and reviewer agents.

System Prompt: You are a professional and strict code reviewer. User Prompt: You will evaluate the solution code to a software engineer task from the following dimensions: ${dimension name}: ${dimension definition} The task description is: ${task description} The solution code is: [Solution code] ${solution code} [/Solution code] Now, you need to give a rating from 0 to 10 with detailed reasons for your rating regarding the evaluation dimensions. You must only output one line which contains the score and detailed reasons why you gave this score. You must put detailed reasons inside the <Reasons> tag. The exact output format you need to follow is shown below: 1. ${dimension}: a score (from 0 to 10) + <Reasons> detailed reasons </Reasons> Make sure your score and explanations of the score are only based on the evaluation dimension, not anything else. Also, make sure that you only give a high score when the solution code is really good at satisfying the definition of the evaluation dimension.

Figure 7: Evaluation system and user prompt for generating utility scores and textual feedback for the functionality, robustness, and test case coverage dimensions.

Dimension Name | Dimension Definition
Functionality | Is the code able to achieve all the goals specified in the task description?
Robustness | Is the code snippet able to handle different unexpected inputs or other exceptions?
Test Case Coverage | Does the solution contain test cases to cover all the software solution code?
Documentation | Does the solution code contain enough comments or docstrings to explain itself?
Code Style Violation | Does the solution code follow code style conventions defined in PEP 8?
Table 3: Concrete definitions of all evaluation dimensions we study in our work.

System Prompt: You are a professional and strict code reviewer. User Prompt: You will evaluate the solution code to a software engineer task from the following criterion: A good solution code always comes with abundant comments and docstrings to explain the purpose and functionality of each class, method, and function. The task description is: ${task description} The solution code is: [Solution code] ${solution code} [/Solution code] You must use detailed natural language to describe how the solution code performs in terms of the above evaluation criterion. Make sure your evaluation is only related to the above evaluation criterion, nothing else.

Figure 8: Evaluation system and user prompt for generating textual feedback for the documentation dimension.

System Prompt: You are a professional error agent locator who can accurately identify agents not functioning well in a multi-agent collaboration system. User Prompt: You are given a software engineer task description, a group of agents that collaboratively solve this task with their communication trajectory, and accurate external feedback to the **final** solution code regarding some evaluation dimensions:
[Task description] ${task description} [/Task description]
[Agents with their role descriptions]
Agent 1: ${agent 1
role description}
Agent 2: ${agent 2 role description}
Agent 3: ${agent 3 role description}
...
${high-level responsibilities of each agents}
[/Agents with their role descriptions]
[Agents communication trajectory] ${communication trajectory} [/Agents communication trajectory]
[Feedback to final solution code] ${evaluation score with feedback} [/Feedback to final solution code]
You can assume all agents can possibly make mistakes when writing solution code or providing feedback to code, but the external feedback to the final solution is objective and robust. If the above external feedback to the final solution code contains negative feedback, please identify all agent(s) causing the negative feedback and provide detailed explanations of why each identified agent leads to the negative feedback. Explanations must satisfy the following criteria:
- If the identified agent improves itself based on the explanations, the negative feedback can be somehow mitigated, thus increasing the evaluation dimension score according to the definition of the evaluation dimension.
- Make sure the explanations are specific and detailed instead of just general explanations.
- Do not simply use 'agent 1' or 'agent 2' to describe agents in the explanations; instead, use their role descriptions, such as programmer and chief product officer, to describe.
- Explanations should be based on what already happened in the trajectory instead of asking for more interactions between agents in the future.
First, output the identified agent index, then provide detailed explanations. For example, if two agents, the programmer agent (Agent 1) and the code reviewer agent (Agent 3), are identified, the output must be two lines following the format below:
1. Agent 1: detailed explanations of why programmer leads to negative feedback
2.
Agent 3: detailed explanations of why code reviewer leads to negative feedback
If the feedback is completely positive, you can return a string of "None", meaning no agent is making mistakes.

Figure 9: System and user prompt for the locator.

System Prompt: You are a professional system prompt optimizer for large language model agents. Don't be afraid to be creative. User Prompt: Below are the input system message, input user messages, and output of a large language model agent:
[Agent system message] ${agent system prompt} [/Agent system message]
[Agent user messages] ${agent user prompts} [/Agent user messages]
[Agent output] ${agent output} [/Agent output]
Below is the feedback for the above large language model agent activity trajectory:
[Fine-grained feedback] ${agent fine-grained feedback} [/Fine-grained feedback]
Based on the above feedback for this large language model agent, please ONLY update the system prompt wrapped between the <TO IMPROVE> tag in the agent system message so that the agent can improve based on the feedback. You need to carefully think about what is useful in the feedback for you to optimize the agent system prompt and make sure that this optimization not only benefits the current software development task but can also generally benefit other software development tasks in the future. You must keep the new prompt clear, concise, informative, and descriptive. Also, make sure not to change the agent's original goal. Since your output will directly replace the text wrapped between the <TO
IMPROVE> tag in the system prompt, make sure you ONLY output the improved prompt. DO NOT output anything else, such as the <TO IMPROVE> tag.

Figure 10: System and user prompt for the optimizer.

Functionality: As a Programmer, your task is to provide a comprehensive solution to the given software task. Your solution should be versatile, capable of handling different sports, player positions, and strategies. It should also allow users to drag and drop players to specific positions and add notes and annotations to each play. Ensure that all methods outlined in the initial structure are fully implemented and functional. Pay special attention to the user interface and ensure it is user-friendly. Test your code to ensure it works as expected and meets all the requirements of the task description. Additionally, ensure that all functionalities mentioned in the task description are implemented, and consider the user experience when designing the interface and functionality of the software. After receiving feedback from other agents, make sure to incorporate their suggestions into your final solution.

Robustness: As a Programmer, your task is to provide a new solution code to the given software task. If the history is not empty, your new solution code must be based upon your previous solution and teammates' feedback in the history. While developing your solution, ensure to incorporate robust error handling, especially for user inputs in any user interface elements. Consider all possible edge cases and potential user errors to enhance the robustness of your solution. Additionally, pay attention to the feedback from your teammates regarding the robustness of your code, particularly in areas such as error handling, input validation, and handling of unexpected exceptions. Also, consider the lifecycle of any temporary files created during the process and ensure they are properly managed to prevent unnecessary storage usage.
Remember to think thoroughly about the different ways the software could be used or misused, and add appropriate error handling and input validation to cover these scenarios. Furthermore, consider the integrity and validity of the data being processed, including checks for invalid or duplicate inputs, and robust handling of data storage and retrieval.

Violation: As a programmer, your task is to provide a new solution code to the given software task. If the history is not empty, your new solution code must be based upon your previous solution and teammates' feedback in the history. While writing the code, ensure that it adheres to the PEP8 style guide, especially the rule about maximum line length. Break down long lines of code, including comments and docstrings, into multiple lines to improve readability and maintainability. Use a linter or code formatter to automatically check and correct your code style. Also, manually check your code against the PEP8 style guide before submitting it. In addition, when revising your code based on feedback, make sure to address all the points raised by your teammates, especially those related to code style and organization.

Table 4: We list optimized system prompts for the solver agent whose role is described as a programmer along the functionality, robustness, and code violation
evaluation dimensions. Green text indicates agent prompts being optimized towards the evaluation dimension; however, we observe a problem in that optimization tends to generate instance-specific prompts, which are not generally useful across the entire dataset, as shown in the red text.

Functionality: As a Software Test Engineer, your task is to provide detailed and specific feedback on the most recent solution code to the software task given. Your feedback should focus on the functionality, security, and versatility of the software. Test the software with different sports, player positions, and strategies, and with different user inputs. Also, test the user interface to ensure it is user-friendly and intuitive. Your feedback will guide the programmer in improving the solution and ensuring it meets the requirements of a wide range of users. Additionally, ensure your testing procedures cover all aspects of the task description and user experience, and provide more detailed feedback on any missing functionalities or areas for improvement. After providing feedback, ensure that the programmer has understood and incorporated your suggestions into the final solution.

Robustness: As a Software Test Engineer, your task is to provide detailed and specific feedback on the most recent solution code to the software task given. Your feedback should focus on potential areas of robustness that may have been overlooked, such as error handling, input validation, and handling of unexpected exceptions. Also, consider the lifecycle of any temporary files created during the process and ensure they are properly managed to prevent unnecessary storage usage. Your feedback should help to ensure that the final solution is as robust and reliable as possible. You should have a deeper understanding of the code and the potential edge cases and error scenarios, and provide more detailed and specific feedback to the programmer agent.
Furthermore, consider the integrity and validity of the data being processed, and suggest improvements to the data handling processes, including checks for invalid or duplicate inputs, and robust handling of data storage and retrieval.

Violation: As a Software Test Engineer, your role is to provide specific and detailed feedback on the most recent solution code to the software task given. While focusing on the functionality and robustness of the software, also consider the readability and maintainability of the code. Review the code for adherence to style guides like PEP8, including line length, and provide feedback on this aspect. If you notice any violations, point them out and suggest possible corrections. Also, consider the feedback from other teammates and reinforce any important points they might have raised.

Table 5: We list optimized system prompts for the reviewer agent whose role is described as a software test engineer along the functionality, robustness, and code violation evaluation dimensions. Green text indicates agent prompts being optimized towards the evaluation dimension; however, we observe a problem in that optimization tends to generate instance-specific prompts, which are not generally useful across the entire dataset, as shown in the red text.

[Figure 11 (partial): optimization curves of evaluation scores on the training and development sets for the functionality dimension.]
arXiv:2505.16088v2 [cs.CL] 25 May 2025

Date Fragments: A Hidden Bottleneck of Tokenisation for Temporal Reasoning

Gagan Bhatia1 Maxime Peyrard2 Wei Zhao1
1University of Aberdeen 2Université Grenoble Alpes & CNRS
{g.bhatia.24,wei.zhao}@abdn.ac.uk

Abstract

Modern BPE tokenisers often split calendar dates into meaningless fragments, e.g., “20250312” → “202”, “503”, “12”, inflating token counts and obscuring the inherent structure needed for robust temporal reasoning. In this work, we (1) introduce a simple yet interpretable metric, termed date fragmentation ratio, that measures how faithfully a tokeniser preserves multi-digit date components; (2) release DATEAUGBENCH, a suite of 6500 examples spanning three temporal reasoning tasks: context-based date resolution, format-invariance puzzles, and date arithmetic across historical, contemporary, and future time periods; and (3) through layer-wise probing and causal attention-hop analyses, uncover an emergent date-abstraction mechanism whereby large language models stitch together the fragments of month, day, and year components for temporal reasoning. Our experiments show that excessive fragmentation correlates with accuracy drops of up to 10 points on uncommon dates like historical and futuristic dates. Further, we find that the larger the model, the faster the emergent date abstraction heals date fragments. Lastly, we observe a reasoning path that LLMs follow to assemble date fragments, typically differing from human interpretation (year → month → day). Our datasets and code are made publicly available here.

1 Introduction

Understanding and manipulating dates is a deceptively complex challenge for modern large language models (LLMs).
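The fragmentation in the abstract's example can be reproduced with a toy greedy longest-match segmenter. The merge vocabularies below are invented purely for illustration; real BPE merge tables are learned from corpus statistics, so actual splits vary by tokeniser:

```python
def segment(s: str, vocab: set, max_len: int = 4) -> list:
    """Greedy left-to-right longest-match segmentation.

    Falls back to single characters when no vocabulary entry matches,
    loosely mimicking how a tokeniser covers unseen spans.
    """
    tokens, i = [], 0
    while i < len(s):
        for j in range(min(len(s), i + max_len), i, -1):
            piece = s[i:j]
            if piece in vocab or len(piece) == 1:
                tokens.append(piece)
                i = j
                break
    return tokens

# Multi-digit merges that cross the year/month/day field boundaries:
print(segment("20250312", {"202", "503", "12"}))   # ['202', '503', '12']
# A vocabulary whose merges align with the fields keeps them intact:
print(segment("20250312", {"2025", "03", "12"}))   # ['2025', '03', '12']
```

The same digit string thus yields field-aligned or field-crossing tokens depending solely on which merges the vocabulary happens to contain, which is exactly the source of variation the date fragmentation ratio is designed to quantify.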
Unlike ordinary words, dates combine numeric and lexical elements in rigidly defined patterns, ranging from compact eight-digit strings such as 20250314 to more verbose forms like “March 14, 2025” or locale-specific variants such as “14/03/2025.” Yet despite their structured nature, these date expressions often fall prey to subword tokenisers that fragment them into semantically meaningless pieces.

[Figure 1: Internal processing of dates for temporal reasoning, e.g., “Is 28052025 the same date as 28th of May 2025? Answer: Yes”. Here F = 0.4 is the date fragmentation ratio.]

A tokeniser that splits “2025-03-14” into “20”, “25”, “-0”, “3”, “-1”, “4” not only inflates the token count but also severs the natural boundaries of year, month, and day. This fragmentation obscures temporal cues and introduces a hidden bottleneck: even state-of-the-art LLMs struggle to resolve, compare, or compute dates accurately when their internal representations have been so badly fragmented. This issue has a critical impact on real-world applications: mis-tokenised dates can undermine scheduling and planning workflows, leading to erroneous calendar invites or appointments (Vasileiou and Yeoh, 2024). They can skew forecasting models in domains ranging from time-series analysis (Tan et al., 2024; Chang et al., 2023) to temporal knowledge graph reasoning (Wang et al., 2024). In digital humanities and historical scholarship, incorrect splitting of date expressions may corrupt timelines and misguide interpretative analyses (Zeng, 2024). As LLMs are increasingly deployed in cross-temporal applications, such as climate projection (Wang and Karimi, 2024), economic
forecasting (Carriero et al., 2024; Bhatia et al., 2024), and automated curriculum scheduling (Vasileiou and Yeoh, 2024), the brittleness introduced by subword fragmentation poses a risk of propagating temporal biases and inaccuracies into downstream scientific discoveries and decision-making systems (Tan et al., 2024). In this work, we provide a pioneering outlook on the impact of date tokenisation on downstream temporal reasoning. Figure 1 illustrates how dates are processed internally for temporal reasoning. Our contributions are summarized as follows:

(i) We introduce DATEAUGBENCH, a benchmark dataset comprising 6,500 examples with 21 date formats. It is leveraged to evaluate a diverse array of LLMs from 8 model families in three temporal reasoning tasks.

(ii) We present the date fragmentation ratio, a metric that measures how fragmented the tokenisation outcome is compared to the actual year, month, and day components. We find that the fragmentation ratio generally correlates with temporal reasoning performance, namely that the more fragmented the tokenisation, the worse the reasoning performance.

(iii) We analyse internal representations by tracing how LLMs “heal” fragmented date embeddings in their layer stack, an emergent ability that we term date abstraction. We find that larger models can quickly compensate for date fragmentation at early layers to achieve high accuracy for date equivalence reasoning.

(iv) We leverage causal analysis to interpret how LLMs stitch date fragments together for temporal reasoning. Our results show that LLMs follow a reasoning path that is typically not aligned with human interpretation (year → month → day), but rely on subword fragments that statistically represent year, month, and day, and stitch them in a flexible order that is subject to date formats.
Our work fills the gap between tokenisation research (Goldman et al., 2024; Schmidt et al., 2024) and temporal reasoning (Su et al., 2024; Fatemi et al., 2024), and we suggest future work consider date-aware vocabularies and adaptive tokenisers to ensure that date components remain intact.

2 Related Works

Tokenisation as an information bottleneck. Recent scholarship interrogates four complementary facets of sub-word segmentation: (i) tokenisation fidelity, i.e. how closely a tokeniser preserves semantic units: large empirical studies show that higher compression fidelity predicts better downstream accuracy in symbol-heavy domains such as code, maths, and dates (Goldman et al., 2024; Schmidt et al., 2024); (ii) numeric segmentation strategies that decide between digit-level or multi-digit units: previous work demonstrates that the choice of radix (single digits versus 1-3 digit chunks) induces stereotyped arithmetic errors and can even alter the complexity class of the computations LLMs can realise (Singh and Strouse, 2024; Zhou et al., 2024); (iii) probabilistic or learnable tokenisers whose segmentations are optimised jointly with the language model: theory frames tokenisation as a stochastic map whose invertibility controls whether maximum-likelihood estimators over tokens are consistent with the underlying word distribution (Gastaldi et al., 2024; Rajaraman et al., 2024); and (iv) pre-/post-tokenisation adaptations that retrofit a model with a new vocabulary: Zheng et al. (2024) introduce an adaptive tokeniser that co-evolves with the language model, while Liu et al. (2025) push beyond the
“sub-word” dogma with SuperBPE, a curriculum that first learns subwords and then merges them into cross-whitespace “superwords”, cutting average sequence length by 27%. Complementary studies expose and correct systematic biases introduced by segmentation (Phan et al., 2024) and propose trans-tokenisation to transfer vocabularies across languages without re-training the model from scratch (Remy et al., 2024). Our work builds on these insights but zooms in on calendar dates, a hybrid of digits and lexical delimiters whose multi-digit fields are routinely shredded by standard BPE, obscuring cross-field regularities crucial for temporal reasoning.

Temporal reasoning in large language models. Despite rapid progress on chain-of-thought and process-supervised reasoning, temporal cognition remains a conspicuous weakness of current LLMs. Benchmarks such as TIMEBENCH (Chu et al., 2024), TEMPREASON (Tan et al., 2023), TEST-OF-TIME (Fatemi et al., 2024), MENATQA (Wei et al., 2023), and TIMEQA (Chen et al., 2021) reveal large gaps between model and human performance across ordering, arithmetic, and co-temporal inference. Recent modelling efforts attack the problem from multiple angles: temporal-graph abstractions (Xiong et al., 2024), instruction-tuned specialists such as TIMO (Su et al., 2024), pseudo-instruction augmentation for multi-hop QA (Tan et al., 2023), and alignment techniques that re-ground pretrained models to specific calendar years (Zhao et al., 2024). Yet these approaches assume a faithful internal representation of the input dates themselves. By introducing the notion of date fragmentation and demonstrating that heavier fragmentation predicts up to ten-point accuracy drops on DATEAUGBENCH, we uncover a failure mode that is orthogonal to reasoning algorithms or supervision: errors arise before the first transformer layer, at the level of subword segmentation.
Addressing this front-end bottleneck complements existing efforts to further improve LLMs for temporal reasoning.

3 DateAugBench

We introduce DATEAUGBENCH, a benchmark designed to isolate the impact of date tokenisation on temporal reasoning in LLMs. DATEAUGBENCH comprises 6,500 augmented examples drawn from two established sources, TIMEQA (Chen et al., 2021) and TIMEBENCH (Chu et al., 2024), distributed across three task splits (see Table 1). Across all the splits, our chosen date formats cover a spectrum of common regional conventions (numeric with slashes, dashes, or dots; concatenated strings; two-digit versus four-digit years) and deliberately introduce fragmentation for atypical historical (e.g. “1799”) and future (e.g. “2121”) dates. This design enables controlled measurement of how tokenisation compression ratios and subsequent embedding recovery influence temporal reasoning performance.

Context-based task. In the Context-based split, we sample 500 question–context pairs from TIMEQA, each requiring resolution of a date mentioned in the passage (e.g. Which team did Omid Namazi play for in 06/10/1990?). Every date expression is systematically rendered in six canonical serialisations, including variants such as MM/DD/YYYY, DD-MM-YYYY, YYYY.MM.DD, and concatenations without delimiters, yielding 3,000 examples that jointly probe tokenisation fragmentation and contextual grounding.

Simple Format Switching task. The Simple Format Switching set comprises 150 unique date pairs drawn from TIMEBENCH, posed as binary same-day recognition questions (e.g. “Are 20251403 and 14th March 2025 referring to the same date?”). Each pair
https://arxiv.org/abs/2505.16088v2
is presented in ten different representations, spanning slash-, dash-, and dot-delimited formats, both zero-padded and minimally notated, to stress-test format invariance under maximal tokenisation drift. This produces 1,500 targeted examples of pure format robustness. We also include examples where the dates are not equivalent, complicating the task.

Date Arithmetic task. The Date Arithmetic split uses 400 arithmetic instances from TIMEBENCH (e.g. What date is 10,000 days before 5/4/2025?). The base date is serialised in five distinct ways—from month-day-year and year-month-day with various delimiters to compact eight-digit forms. This results in 2,000 examples that examine the model’s ability to perform addition and subtraction of days, weeks, and months under varying degrees of token fragmentation.

4 Experiment Design

4.1 Date Tokenisation

Tokenisers. For tokenisation analysis, we compare a deterministic, rule-based baseline tokeniser against model-specific tokenisers. The baseline splits each date into its semantic components—year, month, day or Julian day—while preserving original delimiters. For neural models, we invoke either the OpenAI tiktoken encodings (for gpt-4, gpt-3.5-turbo, gpt-4o, text-davinci-003) or Hugging Face tokenisers for open-source checkpoints. Every date string is processed to record the resulting sub-tokens, token count, and reconstructed substrings.

Distance metric. To capture divergence from the ideal, we define a distance metric θ between a model’s token distribution and the baseline’s:

θ(t, b) = 1 − (t · b) / (|t| |b|),   (1)

where t and b are vectors of sub-token counts for the model and baseline, respectively. A larger θ indicates greater sub-token divergence.

Date fragmentation ratio. Building on θ, we introduce the date fragmentation ratio F, which quantifies how fragmented a tokeniser’s output is relative to the baseline.
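Concretely, Eq. (1) is a cosine distance over sub-token count vectors. Below is a minimal sketch; building the count vectors over the union of the two sub-token inventories is our assumption, since the paper does not specify its implementation:

```python
import math
from collections import Counter

def subtoken_distance(model_tokens, baseline_tokens):
    """Cosine distance between sub-token count vectors, as in Eq. (1)."""
    t, b = Counter(model_tokens), Counter(baseline_tokens)
    # Dot product and norms over the union of observed sub-tokens.
    dot = sum(t[w] * b[w] for w in set(t) | set(b))
    norm_t = math.sqrt(sum(v * v for v in t.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_t == 0 or norm_b == 0:
        return 1.0
    return 1.0 - dot / (norm_t * norm_b)

# A fully divergent segmentation (no shared sub-tokens) gives the maximum distance:
print(subtoken_distance(list("10271606"), ["10", "27", "1606"]))  # 1.0
```

Identical segmentations yield a distance of (numerically almost exactly) zero, so θ grows only when the model’s sub-token inventory drifts away from the baseline’s.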
We initialise F = 0.0 for a perfectly aligned segmentation and add penalties according to observed discrepancies: a 0.10 penalty if the actual year/month/day components are fragmented (i.e., 1_split = 1), a 0.10 penalty if original delimiters are lost (i.e., 1_delimiter = 1), a 0.05 penalty multiplied by the token count difference |N − N_b| between a tokeniser and the baseline, and a 0.30 × θ penalty for distributional divergence. The resulting F ∈ [0, 1] provides an interpretable score: values close to 0 denote minimal fragmentation, and values near 1 indicate severe fragmentation.

F = 0.10 × 1_split + 0.10 × 1_delimiter + 0.05 × |N − N_b| + 0.30 × θ   (2)

Dataset and Task        # Formats   # Raw   Size   Evaluation Example                                             GT
Context based               6        500    3000   Which team did Omid Namazi play for in 06/10/1990?             Maryland Bays
Date Format Switching      10        150    1500   Are 20251403 and March 14th 2025 referring to the same date?   Yes
Date Arithmetic             5        400    2000   What date is 10,000 days before 5/4/2025?                      18 November 1997; 17 December 1997
Total                      21       1500    6500

Table 1: Overview and examples of task splits in DATEAUGBENCH.

This date fragmentation ratio is pivotal because tokenisation inconsistencies directly impair a model’s ability to represent and reason over temporal inputs. When date strings are split non-intuitively, models face inflated token sequences and fragmented semantic cues, potentially leading to errors in tasks such as chronological comparison, date arithmetic, and context-based resolution.

4.2 Temporal Reasoning Evaluation

Models. We evaluate a spectrum of models ranging from 0.5 B to 14 B
parameters: five open-source Qwen 2.5 models (0.5 B, 1.5 B, 3 B, 7 B, 14 B) (Yang et al., 2024), two Llama 3 models (3 B, 8 B) (Touvron et al., 2023), and two OLMo (Groeneveld et al., 2024) models (1 B, 7 B). For comparison with state-of-the-art closed models, we also query the proprietary GPT-4o and GPT-4o-mini endpoints via the OpenAI API (OpenAI et al., 2024).

LLM-as-a-judge. To measure how date tokenisation affects downstream reasoning, we employ an LLM-as-judge framework using GPT-4o. For each test instance in DATEAUGBENCH, we construct a JSONL record that includes the question text, the model’s predicted answer, and a set of acceptable gold targets to capture all semantically equivalent date variants (e.g., both “03/04/2025” and “April 3, 2025” can appear in the gold label set). This record is submitted to GPT-4o via the OpenAI API with a system prompt instructing it to classify the prediction as CORRECT, INCORRECT, or NOT ATTEMPTED.

Figure 2: Illustration of how LLMs with various model sizes process dates. TCP means Tokenization Compensation Point, defined as the first layer at which LLMs achieve above-chance accuracy (see details in Sec. 6).

A prediction is deemed CORRECT if it fully contains any one of the gold target variants without contradiction; INCORRECT if it contains factual errors relative to all gold variants; and NOT ATTEMPTED if it omits the required information. We validate GPT-4o’s reliability by randomly sampling 50 judged instances across all splits and obtaining independent annotations from four student evaluators.
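As a sketch, the record construction and the containment part of the CORRECT criterion can be approximated locally as follows; the real pipeline delegates the full judgement (including the NOT ATTEMPTED case) to GPT-4o, and the record field names here are illustrative, not the authors’ schema:

```python
import json

def build_record(question, prediction, gold_variants):
    """One JSONL line for the judge; keys are illustrative placeholders."""
    return json.dumps({
        "question": question,
        "predicted_answer": prediction,
        "gold_targets": gold_variants,  # all semantically equivalent date renderings
    })

def contains_gold(prediction, gold_variants):
    """CORRECT requires the prediction to contain at least one gold variant."""
    return any(g.lower() in prediction.lower() for g in gold_variants)

gold = ["03/04/2025", "April 3, 2025"]  # equivalent variants from the text
print(contains_gold("The event took place on April 3, 2025.", gold))  # True
```

A deterministic check like this only covers exact containment; deciding whether a free-form answer contradicts the gold date, or merely fails to attempt it, is what the GPT-4o judge is used for.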
In 97% of cases, GPT-4o’s judgments of model answers agree with the averaged human judgments across the four student evaluators, with an inter-annotator agreement of Cohen’s κ = 0.89, affirming the reliability of our automatic evaluation setup.

4.3 Internal Representations

Layerwise probing. We use four Qwen2.5 (Yang et al., 2024) model checkpoints (0.5B, 1.5B, 3B, and 7B parameters) to trace how temporal information is processed internally across different layers. During inference, each question is prefixed with a fixed system prompt and a chain-of-thought cue, then passed through the model in evaluation mode. At each layer i, we extract the hidden-state vector corresponding to the final token position, yielding an embedding h_i ∈ R^d for that layer. Repeating over all examples produces a collection of layerwise representations for positive and negative cases. We then quantify the emergence of temporal reasoning by training lightweight linear probes on these embeddings. For layer i, the probe is trained to distinguish “same-date” (positive) vs “different-date” (negative) examples. To identify when the model’s date understanding is achieved, we define the tokenisation compensation point as the layer at which the model’s representation correctly represents the date in the given prompt. We experiment with this idea across various
model sizes, aiming to test our hypothesis: larger models would recover calendar-level semantics from fragmented tokens at earlier stages, i.e., tokenisation compensation is accomplished at early layers, as illustrated in Figure 2.

Causal attention-hop analysis. We introduce a framework intended to understand in which order date fragments are stitched together for LLMs to answer a temporal question. Figure 1 depicts the idea of our framework: given an input prompt requiring a date resolution (e.g., “Is 28052025 the same date as 28th of May 2025?”), we define two sets of tokens: (1) concept tokens corresponding to year, month, and day fragments, and (2) decision tokens corresponding to the model answer (“yes” or “no”). Our framework aims to identify a stitching path for temporal reasoning, or reasoning path for short. A reasoning path is defined as a sequence of tokens containing date fragments and the model answer1. Given that there are multiple potential paths, we score each path and select the highest-scoring one as the LLM’s reasoning path for the given prompt. To score a reasoning path, our idea is the following: we identify when a date fragment or model answer is activated, by which input token and at which layer, and then determine how important each input token is for the date fragment and model answer. Our idea is implemented using two approaches: (i) next token prediction (§A.2.1): how likely a date fragment and model answer follows a given input token, and (ii) token importance (§A.2.2): how important an input token is to a date fragment and model answer (by replacing the input token with a random token). Lastly, we combine the results of the two approaches to yield the final score of a reasoning path (§A.2.3). This causal framework not only pinpoints where

1The idea of reasoning paths was introduced by Lindsey et al.
(2025), which we leverage to interpret how LLMs address date fragments for temporal reasoning.

and when date fragments are activated, but also in which order they are stitched together to yield the model answer.

5 Experiment Results

5.1 Date fragmentation

Model     Past   Near Past   Present   Future   Avg
Baseline  0.00   0.00        0.00      0.00     0.00
OLMo      0.15   0.14        0.07      0.25     0.15
GPT-3     0.17   0.14        0.06      0.25     0.16
Llama 3   0.29   0.28        0.27      0.30     0.29
GPT-4o    0.32   0.31        0.22      0.30     0.29
GPT-3.5   0.47   0.22        0.26      0.36     0.33
GPT-4     0.36   0.26        0.29      0.39     0.33
Qwen      0.58   0.55        0.49      0.58     0.55
Gemma     0.58   0.55        0.49      0.58     0.55
DeepSeek  0.58   0.55        0.49      0.58     0.55
Llama 2   0.63   0.63        0.63      0.63     0.63
Phi       0.63   0.63        0.63      0.63     0.63

Table 2: Date fragmentation ratio across models and data splits over time. In case a family of model variants (Qwen, Gemma, DeepSeek and Phi) uses the same tokeniser, only the family name is referenced.

Models         Context Rlt   Fmt Switch   Date Arth.   Avg.
GPT-4o-mini    53.20         95.66        56.67        68.51
OLMo-2-7B      32.13         97.24        64.72        64.70
Qwen2.5 14B    47.56         94.56        51.35        64.49
Qwen2.5 7B     39.56         91.24        40.56        57.12
Qwen2.5 3B     25.45         90.10        39.45        51.67
LLama3.1 8B    26.20         90.22        34.50        50.31
Qwen2.5 1.5B   21.32         89.65        32.34        47.77
Qwen2.5 0.5B   10.23         88.95        31.32        43.50
OLMo-2-1B       9.26         90.09        25.90        41.75
LLama3.2 3B     9.51         88.45        23.66        40.54

Table 3: Average accuracies per task. Context Rlt stands for context-based resolution, Fmt Switch refers to format switching, and Date Arth. refers to date arithmetic.

Cross-temporal performance. Table 2 reports the mean date fragmentation ratio across four time periods—Past (pre-2000), Near Past (2000–2009), Present (2010–2025), and Future (post-2025)—for each evaluated model. A ratio of 0.00 signifies perfect alignment with our rule-based baseline tokeniser, whereas higher values indicate progressively greater fragmentation. The rule-based Baseline unsurprisingly attains the minimal ratio of 0.00 in all periods, serving as a lower bound. Among neural architectures, OLMo (Groeneveld et al., 2024) demonstrates the highest robustness, with an average fragmentation ratio of 0.15, closely followed by GPT-3 at 0.16. Both maintain strong fidelity across temporal splits, although performance dips modestly in the Future category (0.25), reflecting novel token sequences not seen during pre-training.

Model      Tokenised output      Frag-ratio
Baseline   10 27 1606            0.00
OLMo       10 27 16 06           0.34
Llama 3    102 716 06            0.40
GPT-3      1027 16 06            0.40
GPT-4o     102 716 06            0.40
Gemma      1 0 2 7 1 6 0 6       0.55
DeepSeek   1 0 2 7 1 6 0 6       0.55
Cohere     1 0 2 7 1 6 0 6       0.55
Qwen       1 0 2 7 1 6 0 6       0.55
Phi 3.5    _ 1 0 2 7 1 6 0 6     0.60
Llama 2    _ 1 0 2 7 1 6 0 6     0.60

Table 4: Tokenisation of the MMDDYYYY string “10271606” across models.

Impact of subtoken granularity. A closer look at sub-token granularity in Table 4 further explains these trends. Llama 3 (Touvron et al., 2023) and the GPT (OpenAI et al., 2023) families typically segment each date component into three-digit sub-tokens (e.g., “202”, “504”, “03”), thus preserving the semantic unit of “MMDDYYYY” as compact pieces. OLMo (Groeneveld et al., 2024) splits the date tokens into two-digit tokens (e.g., “20”, “25”).
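The fragmentation ratio of Eq. (2) can be sketched on Table 4 style examples as follows. The indicator conventions (how a broken field or a lost delimiter is detected) are our reading of the text rather than the authors’ code, so the resulting values need not match Table 4 exactly:

```python
import math
from collections import Counter

def theta(model_tokens, baseline_tokens):
    """Cosine distance between sub-token count vectors (Eq. 1)."""
    t, b = Counter(model_tokens), Counter(baseline_tokens)
    dot = sum(t[w] * b[w] for w in set(t) | set(b))
    nt = math.sqrt(sum(v * v for v in t.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (nt * nb) if nt and nb else 1.0

def fragmentation_ratio(model_tokens, baseline_tokens, delimiters=""):
    """Eq. 2: F = 0.10*split + 0.10*delim + 0.05*|N - Nb| + 0.30*theta, clamped to [0, 1]."""
    split = 0 if set(baseline_tokens) <= set(model_tokens) else 1  # date fields broken up?
    joined = "".join(model_tokens)
    delim = 1 if any(d not in joined for d in delimiters) else 0   # original delimiters lost?
    f = (0.10 * split + 0.10 * delim
         + 0.05 * abs(len(model_tokens) - len(baseline_tokens))
         + 0.30 * theta(model_tokens, baseline_tokens))
    return min(max(f, 0.0), 1.0)

baseline = ["10", "27", "1606"]  # rule-based segmentation of "10271606"
print(fragmentation_ratio(baseline, baseline))          # 0.0: perfectly aligned
print(fragmentation_ratio(list("10271606"), baseline))  # higher: single-digit shredding
```

The clamp keeps F in [0, 1] even when the token count difference alone would push the sum beyond 1.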
By contrast, Qwen (Yang et al., 2024) and Gemma (Team et al., 2024) models break dates into single-digit tokens (e.g., “2”, “5”), whereas Phi (Abdin et al., 2024) divides them into single-digit tokens preceded by an initial token (e.g. “_”, “2”, “0”, “2”, “5”), inflating the token count. Although single-digit tokenisation can enhance models’ ability to perform arbitrary numeric manipulations (by treating each digit as an independent unit), it comes at the expense of temporal abstraction: the tight coupling between day, month, and year is lost, inflating the compression penalty and increasing the θ divergence from the baseline.

5.2 DATEAUGBENCH Evaluation

Performance on temporal reasoning tasks. We compare model accuracies on three tasks: Context-based Resolution, Format Switching, and Date Arithmetic (see Table 3). All models effectively solve Format Switching (e.g. 97.2% for OLMo-2-7B, 95.7% for GPT-4o-mini, 94.6% for Qwen2.5-14B, 90.2% for Llama3.1-8B). By contrast, Context Resolution and Arithmetic remain challenging: GPT-4o-mini scores 53.2% and 56.7%, Qwen2.5-14B 47.6% and 51.4%, Llama3.1-8B 26.2% and
34.5%, and OLMo-2-7B 32.1% and 64.7%, respectively. The fact that arithmetic performance consistently exceeds resolution suggests that, given a correctly tokenised date, performing addition or subtraction is somewhat easier than resolving the date within free text—which requires encyclopedic knowledge.

Figure 3: Date fragmentation ratio versus date resolution accuracy, stratified by four time periods and six LLMs: OLMo, Llama 3, GPT-4o, Qwen, Gemma, Phi.

Figure 4: Date fragmentation ratio versus date resolution accuracy, stratified by six formats and six LLMs.

Correlating date fragmentation with model accuracy over time. Figure 3 plots date fragmentation ratio against resolution accuracy, with 24 data points across six models and four temporal splits. Accuracy rises as we move from Past (1600–2000) to Near Past (2000–2009) and peaks in the Present (2010–2025), mirroring the negative correlation between fragmentation and accuracy (dashed line, Pearson correlation of −0.61). We note that the correlation is not particularly strong. This is because (i) for some models (e.g., Phi), the date fragmentation ratio remains unchanged across temporal data splits, and (ii) models differ greatly in size: a larger model can outperform a substantially smaller model in temporal reasoning performance even if the former has a much higher fragmentation ratio. As seen from Table 5, GPT-4o-mini climbs from 61.7% in Past to 67.9% in Near Past, peaks at 70.5% for Present, and falls to 58.2% on Future dates.
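The reported Pearson correlations can be reproduced with a few lines of standard-library code; the (fragmentation, accuracy) pairs below are purely illustrative stand-ins for the paper’s 24 real data points:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical points: accuracy tends to fall as fragmentation rises.
frag = [0.15, 0.29, 0.33, 0.55, 0.60, 0.63]
acc = [64.7, 57.1, 55.0, 48.2, 44.9, 40.5]
print(pearson(frag, acc) < 0)  # True: a negative trend, as in Figures 3-4
```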
Qwen-2.5-14B and Llama-3.1-8B trace the same contour at lower absolute levels. OLMo-2-7B shows the steepest Near-Past jump (49.5 → 62.4%) and achieves the highest Present accuracy (73.6%), consistent with its finer-grained tokenisation of “20XX” patterns. These results indicate that while finer date tokenisation (i.e., lower fragmentation ratios) boosts performance up to contemporary references, today’s models still generalise poorly to genuinely novel (post-2025) dates, highlighting an open challenge for robust temporal reasoning.

Correlating date fragmentation with model accuracy over formats. Figure 4 plots model accuracy against date fragmentation ratio across six date formats and six LLMs. A moderate negative trend emerges (dashed line, Pearson correlation of −0.42): formats that contain explicit separators (DD-MM-YYYY, DD/MM/YYYY, YYYY/MM/DD) are tokenised into component-aligned pieces and, in turn, resolved more accurately than compact, separator-free strings (DDMMYYYY, MMDDYYYY, YYYYMMDD). As shown in Table 6, GPT-4o-mini tops every format, with a moderate performance drop from 71.2% on DD/MM/YYYY to 61.2% on DDMMYYYY and the highest overall average (66.3%). OLMo-2-7B and Qwen-2.5-14B both exceed 70% on the YYYY/MM/DD form, but slip into the low 50s on MMDDYYYY and YYYYMMDD. Other models, such as Llama-3.1-8B and Phi-3.5, lag behind; their accuracy plunges below 40%. Even so, all models score
much better on separator-rich formats than on date formats without separators. In summary, model accuracy correlates with how cleanly a model can tokenise the string into interpretable tokens: more visual structure (slashes or dashes) means lower fragmentation, which enables more straightforward reasoning and, in turn, leads to better performance.

6 In which layer do LLMs compensate for date fragmentation?

Layerwise linear probing. To pinpoint in which layer a model learns to recognise two equivalent dates, we define the tokenisation compensation point (TCP) as the earliest layer at which a lightweight linear probe on the hidden state achieves above-chance accuracy (defined here as 80%) on the date equivalence task. Figure 5a reports TCPs for the DATES_PAST benchmark (1600–2010): Qwen2.5-0.5B reaches TCP at layer 12 (50% depth), Qwen2.5-1.5B at layer 15 (53.6%), Qwen2.5-3B at layer 8 (22.2%), and Qwen2.5-7B at layer 4 (14.3%). The leftward shift of the 3B and 7B curves shows that larger models recover calendar-level semantics from fragmented tokens more rapidly. Figure 5b shows the DATES_PRESENT benchmark (2010–2025), where only the 1.5B, 3B, and 7B models surpass TCP—at layers 16 (57.1%), 21 (58.3%), and 17 (60.7%), respectively—while the 0.5B model never does. The deeper TCPs here reflect the extra layers needed to recombine the two-digit “20” prefix, which is fragmented unevenly by the tokeniser. In Figure 10, we evaluate DATES_FUTURE (2025–2599), where novel four-digit sequences exacerbate fragmentation. Remarkably, TCPs mirror the Past regime: layers 12, 15, 8, and 4 for the 0.5B, 1.5B, 3B, and 7B models, respectively. This parallelism indicates that model scale dictates how quickly LLMs can compensate for date fragmentation to achieve high accuracy, even when dates are novel.

Tokenisation compensation point.
Overall, we observe a sharp decline in TCP as model size increases: small models defer date reconstruction to middle layers, whereas the largest model does so within the first quarter of layers. Across all three temporal benchmarks, TCP shifts steadily toward the first layers as model size grows.

7 How do LLMs stitch date fragments for temporal reasoning?

Causal path tracing. To investigate how LLMs like Llama 3 (Touvron et al., 2023) internally stitch date fragments to yield a model answer, we apply our causal framework to identify the model’s reasoning path over a specific prompt. Figure 6 plots model layers on the y axis against prompt tokens (e.g., Is 03122025 a valid date?) on the x axis. Green arrows mark the highest-scoring reasoning path, which is responsible for generating the answer “yes”.

Figure 5: Layer-wise accuracies in the two time periods: (a) Past and (b) Present.

Figure 6: Reasoning path for the “03122025 is a valid date” prompt.

Date fragments “25”, “220”, “031”, and the model answer “yes” are activated in sequence at layers 26–27 by the input tokens “is”, “031”, “a” and “Answer”, respectively. As such, the model performs a kind of discrete, step-by-step token aggregation, stitching together substrings of the input until a binary valid/invalid verdict emerges.

Misalignment between LLMs and humans. In contrast,
human readers parse dates by immediately mapping each component to a coherent temporal schema: “03” is March, “12” is the day of the month, “2025” is the year; they then check whether the day falls within the calendar bounds of that month. Humans bring rich world knowledge of calendars and leap-year rules to bear in parallel. However, LLMs exhibit no explicit calendar “module”; instead, they rely on learned statistical associations between digit patterns and the training-time supervisory signal for “valid date”. The reasoning path in Figure 6 thus illustrates a fundamentally different mechanism of date comprehension in LLMs, based on re-routing date fragments rather than holistic semantic interpretation. We repeated causal tracing on 100 date strings in 6 different date formats to test whether the reasoning-path difference between humans and LLMs is consistent across date formats. In most cases, we observe that model reasoning paths are not aligned with human interpretation (year → month → day); rather, they rely on sub-word fragments that statistically represent year, month, and day, and stitch these date fragments together in a flexible order that depends on the date format (see examples in Figures 7-8). However, such a reasoning path becomes brittle when a date is heavily fragmented: since the date abstraction is learned from frequency rather than hard-coded rules, it is biased toward standard Western formats and contemporary years. As a result, a model often addresses popular dates (in the same format) with similar reasoning paths, but the reasoning path becomes obscure on rare, historical, or locale-specific strings outside the distribution of pre-training data (see Figure 9).

8 Conclusion

In this paper, we identified date tokenisation as a critical yet overlooked bottleneck in temporal reasoning with LLMs.
We demonstrated a correlation between date fragmentation and task performance in temporal reasoning, i.e., the more fragmented the tokenisation, the worse the reasoning performance. Our layerwise and causal analyses further revealed an emergent “date abstraction” mechanism that explains when and how LLMs understand and interpret dates. Our results showed that larger models can compensate for date fragmentation at early layers by stitching fragments for temporal reasoning, while the stitching process appears to follow a reasoning path that connects date fragments in a flexible order, differing from the human interpretation from year to month to day.

Limitations

While our work demonstrates the impact of date tokenisation on LLMs for temporal reasoning, there are several limitations. First, DATEAUGBENCH focuses on a finite set of canonical date serialisations and does not capture the full diversity of natural-language expressions (e.g., “the first Monday of May 2025”) or noisy real-world inputs like OCR outputs. Second, our experiments evaluate a representative but limited pool of tokenisers and model checkpoints (up to 14B parameters); therefore, the generalisability of the date fragmentation ratio and of our probing and causal analyses to very large models with 15B+ parameters remains unknown. Third, while the fragmentation ratio measures front-end segmentation fidelity, it does not account for deeper world-knowledge factors such as leap-year rules, timezone conversions, and culturally grounded calendar
systems, all of which may influence temporal interpretation; further, the fragmentation ratio metric, though straightforward and interpretable, has not been rigorously evaluated. Lastly, the core idea of our causal framework is inspired by Lindsey et al. (2025); however, our extension to temporal reasoning is not evaluated. Future work should extend to more diverse date expressions and broader model and tokeniser families, equip tokenisers with external calendar knowledge to further improve robust temporal reasoning, and conduct rigorous evaluation of the fragmentation ratio metric and the causal framework.

Ethical Considerations

DATEAUGBENCH is derived solely from the public, research-licensed TIMEQA and TIMEBENCH corpora, which do not contain sensitive data; our augmentation pipeline rewrites only date strings. However, our dataset focuses on 21 Anglo-centric Gregorian formats. It therefore potentially reinforces a Western default and overlooks calendars or numeral systems used in many other cultures, and our date fragmentation metric may over-penalise tokenisers optimised for non-Latin digits.

Acknowledgements

We gratefully thank Madiha Kazi, Cristina Mahanta, and MingZe Tang for their support in conducting human evaluation for LLM-as-judge. We also thank Ahmad Isa Muhammad for participating in early discussions.

References

Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Qin Cai, Vishrav Chaudhary, Dong Chen, Dongdong Chen, and 110 others. 2024. Phi-3 technical report: A highly capable language model locally on your phone.

Gagan Bhatia, El Moatez Billah Nagoudi, Hasan Cavusoglu, and Muhammad Abdul-Mageed. 2024. Fintral: A family of gpt-4 level multimodal financial large language models. Preprint, arXiv:2402.10986.
Andrea Carriero, Davide Pettenuzzo, and Shubhranshu Shekhar. 2024. Macroeconomic forecasting with large language models. arXiv preprint arXiv:2407.00890.

Ching Chang, Wei-Yao Wang, Wen-Chih Peng, and Tien-Fu Chen. 2023. Llm4ts: Aligning pre-trained llms as data-efficient time-series forecasters. arXiv preprint arXiv:2308.08469.

Wenhu Chen, Xinyi Wang, and William Yang Wang. 2021. A dataset for answering time-sensitive questions. Preprint, arXiv:2108.06314.

Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Haotian Wang, Ming Liu, and Bing Qin. 2024. Timebench: A comprehensive evaluation of temporal reasoning abilities in large language models. Preprint, arXiv:2311.17667.

Bahare Fatemi, Mehran Kazemi, Anton Tsitsulin, Karishma Malkan, Jinyeong Yim, John Palowitch, Sungyong Seo, Jonathan Halcrow, and Bryan Perozzi. 2024. Test of time: A benchmark for evaluating llms on temporal reasoning.

Juan Luis Gastaldi, John Terilla, Luca Malagutti, Brian DuSell, Tim Vieira, and Ryan Cotterell. 2024. The foundations of tokenization: Statistical and computational concerns.

Omer Goldman, Avi Caciularu, Matan Eyal, Kris Cao, Idan Szpektor, and Reut Tsarfaty. 2024. Unpacking tokenization: Evaluating text compression and its correlation with model performance.

Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, and 24 others. 2024. Olmo: Accelerating the science of language models.

Jack Lindsey, Wes Gurnee, Emmanuel Ameisen, Brian Chen, Adam Pearce, Nicholas L. Turner,
Craig Citro, David Abrahams, Shan Carter, Basil Hosmer, Jonathan Marcus, Michael Sklar, Adly Templeton, Trenton Bricken, Callum McDougall, Hoagy Cunningham, Thomas Henighan, Adam Jermyn, Andy Jones, and 8 others. 2025. On the biology of a large language model. Transformer Circuits Thread.

Alisa Liu, Jonathan Hayase, Valentin Hofmann, Sewoong Oh, Noah A. Smith, and Yejin Choi. 2025. Superbpe: Space travel for language models. arXiv preprint arXiv:2503.13423.

OpenAI: Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Mądry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, and 401 others. 2024. Gpt-4o system card.

OpenAI: Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, and 262 others. 2023. Gpt-4 technical report.

Buu Phan, Marton Havasi, Matthew Muckley, and Karen Ullrich. 2024. Understanding and mitigating tokenization bias in language models. arXiv preprint arXiv:2406.16829.

Nived Rajaraman, Jiantao Jiao, and Kannan Ramchandran. 2024. Toward a theory of tokenization in llms.

François Remy, Pieter Delobelle, Hayastan Avetisyan, Alfiya Khabibullina, Miryam de Lhoneux, and Thomas Demeester. 2024. Trans-tokenization and cross-lingual vocabulary transfers: Language adaptation of llms for low-resource nlp. arXiv preprint arXiv:2408.04303.

Craig W. Schmidt, Varshini Reddy, Haoran Zhang, Alec Alameddine, Omri Uzan, Yuval Pinter, and Chris Tanner. 2024. Tokenization is more than compression.

Aaditya K. Singh and DJ Strouse. 2024. Tokenization counts: the impact of tokenization on arithmetic in frontier llms.

Zhaochen Su, Jun Zhang, Tong Zhu, Xiaoye Qu, Juntao Li, Min Zhang, and Yu Cheng.
2024. Timo: Towards better temporal reasoning for language models.

Mingtian Tan, Mike A. Merrill, Vinayak Gupta, Tim Althoff, and Thomas Hartvigsen. 2024. Are language models actually useful for time series forecasting? In Advances in Neural Information Processing Systems.

Qingyu Tan, Hwee Tou Ng, and Lidong Bing. 2023. Towards robust temporal reasoning of large language models via a multi-hop qa dataset and pseudo-instruction tuning.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, and 179 others. 2024. Gemma 2: Improving open language models at a practical size. Preprint, arXiv:2408.00118.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, and 49 others. 2023. Llama 2: Open foundation and fine-tuned chat models.

Stylianos Loukas Vasileiou and William Yeoh. 2024. Trace-cs: A synergistic approach to explainable course scheduling using llms and logic. arXiv preprint arXiv:2409.03671.

Jiapu Wang, Kai Sun, Linhao Luo, Wei Wei, Yongli Hu, Alan Wee-Chung Liew, Shirui Pan, and Baocai Yin. 2024. Large language models-guided dynamic adaptation
for temporal knowledge graph reasoning. arXiv preprint arXiv:2405.14170.

Yang Wang and Hassan A Karimi. 2024. Exploring large language models for climate forecasting. arXiv preprint arXiv:2411.13724.

Jason Wei, Nguyen Karina, Hyung Won Chung, Yunxin Joy Jiao, Spencer Papay, Amelia Glaese, John Schulman, and William Fedus. 2024. Measuring short-form factuality in large language models. Preprint, arXiv:2411.04368.

Yifan Wei, Yisong Su, Huanhuan Ma, Xiaoyan Yu, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, and Kang Liu. 2023. Menatqa: A new dataset for testing the temporal comprehension and reasoning abilities of large language models.

Siheng Xiong, Ali Payani, Ramana Kompella, and Faramarz Fekri. 2024. Large language models can learn temporal reasoning.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, and 43 others. 2024. Qwen2 technical report.

Yifan Zeng. 2024. Histolens: An llm-powered framework for multi-layered analysis of historical texts – a case application of yantie lun. arXiv preprint arXiv:2411.09978.

Bowen Zhao, Zander Brumbaugh, Yizhong Wang, Hannaneh Hajishirzi, and Noah A. Smith. 2024. Set the clock: Temporal alignment of pretrained language models.

Mengyu Zheng, Hanting Chen, Tianyu Guo, Chong Zhu, Binfan Zheng, Chang Xu, and Yunhe Wang. 2024. Enhancing large language models through adaptive tokenizers. In Proc. NeurIPS.

Zhejian Zhou, Jiayu Wang, Dahua Lin, and Kai Chen. 2024. Scaling behavior for large language models regarding numeral systems: An example using pythia.

A Appendix

A.1 Experiment Design

Implementation details of evaluation. The evaluation pipeline is implemented in Python and supports asynchronous API requests with retry logic, as well as multiprocessing to handle thousands of examples efficiently.
After collecting GPT-4o's label for each instance, we map CORRECT/INCORRECT/NOT ATTEMPTED to categorical scores A, B, and C. We then compute three core metrics: overall accuracy (proportion of A scores), given-attempted accuracy (A over A+B), and the F1 score, defined as the harmonic mean of overall and given-attempted accuracy. Results are reported both globally and stratified by task split (Context-based, Format Switching, Date Arithmetic) and by temporal category (Past, Near Past, Present, Future). We adopt the sample prompts introduced in SimpleQA (Wei et al., 2024) as our LLM-as-judge queries, ensuring consistent scoring instructions across all evaluations. Our specific prompt used for evaluation can be found in Table 7. We present examples of LLM-as-judge and human evaluation in Table 8.

Date ambiguities. We explicitly enumerate all valid variants in the gold label set for each example to handle multiple correct answers arising from date-format ambiguities. This ensures that any prediction matching one of these variants is marked correct, avoiding penalisation for format differences.

Synthetic benchmark construction for linear probing. We construct a suite of synthetic true–false benchmarks to isolate temporal reasoning across different reference frames. For the DATES_PAST, DATES_PRESENT, and DATES_FUTURE datasets, we sample 1,000 date–date pairs each, drawing calendar dates uniformly
from the appropriate range and rendering them in two randomly chosen, distinct formatting patterns (Ymd vs. d/m/Y). Exactly half of each set are "YES" examples (identical dates under different formats), which are our positive examples, and half are "NO" (different dates), which are our negative examples. All three datasets are balanced, shuffled, and split into equal positive and negative subsets to ensure fair probing.

A.2 Causal Attention–Hop Analysis

A.2.1 Next Token Prediction

We treat each token in the prompt as a candidate "concept" to follow. After the model processes the input, it produces a hidden vector $h_{\ell,p}$ per token at position $p$ and layer $\ell$. To see how likely a concept $c$ (e.g., a date fragment and model answer) follows each input token, we project $h_{\ell,p}$ through $W_U$ to yield the "probability" distribution over vocabulary tokens, and denote $s^c_{\ell,p}$ as the "probability" of the concept being the next token:

$$z_{\ell,p} = W_U h_{\ell,p}, \qquad s^c_{\ell,p} = z_{\ell,p}[t_c], \tag{1}$$

where $t_c$ is the index of concept $c$ in the vocabulary.

A.2.2 Token Importance

To measure how important an input token is to a concept (e.g., a date fragment and model answer), we replace the token with an unrelated one (e.g., "Dallas" → "Chicago") and compute the probability drop of the concept incurred by the replacement, denoted as $I_{c,p}$ (which we compute only at the last layer):

$$I_{c,p} = \sigma(z_p)[t_c] - \sigma(\tilde{z}_p)[t_c], \tag{2}$$

where $\sigma$ is the softmax function. The bigger $I_{c,p}$ is, the more important the original token at position $p$ is for the concept $c$.

A.2.3 Path scoring

A reasoning path $P = (c_1, \ldots, c_k)$ is a sequence of tokens, indicating in which order date fragments are stitched together for LLMs to answer a temporal question.
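As a concrete illustration of the balanced DATES_* construction above (1,000 pairs, two distinct formats, half "YES"/half "NO"), the following sketch generates such a set. The sampling date range and random seed are assumptions (the per-split past/present/future ranges are not given in this excerpt); the two format patterns follow the Ymd vs. d/m/Y example in the text.

```python
import random
from datetime import date, timedelta

def make_dates_benchmark(n=1000, start=date(1900, 1, 1),
                         end=date(1999, 12, 31), seed=0):
    """Build a balanced true/false date-identity set: half 'YES'
    pairs (same date, two formats), half 'NO' pairs (different dates).
    Range and seed are illustrative assumptions."""
    rng = random.Random(seed)
    span = (end - start).days
    fmts = ("%Y%m%d", "%d/%m/%Y")  # Ymd vs. d/m/Y
    examples = []
    for i in range(n):
        d1 = start + timedelta(days=rng.randrange(span))
        if i < n // 2:  # positive: identical dates, different formats
            d2, label = d1, "YES"
        else:           # negative: a genuinely different date
            d2 = start + timedelta(days=rng.randrange(span))
            while d2 == d1:
                d2 = start + timedelta(days=rng.randrange(span))
            label = "NO"
        f1, f2 = rng.sample(fmts, 2)  # two distinct formatting patterns
        examples.append((d1.strftime(f1), d2.strftime(f2), label))
    rng.shuffle(examples)  # balanced and shuffled
    return examples
```

Because the two formats are always distinct, even "YES" pairs never match as raw strings, forcing the probe to rely on date semantics rather than surface form.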
We score each potential path by blending five components (ordering, activation strength, causal strength, gap penalty, and confidence in the final concept) into a single score:

$$S(P) = \alpha \times S_{order} + \beta \times S_{act} + \gamma \times S_{causal} - \eta \times S_{gap} + \kappa \times S_{final} \tag{1}$$

Each term is designed to reward a different desirable property:

• Ordering: we give points if the concepts appear in roughly left-to-right order in the prompt, and secondarily in increasing layer order:

$$S_{order} = 0.7 \times \mathbf{1}[p_1 \leq \cdots \leq p_k] + 0.3 \times \mathbf{1}[\ell_1 \leq \cdots \leq \ell_k], \tag{2}$$

where $\mathbf{1}$ is an indicator function and $p_i = \arg\max_{\ell,p} s^{c_i}_{\ell,p}$, indicating the position of the most important input token for a concept $c_i$ at the last layer. Similarly, $\ell_i$ is the layer at which an input token pays the most attention to the concept $c_i$.

• Activation: we compute the average position of the most important input token for a concept from 1 to $k$, normalize by a threshold $\tau = 0.2$, and clip to 1:

$$S_{act} = \min\left(\frac{1}{k}\sum_{i=1}^{k} p_i / \tau,\; 1\right), \tag{3}$$

• Causal strength: we use the token importance score, denoted as $d_i = |I_{c_{i+1},p_i}|$, between two adjacent concepts $c_{i+1}$ and $c_i$, upweight latter scores, and downweight missing links by a coverage term $\rho$, which is defined as the fraction of actual causal connections observed between consecutive concepts out of the total possible consecutive pairs in the path. The combined score then multiplies the weighted average of the $d_i$ by $\frac{1}{2} + \frac{1}{2}\rho$, giving:

$$S_{causal} = \frac{\sum_i w_i d_i}{\sum_i w_i}\,(0.5 + 0.5\rho), \tag{4}$$

where $w_i = 0.5 + 0.5\,\frac{i-1}{k-2}$.

• Gap penalty: to discourage large jumps in position, we compute the mean gap $\bar{g}$ and apply a small multiplier $\lambda = 0.1$:

$$S_{gap} = 1 - \lambda \bar{g}, \quad S_{gap} \leq 1. \tag{6}$$

This is done to encourage model paths to think step by step instead of directly jumping to the conclusion (yes/no).

• Final confidence: we compute the position of the most important input token for the last concept $c_k$:

$$S_{final} = \max_{\ell,p} s^{c_k}_{\ell,p}. \tag{7}$$

The reasoning path with the highest total score $S(P)$ is chosen as the model's reasoning path over a specific prompt. We note that the Ordering, Activation, Gap penalty, and Final confidence components are built upon next-token prediction signals $s^c_{\ell,p}$, whereas the Causal strength component is derived solely from the token importance score $I_{c_{i+1},p_i}$, i.e., the drop in the softmax probability for concept $c_{i+1}$ when the token at position $p_i$ is replaced.

Models        Past   Near Past  Present  Future
GPT-4o-mini   61.66  67.93      70.51    58.23
OLMo-2-7B     49.45  62.35      73.56    43.45
Qwen2.5 14B   58.97  64.80      67.22    55.69
Qwen2.5 7B    51.41  55.98      57.98    48.55
Qwen2.5 3B    46.50  50.25      51.98    43.91
LLama3.1 8B   45.28  48.82      50.48    42.76
Qwen2.5 1.5B  42.99  46.16      47.69    40.60
Qwen2.5 0.5B  39.15  41.68      43.00    36.98
OLMo-2-1B     36.07  38.09      40.49    34.07
LLama3.2 3B   36.48  38.57      39.74    34.46

Table 5: Model accuracy on context-based resolution across four data splits over time.

Model    DD-MM-YYYY  DD/MM/YYYY  YYYY/MM/DD  DDMMYYYY  MMDDYYYY  YYYYMMDD  Avg.
OLMo     64.70       64.56       65.35       52.35     54.56     50.41     58.65
Llama 3  50.31       50.89       53.45       38.45     40.24     34.56     44.65
GPT-4o   68.51       71.23       69.24       61.23     62.34     64.98     66.25
Qwen     64.49       62.35       73.56       46.50     50.25     51.98     58.19
Gemma    58.90       58.97       64.80       47.22     46.50     50.25     54.44
Phi      47.23       46.07       48.09       39.15     41.68     43.00     44.20

Table 6: Model accuracy on context-based resolution across date formats.

Figure 7: Reasoning path for the "03/12/2025 is a valid date" prompt.
Figure 8: Reasoning path of the "03-12-2025 is a valid date" prompt.
Figure 9: Reasoning path of the "03121325 is a valid date" prompt, where year = 1325.
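The scoring recipe above can be condensed into a short helper. This is a sketch, not the authors' implementation: the blend weights α, β, γ, η, κ are not given numeric values in this excerpt, so equal weights here are an assumption, and the causal-strength helper only implements the weighted-average aggregation of Eq. (4).

```python
def path_score(s_order, s_act, s_causal, s_gap, s_final,
               alpha=1.0, beta=1.0, gamma=1.0, eta=1.0, kappa=1.0):
    """Blend the five path components into a single score S(P).
    Equal blend weights are an illustrative assumption."""
    return (alpha * s_order + beta * s_act + gamma * s_causal
            - eta * s_gap + kappa * s_final)

def causal_strength(d, rho, k):
    """S_causal: position-weighted average of adjacent-concept
    importance magnitudes d_i, scaled by coverage rho.
    Requires path length k >= 3 so that k - 2 > 0."""
    # w_i = 0.5 + 0.5 * (i - 1) / (k - 2), later links weighted more
    w = [0.5 + 0.5 * (i - 1) / (k - 2) for i in range(1, len(d) + 1)]
    weighted_avg = sum(wi * di for wi, di in zip(w, d)) / sum(w)
    return weighted_avg * (0.5 + 0.5 * rho)
```

For example, with two adjacent-importance scores of 1.0 and full coverage (rho = 1) on a length-4 path, `causal_strength` returns 1.0, since the weighted average of equal scores is unchanged by the weights.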
Figure 10: Layer-wise accuracies in the Future period.

LLM-as-Judge Evaluation Prompt

Your task: Evaluate one prediction at a time. You receive:
• Question – the task prompt shown to the model
• Gold target – all answers that are considered correct
• Predicted answer – the model's response

Return one letter only:
A CORRECT – prediction fully matches one gold variant
B INCORRECT – prediction contradicts or misses required info
C NOT_ATTEMPTED – prediction refuses, guesses, or answers irrelevantly

General rules:
1. Match semantics; ignore capitalisation, punctuation, order.
2. If any statement contradicts the gold target, grade B.
3. Hedging ("I think...") is fine if the correct info is present and no incorrect info is added.
4. Partial answers are B. Typos that preserve meaning are allowed.

DateAugBench specifics:
• Date format ambiguity: gold lists every valid interpretation; accept any.
• Date arithmetic: prediction must match day, month, and year of a listed variant; any textual format allowed.
• Format-switch questions: answer with any synonym of Yes/True or No/False.
• Numeric answers: must match the gold number to the last shown significant digit.

Output format: Return exactly one capital letter: A or B or C. No additional text or punctuation.

Example template
Question: {question}
Gold target: {target}
Predicted answer: {predicted_answer}
Now grade: A or B or C

Table 7: LLM-as-Judge prompt used for comparing model and gold answers in the three DateAugBench tasks.

Human Evaluation

Context-based resolution
Prompt: Who was the chair of Allgemeiner Deutscher Fahrrad-Club in 17/10/2016?
Gold Answer: Ulrich Syberg
Model Prediction: As of October 17, 2016, the Federal Chairman was Ulrich Syberg
Human Annotator Rating: A
LLM-as-Judge Rating: A

Date arithmetic
Prompt: What date is 60 days after 05/01/1225?
Gold Answer: March 6, 1225, June 29, 1225
Model Prediction: July 30, 1225
Human Annotator Rating: B
LLM-as-Judge Rating: B

Table 8: Human evaluation of LLM-as-judge.
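Given a list of per-example grades (A/B/C) produced by a judge following the prompt in Table 7, the three core metrics from §A.1 reduce to a few lines. A minimal sketch:

```python
from collections import Counter

def date_metrics(grades):
    """Compute the three evaluation metrics from per-example grades:
    'A' (CORRECT), 'B' (INCORRECT), 'C' (NOT_ATTEMPTED)."""
    counts = Counter(grades)
    total = len(grades)
    attempted = counts["A"] + counts["B"]
    overall = counts["A"] / total if total else 0.0            # proportion of A
    given_attempted = counts["A"] / attempted if attempted else 0.0  # A / (A+B)
    # F1: harmonic mean of overall and given-attempted accuracy
    denom = overall + given_attempted
    f1 = 2 * overall * given_attempted / denom if denom else 0.0
    return {"overall": overall, "given_attempted": given_attempted, "f1": f1}
```

For instance, grades "A", "A", "B", "C" give overall accuracy 0.5, given-attempted accuracy 2/3, and F1 = 4/7.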
arXiv:2505.16090v1 [cs.AI] 22 May 2025

CAN AI READ BETWEEN THE LINES? BENCHMARKING LLMS ON FINANCIAL NUANCE

Dominick Kubica, Dylan T. Gordon, Nanami Emura, Derleen Saini, and Charlie Goldenberg
Department of Business Analytics, Santa Clara University - Leavey School of Business, Santa Clara, California 95053, United States
These authors contributed equally.
{dkubica, dtgordon, nemura, dsaini, cgoldenberg}@scu.edu

This research was conducted as part of a Microsoft-sponsored Capstone Project at Santa Clara University, led by Juhi Singh and Bonnie Ao from the Microsoft MCAPS AI Transformation Office.

ABSTRACT

As of 2025, Generative Artificial Intelligence (GenAI) has become a central tool for productivity across industries. Beyond text generation, GenAI now plays a critical role in coding, data analysis, and research workflows. As large language models (LLMs) continue to evolve, it is essential to assess the reliability and accuracy of their outputs, especially in specialized, high-stakes domains like finance. Most modern LLMs transform text into numerical vectors, which are used in operations such as cosine similarity searches to generate responses. However, this abstraction process can lead to misinterpretation of emotional tone, particularly in nuanced financial contexts. While LLMs generally excel at identifying sentiment in everyday language, these models often struggle with the nuanced, strategically ambiguous language found in earnings call transcripts. Financial disclosures frequently embed sentiment in hedged statements, forward-looking language, and industry-specific jargon, making it difficult even for human analysts to interpret consistently, let alone AI models.
This paper presents findings from the Santa Clara Microsoft Practicum Project, led by Professor Charlie Goldenberg, which benchmarks the performance of Microsoft's Copilot, OpenAI's ChatGPT, Google's Gemini, and traditional machine learning models for sentiment analysis of financial text. Using Microsoft earnings call transcripts, the analysis assesses how well LLM-derived sentiment correlates with market sentiment and stock movements and evaluates the accuracy of model outputs. Prompt engineering techniques are also examined to improve sentiment analysis results. Visualizations of sentiment consistency are developed to evaluate alignment between tone and stock performance, with sentiment trends analyzed across Microsoft's lines of business to determine which segments exert the greatest influence.

1 Introduction

Generative AI's role in high-stakes domains like finance will become more prevalent as AI becomes increasingly embedded in professional workflows. Financial language is uniquely complex because it is charged with forward-looking statements, hedged language, and subtle cues that challenge current models. Can today's Large Language Models (LLMs) understand this kind of nuance? This question motivated a collaborative research project between Santa Clara University and the Microsoft data science team. The evaluation centers on whether LLMs can outperform traditional natural language processing (NLP) tools in financial sentiment analysis and whether they can generate useful insights when applied to real-world financial reporting such as quarterly earnings calls. The approach had three parts:

1. Benchmarking LLMs and traditional NLP tools on a standardized financial dataset.
2. Applying these models to Microsoft's quarterly earnings transcripts [1], breaking down sentiment by business line, and better understanding insights that can be extracted from earnings call transcripts.
3. Analyzing results to identify optimization opportunities and assess how sentiment correlates with actual stock performance.

Figure 1: Overall Sentiment Analysis Performance (First 250 Rows)

The results were both encouraging and eye-opening: while LLMs significantly outperformed traditional tools in grasping nuanced sentiment, they still face performance challenges. This paper outlines the benchmarking process, real-world findings, and recommendations to enhance tools like Microsoft Copilot.

2 Evaluating the Accuracy of Models Through Benchmarking

An objective benchmarking process is essential to evaluate performance differences between LLMs and traditional NLP tools. A standardized evaluation was conducted to measure how accurately various models interpret sentiment in financial texts. Given the complexities of financial language, this comparison highlights how effectively each model captures tone and nuance, offering insights for both tool selection and future model development.

Accuracy testing was conducted using the Financial Phrase Bank dataset, developed by researchers at Aalto University [2]. It consists of financial and earnings-related news headlines labeled as positive, neutral, or negative based on market sentiment. Nine models were compared:

• LLM-based/cloud platforms: Microsoft Copilot Desktop App, Copilot via Microsoft 365, Copilot App Online¹, ChatGPT-4o, and Google Gemini 2.0 Flash
• Cloud-based NLP service: Azure Language AI
• Python libraries: FinBERT (Transformer model via Python library), NLTK, and TextBlob (Microsoft Copilot 365)

Each model classified the same sentences from the dataset. Financial sentences were preprocessed for the traditional NLP libraries to ensure formatting consistency. For LLM-based tools, identical prompts were used to reflect a real-world application. After each model returned the sentiment of each sentence, accuracy was measured as the percentage of correct classifications against the pre-labeled dataset.
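The accuracy computation described above reduces to a label match against the pre-labeled dataset. A minimal sketch (case-insensitive matching is an assumption about the normalization used, not a detail from the paper):

```python
def classification_accuracy(predictions, gold):
    """Percentage of model sentiment labels that match the
    pre-labeled dataset. Labels are compared case-insensitively
    after stripping whitespace (an illustrative assumption)."""
    if len(predictions) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    norm = lambda s: s.strip().lower()
    correct = sum(norm(p) == norm(g) for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)
```

With three of four labels matching, this returns 75.0, the same percentage-correct figure reported for each model in Figure 1.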
Both the Copilot desktop app and the Chat interface, whether run locally or accessed via the web, were used with the "Think Deeper" capability. Microsoft 365 does not have a "Think Deeper" ability available.

Benchmarking revealed significant differences in sentiment analysis accuracy across models (Figure 1). The Copilot App (both Online and Local) led with an accuracy of 82.0%, followed by ChatGPT 4o (77.6%), Prompt-Engineered ChatGPT (75.6%), and Gemini (68.0%). Notably, LLMs using uncleaned sentences demonstrated stronger performance, particularly in detecting nuance and hedged expressions.

¹ChatGPT is developed by OpenAI and operates on Microsoft's Azure supercomputing infrastructure. While Microsoft and OpenAI collaborate in the development and delivery of AI services, OpenAI remains an independent entity. Azure OpenAI Service provides enterprise-grade access to OpenAI models.

Figure 2: Condensed Sentiment Accuracy Comparison (First 250 Rows)

Copilot through Microsoft 365 exhibited lower accuracy compared to other LLM-based sentiment models in the benchmarking. Analysis indicated that Copilot 365 defaults to the TextBlob Python library as its primary sentiment analysis engine, while both the desktop App and web-hosted Chat versions utilize their full LLM-based capabilities. Both Prompt-Engineered and Unguided Copilot 365 returned outputs consistent with standalone TextBlob, often defaulting to neutral sentiment and missing implied or domain-specific cues (Figure 2). This outcome is reflective of the design intent of Copilot 365, which is primarily focused on enhancing the productivity features of the Microsoft 365 suite rather than advanced NLP tasks. Handling CSV files presented
challenges for Copilot across all tested versions. The models occasionally struggled to interpret structured data accurately, and converting CSVs to plain text was often necessary to improve reliability. Instances of hallucination were observed during formatting and post-processing, resulting in inconsistent outputs. In cases where Copilot defaulted to simpler tools or miscommunicated its capabilities, the lack of transparency risked eroding user trust. This is especially true in sensitive domains like financial analysis, where clarity and reliability are critical.

These findings highlight that deployment choices, including model selection, tool integration, and input formatting, have a significant impact on LLM performance in specialized tasks. In contexts like financial sentiment analysis, these factors can determine whether outputs are accurate, reliable, and useful. Across the board, LLMs outperformed traditional sentiment engines in identifying implied or nuanced sentiment. ChatGPT and Gemini delivered strong results, while FinBERT was particularly effective for finance-specific cases. The performance of the Copilot App illustrates the potential of integrated LLM tools for financial analysis, contingent upon appropriate model selection and deployment.

3 Real-World Application: Business Line Sentiment vs Stock Prediction

While benchmarking LLMs on a standardized dataset offered valuable insight into model accuracy, it was also important to assess whether these tools could convert benchmarking results into actionable insights in a real-world financial context. Specifically, can business line sentiment derived from earnings call transcripts provide additional business insights or competitive data, and correlate with stock price movements? Microsoft Copilot was used to segment quarterly earnings call transcripts by business line: Devices, Dynamics, Gaming, Office Commercial, Search and News Advertising, Server Products and Cloud Services, Q and A, and Overall Sentiment.
Each segment was then processed using ChatGPT to evaluate sentiment at the business-line level. ChatGPT-4o was selected as the sentiment analysis tool due to its consistent output and its ability to process multiple quarters of transcripts efficiently through a single standardized Python workflow. While Copilot demonstrated the highest sentiment accuracy in our evaluation, its lack of API access and reliance on manual input made it impractical for large-scale analysis. Given ChatGPT-4o's close performance and superior accessibility, it was the most viable choice for the analysis.

The blue shaded box for each quarter shows the interquartile range of the data, and the black horizontal lines represent the minimum and maximum sentiment values without outliers (Figure 3). The arrows below represent the stock price increase or decrease the day after the earnings call for each quarter. Analyzing overall transcript sentiment provided limited insight into stock movement, but breaking down sentiment by business segment revealed meaningful patterns that would otherwise be overlooked.

Figure 3: Positive Sentiment by Business Line - ChatGPT

For instance, high positive sentiment in the "Search and News Advertising" segment during Q1 2025 was associated with a notable drop in stock price following the call. Conversely, in Q3 2023, Devices sentiment spiked positively and was followed by a significant rise in Microsoft's share price. These findings suggest that sentiment within specific business segments may have a greater impact on market response than the overall
tone of each earnings call.

To visualize this relationship, the above SHAP (SHapley Additive exPlanations) beeswarm plot was created to map sentiment direction against stock movement (Figure 4). Each dot represents a business segment within a specific quarter. Red indicates high positive sentiment, while blue indicates low sentiment. The horizontal axis represents the SHAP value, indicating the contribution of each business line's sentiment to the model's stock price change prediction, with values further from zero signifying greater influence.

Importantly, dots for Search and News Advertising cluster toward the left despite the red color. This reinforces the hypothesis that a positive tone in this segment may raise investor skepticism or signal over-optimism, resulting in stock price decline. This inverse correlation raises a critical insight: not all positive sentiment within transcripts translates to positive external or investor sentiment. Similar sentiment inversions were observed in segments like gaming and Q&A as well, where optimistic statements were followed by negative stock movement. These cases illustrate a core challenge in sentiment analysis: tone alone is not a reliable predictor of investor sentiment. Investor response likely depends on additional context, expectations, and broader market narratives.

Ultimately, breaking down transcripts by business line was key to revealing meaningful insights, and it transformed high-level analysis into more granular interpretation. Integrating LLMs into workflows offers a valuable opportunity to improve forecasting accuracy and contextual understanding. When combined with human expertise and domain knowledge, LLMs can uncover nuanced sentiment patterns that open avenues for further exploratory analysis and research.
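The per-quarter box-plot summary described above (interquartile range plus min/max of the sentiment scores) can be reproduced with the standard library. This is a sketch of the statistics behind the figure, not the authors' plotting code:

```python
import statistics

def sentiment_box_stats(scores):
    """Quartile summary for one quarter's business-line sentiment
    scores: min, Q1, median, Q3, max, and the interquartile range."""
    q1, median, q3 = statistics.quantiles(scores, n=4)
    return {"min": min(scores), "q1": q1, "median": median,
            "q3": q3, "max": max(scores), "iqr": q3 - q1}
```

Feeding each quarter's segment-level scores through this helper yields the box (Q1 to Q3) and whisker (min to max) values that the figure plots alongside the post-call stock movement.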
The findings indicate that LLMs, when combined with domain expertise, can support the identification of sentiment patterns that may inform further analysis, while acknowledging that conclusive stock movement predictions remain outside the current capabilities of these approaches.

Figure 4: SHAP Beeswarm: Effect of Net Sentiment on Stock Prediction

4 Findings and Optimization Recommendations

Reviewing our benchmark results alongside the Microsoft earnings case study, several key insights emerged:

• LLMs clearly outperformed traditional tools in financial sentiment analysis, especially in detecting filler words and subtle cues.
• Traditional models require aggressive text cleaning, which often strips away nuance. LLMs were able to use more of the filler context wording.
• Despite their edge, LLMs still failed to exceed 85% accuracy. Financial experts should be able to surpass this with their tailored domain knowledge.
• LLMs remain expensive and computationally intensive, limiting scalability for smaller teams.
• Human creativity is still essential. While LLMs supported tasks like code generation and visualizations, they lacked the intuition to guide the project's direction. The key questions, analysis choices, and meaningful visualizations came from the data scientists. AI can assist, but it doesn't have the perspective or creative insight to see a project through from start to finish.

These findings reinforce that LLMs are powerful tools, but not replacements for domain experts.

4.1 Observed Optimization Areas in Copilot and LLM Implementations for Financial Applications

Evaluation of multiple LLM platforms, including different Copilot implementations, identified several areas for potential optimization to enhance performance and usability in financial sentiment analysis tasks.

• Performance Transparency: In some configurations, tasks were routed to traditional tools like TextBlob rather than handled directly by the LLM, but users were not clearly informed when this occurred. Clear indicators of fallback behavior can help users understand what the system is doing. In addition, documentation that explains capability differences between configurations supports more informed and confident decision-making. Without this transparency, users may misinterpret the system's capabilities, leading to confusion and reduced confidence in the tool. Over time, this lack of clarity can erode user trust, especially in professional or high-stakes applications where reliability is critical.

• Structured Data Handling: All LLM systems should prioritize reliable handling of structured data formats like CSVs. In Copilot, accurate analysis often required converting CSVs to plain text, adding extra steps for the user. While some internal processing may be needed, this complexity should be managed by the system, not the user. Streamlining CSV support can improve usability in data-heavy workflows and encourage adoption in professional settings.

• Reliability in Basic NLP Functions: Hallucinations and inconsistencies in core tasks, such as sentiment counting and text cleaning, were frequent and deeply concerning. In some cases, LLMs fabricated capabilities or produced results that were clearly inaccurate, even when the task was straightforward. These failures are concerning in specialized domains where accuracy is non-negotiable. If left unaddressed, they will continue to undermine user confidence and limit the practical adoption of LLMs in high-stakes settings.

5 Conclusion

Financial language is layered with strategy, hedging, and nuance, making it a stress test for any sentiment analysis tool.
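The CSV-to-plain-text workaround mentioned under Structured Data Handling can be as simple as flattening each row into labeled fields. A hypothetical sketch of such a preprocessing step (the exact format the authors used is not specified):

```python
import csv
import io

def csv_to_plain_text(csv_text: str) -> str:
    """Flatten a CSV into labeled plain-text lines, one row per line,
    so an LLM sees 'header: value' pairs instead of raw delimiters."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    lines = []
    for i, row in enumerate(reader, 1):
        fields = ", ".join(f"{h}: {v}" for h, v in zip(header, row))
        lines.append(f"Row {i} - {fields}")
    return "\n".join(lines)
```

Pushing this conversion into the pipeline, rather than asking the user to do it, is exactly the kind of internal handling the recommendation above argues for.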
This study found that while LLMs outperform traditional NLP libraries in detecting financial sentiment, they still face architectural, economic, and reliability barriers to adoption at scale. When used carefully, however, LLMs can help highlight patterns or shifts that may complement traditional analysis, especially when applied at the business-line level. The Microsoft earnings call case study demonstrates that a segmented, model-enhanced approach can connect executive tone with investor behavior in powerful ways.

Looking forward, tools like Microsoft Copilot have immense potential, but unlocking that potential will require deeper integration with LLM capabilities, better transparency, and a stronger focus on enterprise needs. LLMs, paired with human intelligence, will reshape the future of financial analysis not by replacing experts, but by providing them with a powerful, new toolset.

References

[1] Microsoft Corporation. Investor relations. https://www.microsoft.com/en-us/investor/default, 2025. Accessed: 2025-05-08.
[2] S. Bhatti. Financial sentiment analysis. https://www.kaggle.com/datasets/sbhatti/financial-sentiment-analysis, 2021. Accessed: 2025-05-08.
arXiv:2505.16094v1 [cs.LG] 22 May 2025

A Survey of Large Language Models for Text-Guided Molecular Discovery: from Molecule Generation to Optimization

Ziqing Wang1* Kexin Zhang1* Zihan Zhao1 Yibo Wen1 Abhishek Pandey2 Han Liu1 Kaize Ding1†
1Northwestern University 2AbbVie
{ziqingwang2029, zihanzhao2026, yibowen2024}@u.northwestern.edu
kevin.kxzhang@gmail.com
abhishek.pandey@abbvie.com
{hanliu, kaize.ding}@northwestern.edu

Abstract

Large language models (LLMs) are introducing a paradigm shift in molecular discovery by enabling text-guided interaction with chemical spaces through natural language and symbolic notations, with emerging extensions to incorporate multi-modal inputs. To advance the new field of LLMs for molecular discovery, this survey provides an up-to-date and forward-looking review of the emerging use of LLMs for two central tasks: molecule generation and molecule optimization. Based on our proposed taxonomy for both problems, we analyze representative techniques in each category, highlighting how LLM capabilities are leveraged across different learning settings. In addition, we include the commonly used datasets and evaluation protocols. We conclude by discussing key challenges and future directions, positioning this survey as a resource for researchers working at the intersection of LLMs and molecular science. A continuously updated reading list is available at https://github.com/REAL-Lab-NU/Awesome-LLM-Centric-Molecular-Discovery.

1 Introduction

Molecular design and optimization are fundamental to multiple scientific disciplines, including drug discovery (Zheng et al., 2024), materials science (Grandi et al., 2025), and synthetic chemistry (Lu et al., 2024; Wang et al., 2025). However, these tasks present significant challenges due to the vast and complex chemical spaces that must be navigated to discover novel compounds with desirable properties while maintaining chemical validity and structural plausibility (Zheng et al., 2024; Yu et al., 2025).
Over the years, a range of computational approaches has been developed to achieve these goals, from Variational Autoencoders (Gómez-Bombarelli et al., 2018) and Generative Adversarial Networks (De Cao and Kipf, 2018) to Transformers (Edwards et al., 2022). However, these traditional methods often struggle with generating high-quality, diverse, and synthesizable molecules (Ramos et al., 2025; Sun et al., 2025). More recently, large language models (LLMs) have emerged as particularly powerful tools for tackling these challenges, drawing increasing research attention (Zheng et al., 2024). These foundation models, characterized by billions of parameters, exhibit emergent capabilities such as advanced reasoning, instruction following, and in-context learning, enabled by extensive pre-training on diverse datasets (Brown et al., 2020; Wei et al., 2022a). Thus, LLMs can leverage their extensive pre-training knowledge to generalize across chemical problems and can be further adapted to specialized tasks through fine-tuning. These unique capabilities have established LLMs as a powerful new paradigm for exploring chemical space and accelerating molecular discovery.

*Equal Contribution. †Corresponding Author.

Despite the growing interest in applying LLMs to molecular discovery tasks, existing literature reviews fail to provide a comprehensive analysis of this specific intersection. Most earlier surveys (Cheng et al., 2021; Zeng et al., 2022; Tang et al., 2024; Yang et al., 2024) focus broadly on general deep generative AI approaches rather than specifically examining LLMs' unique contributions. Other reviews that do mention LLMs (Ramos et al., 2025; Zhang et al., 2025; Guo et al., 2025; AbuNasser, 2024; Janakarajan et al., 2024; Liao et
al., 2024) either primarily focus on the general chemical domain or include smaller language models lacking the emergent capabilities characteristic of the LLMs central to this survey.

Our survey addresses this critical gap by providing the first overview specifically focused on LLMs as generators in molecular discovery, with particular emphasis on two central tasks: molecule generation and molecule optimization. Our survey specifically highlights how LLMs are deployed, adapted, and trained for navigating and manipulating complex chemical spaces, distinguishing their role from auxiliary functions like feature extraction (Liu et al., 2023) or control (Liu et al., 2024a). Unlike prior surveys that categorize studies based on model architectures (AbuNasser, 2024; Janakarajan et al., 2024), we introduce a new taxonomy centered on the learning paradigms employed to leverage LLMs for generative molecular tasks. As illustrated in Fig. 1, we distinguish between approaches that operate without LLM tuning (i.e., Zero-Shot Prompting and In-Context Learning) and those with LLM tuning (i.e., Supervised Fine-Tuning and Preference Tuning), allowing researchers to better understand the effectiveness and limitations of different LLM utilization strategies.

To summarize, we provide the first systematic review focused on LLMs for text-guided molecular discovery for both generation and optimization tasks. The main contributions are as follows:

• We introduce a new taxonomy categorizing existing research based on learning paradigms, revealing how different approaches utilize LLMs' capabilities, alongside their respective advantages and limitations.
• We provide a systematic summary of commonly used datasets, benchmarks, and evaluation metrics, offering a comprehensive reference for researchers in the field.
• We identify critical challenges and outline promising future research directions to further advance this rapidly evolving domain of LLM-centric molecular discovery.

2 Preliminaries

2.1 Large Language Models

LLMs distinguish themselves from earlier Pre-trained Language Models (PLMs) like BERT (Devlin et al., 2019) (which typically possessed millions of parameters) primarily through their massive scale, often boasting parameter counts in the billions, and the resultant emergent capabilities not found in smaller models (Zhao et al., 2023; Yang et al., 2023). The development of these LLMs and their advanced functionalities is largely attributed to their pre-training on vast text corpora, predominantly through an autoregressive next-token prediction objective. This immense scale facilitates emergent capabilities (Wei et al., 2022a) such as in-context learning (Brown et al., 2020), chain-of-thought reasoning (Wei et al., 2022b), and powerful zero-shot generalization, which are not consistently observed in their smaller predecessors. These advanced capabilities render LLMs uniquely suited for tackling complex chemical applications like the molecule generation and optimization tasks central to this review. For clarity and scope within this survey, we focus specifically on foundation models with at least 1 billion (1B) parameters.

2.2 Problem Definition

In this survey, we focus on two central tasks:

Problem Definition 1 (LLM-centric Molecule Generation). This task leverages LLMs as the core generative engine for the de novo design of novel molecular structures based on specified input instructions.

Problem Definition 2 (LLM-centric Molecule Optimization). This task leverages LLMs
to modify or edit a given input molecule, aiming to enhance one or more of its properties while often preserving essential structural characteristics.

As illustrated in Fig. 2, for both tasks, the input prompt provided to the LLM typically comprises three key components: (1) Instruction (I): a textual component that defines the primary guidance and objectives of the task. (2) Few-Shot Examples (Efs) (optional): a small set of input-output examples relevant to the task, provided to facilitate in-context learning. (3) Property Constraints (Cp) (optional): explicit desired values, ranges, or thresholds for specific molecular properties. While these input components are common to both tasks, their specific content and function differ significantly. For Molecule Generation, the Instruction I typically consists of a natural language description of the desired molecular characteristics or a general task definition. The objective is to generate a chemically valid molecular representation (e.g., a SMILES string SM) that aligns with this instruction and any provided property constraints Cp, potentially guided by few-shot examples Efs. For Molecule Optimization, the Instruction I serves a more specific purpose.
It not only outlines the optimization objectives but also crucially includes an initial molecule Mx that requires modification. This initial molecule can be represented in various formats, such as a 1D sequence (e.g., SMILES), a 2D graph, or 3D coordinates (see Appendix A for details). The instruction typically specifies which properties should be improved. The objective is to generate a chemically valid modified molecule (e.g., SMy) that enhances the desired properties of Mx while adhering to any specified constraints Cp, potentially guided by few-shot examples Efs.

Figure 1: A Taxonomy of LLM-Centric Molecular Discovery.
- Generation
  - w/o Tuning: In-Context Learning: LLM4GraphGen (Yao et al., 2024), MolReGPT (Li et al., 2024c), FrontierX (Srinivas and Runkana, 2024)
  - w/ Tuning:
    - Supervised Fine-Tuning: Mol-Instructions (Fang et al., 2023), LlaSMol (Yu et al., 2024a), ChemLLM (Zhang et al., 2024a), ICMA (Li et al., 2024b), MolReFlect (Li et al., 2024d), ChatMol (Fan et al., 2025), PEIT-LLM (Lin et al., 2025), NatureLM (Xia et al., 2025), SynLlama (Sun et al., 2025), TOMG-Bench (Li et al., 2024a), UniMoT (Zhang et al., 2024b)
    - Preference Tuning: Div-SFT (Jang et al., 2024), Mol-MoE (Calanzone et al., 2025), SmileyLlama (Cavanagh et al., 2024), ALMol (Gkoumas, 2024), Less for More (Gkoumas and Liakata, 2024), Mol-LLM (Lee et al., 2025)
- Optimization
  - w/o Tuning:
    - Zero-Shot Prompting: LLM-MDE (Bhattacharya et al., 2024), MOLLEO (Wang et al., 2025)
    - In-Context Learning: CIDD (Gao et al., 2025b), LLM-EO (Lu et al., 2024), MOLLM (Ran et al., 2025), ChatDrug (Liu et al., 2024c), Re2DF (Le and Chawla, 2024), BOPRO (Agarwal et al., 2025)
  - w/ Tuning:
    - Supervised Fine-Tuning: MultiMol (Yu et al., 2025), DrugAssist (Ye et al., 2025), GeLLM3O (Dey et al., 2025), DrugLLM (Liu et al., 2024d), LLM-Enhanced GA (Bedrosian et al., 2024), MolX-Enhanced LLM (Le et al., 2024), TOMG-Bench (Li et al., 2024a)
    - Preference Tuning: NatureLM (Xia et al., 2025)
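The three input components above can be made concrete with a small sketch. The following is a minimal, illustrative prompt builder (the function name and field labels are our own, not taken from any cited system), shown assembling an optimization-style prompt in the spirit of Fig. 2:

```python
def build_prompt(instruction, few_shot=None, constraints=None, molecule=None):
    """Assemble an LLM prompt from the components defined above.

    instruction : the task Instruction (I)
    few_shot    : optional list of (query, response) examples (E_fs)
    constraints : optional dict of property constraints (C_p)
    molecule    : optional initial SMILES (M_x), present for optimization tasks
    """
    parts = [f"Instruction: {instruction}"]
    if molecule:
        parts.append(f"Input molecule (SMILES): {molecule}")
    if constraints:
        parts.append("Property constraints: "
                     + "; ".join(f"{k} {v}" for k, v in constraints.items()))
    for query, response in few_shot or []:
        parts.append(f"Example query: {query}\nExample response: {response}")
    parts.append("Response (SMILES):")
    return "\n".join(parts)

# An optimization-style prompt: instruction + M_x + C_p + one few-shot example.
prompt = build_prompt(
    "Modify the molecule to increase hydrophobicity while keeping it similar.",
    few_shot=[("Modify COc1ccccc1 to increase hydrophobicity.", "CCOc1ccccc1")],
    constraints={"Tanimoto similarity": ">= 0.6"},
    molecule="CC(C)C(=O)O",
)
```

For molecule generation, the same builder would simply be called without the `molecule` argument, since no initial structure Mx is provided.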
2.3 Learning Paradigms

The application of LLMs to molecular discovery tasks, as depicted in the taxonomy in Fig. 1, can be broadly categorized based on whether the model’s parameters are updated for the specific task. This distinction defines two
primary learning paradigms:

Without LLM Tuning: These methods utilize pre-trained LLMs directly, guiding their behavior solely through the input prompt I without modifying the model’s weights. This paradigm primarily encompasses strategies like Zero-Shot Prompting, where the LLM operates based on instructions alone, and In-Context Learning (ICL), where few-shot examples provided within the prompt guide the model’s responses. These approaches avoid computationally expensive training but rely heavily on the LLM’s inherent capabilities and effective prompt engineering.

With LLM Tuning: These methods involve adapting the pre-trained LLM by further training and updating its parameters to specialize it for molecular tasks or align its outputs with desired objectives. This typically includes Supervised Fine-Tuning (SFT), where the model learns from labeled task-specific datasets, and subsequent Preference Tuning (or Alignment), where the model is refined based on feedback. While tuning can significantly enhance performance, it requires curated data and computational resources.

3 Molecule Generation

Molecule generation, the computational creation of novel molecular structures, is a cornerstone of modern drug discovery and materials science (Elton et al., 2019). This section reviews recent advances in LLM-centric molecule generation, primarily categorizing approaches based on the learning paradigms defined in Section 2.3.

3.1 Molecule Generation without Tuning

In-Context Learning: Since Zero-Shot Prompting is challenging for general-purpose LLMs due to their lack of specialized chemical knowledge, most successful applications in this paradigm heavily rely on ICL to provide specific guidance. For instance, FrontierX (Srinivas and Runkana, 2024) uses knowledge-augmented prompting, supplying detailed instructions alongside few-shot examples within the prompt to guide de novo design effectively.
Similarly, LLM4GraphGen (Yao et al., 2024) explores property-based generation by prompting LLMs with target properties and relevant molecular examples, evaluating performance under different prompting strategies, including few-shot ICL. Recognizing the importance of example quality, MolReGPT (Li et al., 2024c) incorporates Retrieval-Augmented Generation (RAG), dynamically retrieving highly relevant molecule-caption pairs to serve as more effective few-shot context, thereby boosting ICL performance.

Figure 2: Overview of LLM-Centric Molecular Discovery. Left: typical input components (Instruction, Few-Shot Examples, Property Constraints) for molecule generation and optimization. Right: core learning paradigms for applying LLMs: Zero-Shot Prompting & In-Context Learning, Supervised Fine-Tuning, and Preference Tuning.

3.2 Molecule Generation with Tuning

Supervised Fine-Tuning: While non-tuning methods leverage pre-trained knowledge effectively, their capabilities can be
limited for highly specialized or complex generation tasks. SFT addresses this by adapting pre-trained LLMs specifically for molecule generation on labeled datasets, typically pairs of instructions and target molecular representations. Although early explorations demonstrated the viability of SFT using smaller PLMs such as MolGPT (Bagal et al., 2021) and MolT5 (Edwards et al., 2022), current research focuses on harnessing large foundation models.

A predominant SFT strategy is the curation of large-scale, high-quality instruction datasets to instill chemical knowledge into general-purpose LLMs (as shown in Fig. 5). Initiatives such as LlaSMol (Yu et al., 2024a) with its SMolInstruct dataset, ChemLLM (Zhang et al., 2024a) with ChemData, Mol-Instructions (Fang et al., 2023) covering broader biomolecular text, and the OpenMolIns dataset from TOMG-Bench (Li et al., 2024a) all exemplify this trend. These efforts fine-tune models like LLaMA-2-7B (Touvron et al., 2023) and Mistral-7B (Jiang et al., 2023) with LoRA (Hu et al., 2021) to enhance instruction following and performance on the molecule generation task.

Beyond broad instruction tuning, SFT methodologies also address specific challenges in molecule generation. A significant hurdle is ensuring that generated molecules precisely meet complex constraints. ChatMol (Fan et al., 2025) directly addresses this limitation by using a numerical enhancement technique, significantly improving the model’s fidelity to specified quantitative property values. Concurrently, SynLlama (Sun et al., 2025) tackles synthetic feasibility to generate complete synthetic pathways. Other innovative SFT strategies include integrating dynamic context directly into the fine-tuning process; ICMA (Li et al., 2024b) and MolReFlect (Li et al., 2024d) propose In-Context Molecule Tuning (ICMT), which fine-tunes the LLM with relevant retrieved examples.
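Concretely, an SFT instance for instruction-tuned molecule generation is an instruction–target pair serialized for training. The sketch below shows a hypothetical record in the general style of such datasets; the field names and delimiter format are illustrative, not the actual schema of SMolInstruct or Mol-Instructions:

```python
import json

# Hypothetical instruction-target pair (illustrative schema).
record = {
    "instruction": "The molecule is the D-enantiomer of glyceric acid. "
                   "Write its SMILES representation.",
    "output": "C([C@H](C(=O)O)O)O",
}

# One JSONL line of the dataset.
jsonl_line = json.dumps(record)

# A common formatting convention: concatenate prompt and response into one
# training string, with the loss typically computed only on response tokens.
training_text = (f"### Instruction:\n{record['instruction']}\n"
                 f"### Response:\n{record['output']}")
```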
Furthermore, PEIT-LLM (Lin et al., 2025) proposes a two-step Property Enhanced Instruction Tuning (PEIT) framework, first synthesizing instruction data with a multi-modal model, then using it to fine-tune LLMs for tasks like multi-constraint generation. NatureLM (Xia et al., 2025) demonstrates the application of SFT on models pre-trained across multiple scientific domains for tasks including text-instructed molecule generation.

However, the SFT methods discussed above primarily operate on text-based representations (like SMILES or SELFIES), which inherently struggle to explicitly encode rich structural information crucial for chemistry. Multi-modal SFT approaches aim to bridge this gap by incorporating these richer data types. UniMoT (Zhang et al., 2024b) exemplifies a solution by introducing a novel molecule tokenizer. Leveraging Vector Quantization (VQ) and a Causal Q-Former, this component converts graph-based molecular features into discrete "molecule tokens", enabling unified autoregressive processing of text and graph-derived molecular information.

Preference Tuning: Following SFT, which primarily teaches models to mimic static input-output patterns from datasets, Preference Tuning techniques offer further refinement by employing feedback-driven learning to shape LLM outputs towards desired characteristics. In molecule generation, this feedback is typically incorporated in two main ways: (1) RL-based methods (Sutton et al., 1998) optimize the LLM (policy) using a scalar reward signal derived from evaluating generated molecules against desired criteria. (2) Offline methods like Direct Preference Optimization (DPO) learn from preference pairs ("chosen" vs "rejected") of molecules, training the
LLM to assign higher likelihoods to the preferred candidates based on comparative evaluations.

SmileyLlama (Cavanagh et al., 2024) utilizes DPO after SFT to significantly improve adherence to specified property constraints by learning from preferences between correctly and incorrectly generated molecules. Mol-MoE (Calanzone et al., 2025) uses a preference objective to train a Mixture-of-Experts router for molecule generation. Furthermore, Div-SFT (Jang et al., 2024), after an initial SFT stage, employs RL with a reward function explicitly designed to maximize structural diversity among the generated molecules. Similarly, contrastive methods like Contrastive Preference Optimization (CPO) (Xu et al., 2024) have been used to refine the quality and relevance of generated molecules based on preference data comparing desired targets against less optimal alternatives, proving effective even with limited data (Gkoumas, 2024; Gkoumas and Liakata, 2024).

Preference Tuning is not limited to text-only input but can also handle multi-modal inputs after SFT. These approaches focus on improving how the model utilizes structural information, although this remains a more nascent area of research. For example, Mol-LLM (Lee et al., 2025) demonstrates better leveraging of 2D graph inputs through Molecular Structure Preference Optimization (MolPO). After an initial SFT phase involving graph inputs, MolPO further trains the LLM using preference pairs where the distinction between "chosen" and "rejected" outputs is based on the correctness of the input molecular graph conditioning the generation. This preference learning implicitly guides the model to better integrate and leverage the provided structural information during processing.

4 Molecule Optimization

Molecule optimization is the task of refining molecular structures to improve one or more desired properties, such as solubility, binding affinity, or synthetic accessibility.
Unlike molecule generation, optimization starts with an initial molecule and proposes targeted structural modifications to achieve specific goals. This section summarizes LLM-centric molecule optimization methods, with a focus on how different learning paradigms (see Section 2.3) are leveraged to guide optimization.

4.1 Molecule Optimization without Tuning

Zero-Shot Prompting: Zero-Shot Prompting leverages the pre-trained capabilities of LLMs to modify input molecules according to natural language instructions, without providing specific examples in the prompt. This setting assumes that the model can interpret molecular structure (often via SMILES) and property-related text well enough to perform molecule optimization. For example, LLM-MDE (Bhattacharya et al., 2024) guides optimization with natural language prompts that specify desired property changes and structural constraints, enabling controlled modifications to given parent molecules. MOLLEO (Wang et al., 2025), on the other hand, integrates LLMs into an evolutionary framework inspired by population-based algorithms (Jensen, 2019). It uses prompt-based sampling to generate candidates through mutations and crossovers, while applying filtering steps to enforce structural similarity. These methods demonstrate the flexibility of zero-shot prompting in expressing diverse optimization goals, though they often struggle with precise control in multi-objective settings.

In-Context Learning: In contrast, ICL incorporates examples of previous molecular edits into the prompt. This allows the LLM to learn optimization strategies by modifying new molecules in ways consistent with observed property improvements or structural changes. CIDD (Gao et al.,
2025b) structures molecule optimization into a multi-step pipeline: interaction analysis, design, and reflection. Each step is guided by prompts derived from interaction profiles, and during the design step, previous designs and reflections are provided to make better modifications.

Both LLM-EO (Lu et al., 2024) and MOLLM (Ran et al., 2025) integrate LLMs into an Evolutionary Algorithm (EA) framework through in-context prompting. LLM-EO specifically targets transition metal complexes and guides optimization through prompts that include both objectives and examples of successful or failed complexes, enabling iterative improvement across generations. MOLLM eliminates external operators entirely, using the LLM to perform all genetic operations. The model is guided by structured prompt templates containing optimization goals, molecular context, and historical experience. It includes modules for candidate selection (via Pareto and scalarized scoring) and prompt construction, all designed for effective in-context molecule refinement.

Retrieval-augmented prompting further strengthens ICL by retrieving structurally similar and high-performing molecules from a given database. ChatDrug (Liu et al., 2024c) retrieves structurally similar molecules and incorporates this information into the prompt context, allowing the LLM to iteratively propose refinements based on feedback. Re2DF (Le and Chawla, 2024) enhances this paradigm by integrating chemical validity feedback via RDKit (Landrum et al., 2013). When invalid molecules are generated, the resulting error messages are used as feedback, closing the loop and guiding the LLM toward valid outputs. Additionally, recent work by BOPRO (Agarwal et al., 2025) combines ICL with Bayesian optimization. A surrogate model scores generated candidates and proposes updated prompts that include high-quality examples from the search history.
The LLM then uses these prompts to generate new SMILES strings, forming a feedback-driven, example-conditioned optimization cycle.

4.2 Molecule Optimization with Tuning

Supervised Fine-Tuning: SFT adapts pre-trained LLMs to molecule optimization tasks by training on curated datasets that pair molecular inputs with corresponding optimized outputs, under explicit property-based instructions. These datasets often include transformation examples where the input molecule is associated with property modification goals (e.g., improving solubility or binding affinity) and the corresponding optimized molecules. Through such supervision, the model learns how to perform controlled structural edits conditioned on specific objectives.

Several recent methods leverage SFT to improve the ability of LLMs to conduct molecule edits. DrugAssist (Ye et al., 2025) fine-tunes LLaMA-2-7B-Chat using a curated instruction dataset, MolOpt-Instructions, and adopts a multi-task learning strategy that combines general conversational data and molecule-specific instructions, helping preserve interactivity while learning task-specific patterns. However, its focus on single- and dual-property tasks limits scalability to more complex objectives. To address this limitation, GeLLM3O (Dey et al., 2025) proposes an instruction-tuned framework for multi-property optimization. It introduces MuMOInstruct, a dataset curated for diverse objectives, and trains both specialist and generalist models. The generalist variant shows strong generalization to novel out-of-distribution tasks without retraining, demonstrating potential for instruction-tuned LLMs as flexible optimization engines. In addition, MultiMol (Yu et al., 2025) represents a collaborative framework combining a fine-tuned worker model and a research agent. The worker, trained on over one million
molecules, reconstructs SMILES based on scaffold-property prompts and modulates them during inference for property optimization. The research agent (GPT-4o) extracts structure–property patterns from literature and ranks candidates using regression-based scoring, ensuring consistency with domain-specific knowledge.

Transformer-based chemical language models (CLMs) (Ross et al., 2022, 2024; Wu et al., 2024; Dai et al., 2025; Liu et al., 2025d) have demonstrated strong potential for molecule optimization tasks. Unlike prior models that rely on raw SMILES sequences, DrugLLM (Liu et al., 2024d) introduces a group-based molecular representation (GMR) that encodes SMILES strings to align structure and semantics. It adopts an autoregressive training objective to model the generative process of molecular modifications conditioned on property descriptions or prior examples.

SFT also plays a key role in population-based optimization frameworks. LLM-Enhanced GA (Bedrosian et al., 2024) proposes an iterative process in which new candidates are generated via prompt-based sampling from high-performing molecules, replacing traditional mutation and crossover. Explicit oracle modeling is incorporated through supervised fine-tuning on evaluated molecules when performance stagnates, allowing the LLM to progressively refine its understanding of structure–property relationships.
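The population loop underlying such LLM-enhanced genetic algorithms can be sketched in a few lines. In this toy version, `llm_propose` is a stub standing in for prompt-based sampling from an LLM conditioned on high-performing parents, `oracle` is a stand-in property scorer, and the stagnation-triggered fine-tuning step is omitted:

```python
import random

random.seed(0)

def oracle(smiles: str) -> float:
    """Toy property scorer standing in for a real oracle (e.g., docking, QED).
    Here it simply counts carbons, so longer chains score higher."""
    return smiles.count("C")

def llm_propose(parents):
    """Stand-in for prompt-based LLM sampling: in a real system, the parents
    would be formatted into a prompt and the LLM would return a new SMILES.
    Here we just perturb a randomly chosen parent."""
    return random.choice(parents) + "C"

def optimize(seed_pool, generations=5, population=8, keep=4):
    pool = list(seed_pool)
    for _ in range(generations):
        # Rank by oracle score and retain the best candidates as parents.
        pool.sort(key=oracle, reverse=True)
        parents = pool[:keep]
        # LLM proposals replace classical mutation/crossover operators.
        children = [llm_propose(parents) for _ in range(population - len(parents))]
        pool = parents + children
    return max(pool, key=oracle)

best = optimize(["CCO", "CCC", "CCN"])
```

Because the top candidates are carried over each generation, the best oracle score is non-decreasing, mirroring the elitist selection typical of these frameworks.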
Beyond text-only molecule optimization, multi-modal molecule optimization incorporates structural information such as molecular graphs and 3D geometries. These additional modalities enable more accurate modeling of structure–property relationships and improve control over chemical validity (Zhang et al., 2024c; Lin et al., 2024; Nakamura et al., 2025).

Figure 3: A Taxonomy of Benchmarking & Evaluation in Molecule Discovery.
- Datasets
  - Pretraining-Only: ZINC (Irwin et al., 2012), PubChem (Kim et al., 2016, 2019, 2025), ChemData (Zhang et al., 2024a), MuMOInstruct (Dey et al., 2025), Mol-Instructions (Fang et al., 2023)
  - Benchmark-Only: MoleculeNet (Wu et al., 2018), ChemBench (Mirza et al., 2024), MOSES (Polykovskiy et al., 2020), TOMG-Bench (Li et al., 2024a)
  - Pretraining & Benchmark: ChEMBL (Gaulton et al., 2012), ChEBI-20 (Edwards et al., 2021), QM9 (Pinheiro et al., 2020), CrossDocked2020 (Francoeur et al., 2020), Dockstring (García-Ortegón et al., 2022), MolOpt-Instructions (Ye et al., 2025), L+M-24 (Edwards et al., 2024b), SMolInstruct (Yu et al., 2024b), OGBG-MolHIV (Hu et al., 2020)
- Metrics
  - Structure-Based
    - Validity & Similarity: Validity (Polykovskiy et al., 2020), EM (Rajpurkar et al., 2016), BLEU (Papineni et al., 2002), Levenshtein (Levenshtein, 1966), FTS (MACCS (Durant et al., 2002), RDK (Landrum et al., 2013), Morgan (Morgan, 1965)), FCD (Preuer et al., 2018)
    - Diversity & Uniqueness: NCircle (Jang et al., 2024), IntDiv (Benhenda, 2017), Novel Rate (Brown et al., 2019), Unique@1k (Wang et al., 2023), Unique@10k (Bagal et al., 2021)
  - Property-Based
    - Single-Property: LogP (Hansch et al., 1968), TPSA (Ertl et al., 2000), SA score (Ertl and Schuffenhauer, 2009), QED (Bickerton et al., 2012)
    - Multi-Property: Success Rate under Constraints (Jin et al., 2020), Pareto Optimality (Pareto, 1919), Composite Score (Jin et al., 2020)
MolX-Enhanced LLM (Le et al., 2024) exemplifies this approach with a framework that integrates SMILES strings, 2D molecular graphs, and handcrafted fingerprints into a unified embedding. It employs LLaMA-2-7B as the base LLM and introduces a trainable multi-modal module, MolX, which is pre-trained with supervised molecule–text pairs and auxiliary tasks to align molecular representations with the LLM’s textual input space.
Importantly, during fine-tuning, the use of graph encoders and fingerprint integration ensures that the model captures both global topology and substructural details, which are essential for chemically valid optimization. This suggests that fine-tuning an LLM into a multi-modal model can outperform generalist chemical LLMs.

Preference Tuning: Preference Tuning aims to adjust large language models to better follow human instructions, preferences, or task-specific goals (Park et al., 2025; Chen et al., 2025). In molecule optimization, alignment techniques help models generate molecules that meet specific optimization criteria more reliably. RL-based alignment techniques, such as DrugImproverGPT (Liu et al., 2025c) and ScaffoldGPT (Liu et al., 2025e), both built on Transformer-based architectures, explicitly incorporate reward functions to guide molecule optimization. Moving beyond reliance on direct supervised signals, NatureLM (Xia et al., 2025) augments its post-trained 8B model using DPO to improve molecule optimization across nine pharmacologically relevant properties. Instead of training on absolute labels or scalar rewards, the model is optimized using a curated dataset of 179.5k prompt–response preference pairs, where each instance presents a "preferred" and a "rejected" molecular output given the same prompt. By training with DPO, NatureLM demonstrates improved alignment with desirable molecular properties and generalizes preference-guided optimization across diverse chemical objectives.

5 Benchmarking and Evaluation

Rigorous benchmarking and comprehensive evaluation are crucial for tracking the progress of LLM-centric molecular discovery. This section provides an overview of the common resources and methodologies used, focusing on the datasets that form the basis of benchmarking efforts and the metrics applied for robust evaluation. Our discussion is structured around the taxonomy presented in Fig. 3.
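Several of the structure-based metrics catalogued in Fig. 3 reduce to simple string or set operations and can be sketched with the standard library alone. Chemical validity checks and fingerprint computation would in practice use a toolkit such as RDKit, so the `tanimoto` helper below assumes fingerprints are already expressed as key sets:

```python
def exact_match(pred: str, ref: str) -> bool:
    # EM: string equality; real evaluations canonicalize SMILES first.
    return pred == ref

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two molecular strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def uniqueness(samples) -> float:
    """Fraction of distinct strings among the generated samples."""
    return len(set(samples)) / len(samples)

def novelty(samples, training_set) -> float:
    """Fraction of generated strings not seen in the training set."""
    return sum(s not in training_set for s in samples) / len(samples)

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity between two fingerprint key sets."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)
```

These definitions match the intuitive meaning of the metric families discussed in Section 5.2; published benchmarks differ mainly in which canonicalization and fingerprint variants (MACCS, RDK, Morgan) they apply before the comparison.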
5.1 Datasets

A variety of datasets serve as the foundation for training and benchmarking LLMs in molecular discovery, differing in their primary utility. Pretraining-Only Datasets provide vast quantities of unlabeled molecular structures or general chemical knowledge, such as ZINC (Irwin et al., 2012) and PubChem (Kim et al., 2025), or large-scale instruction collections like ChemData (Zhang et al., 2024a). Benchmark-Only Datasets are smaller, curated collections designed for specific evaluation tasks. Examples include TOMG-Bench (Li et al., 2024a) for open-domain molecule generation, and MOSES (Polykovskiy et al., 2020) for de novo design benchmarking. A third category comprises datasets suitable for both Pre-training and Benchmark applications, offering a balance of scale and task-specificity. Notable examples include ChEMBL (Gaulton et al., 2012) for bioactivity data, and instruction datasets like SMolInstruct (Yu et al., 2024b). Further details and a comparative summary of these and other relevant datasets are available in Appendix B and Table 1.

5.2 Metrics

The performance of LLMs in molecular tasks is critically assessed using a diverse set of metrics, broadly categorized into structure-based and property-based evaluations, which are essential for quantifying success in both molecule generation and optimization. Structure-Based Metrics evaluate the intrinsic quality and diversity of molecular structures, including (1) Validity and Similarity metrics, which assess chemical correctness and resemblance to reference structures (e.g., validity rate, exact match); and (2) Diversity and Uniqueness metrics, which
quantify the variety and novelty of the generated outputs (e.g., uniqueness rate, novelty rate). Property-Based Metrics gauge how well molecules meet desired functional criteria, applied for (1) Single-Property evaluation focusing on individual targets like Quantitative Estimate of Drug-likeness (QED), Lipophilicity (LogP), Synthetic Accessibility (SA), and binding affinity; and (2) Multi-Property evaluation assessing performance across several objectives, often via composite scores or success rates under multiple constraints. A comprehensive catalogue and detailed discussion of these evaluation metrics are provided in Appendix C.

6 Conclusion and Future Work

This survey presents the first comprehensive review of recent advances in LLM-centric molecular discovery, covering both generation and optimization. We introduce a novel taxonomy distinguishing approaches based on different learning paradigms, specifically without LLM tuning (e.g., zero-shot prompting, in-context learning) versus with LLM tuning (e.g., supervised fine-tuning, preference tuning). This framework allows for a systematic analysis of how current strategies leverage LLM capabilities, revealing key trends, strengths, and limitations. The rapid progress in this field underscores LLMs’ transformative potential to accelerate scientific discovery in chemistry and related disciplines. However, several challenges and exciting opportunities remain for future research:

Trustworthy Generation and Hallucination Mitigation: While LLMs can generate chemically plausible molecules, they often produce outputs that are chemically invalid or factually incorrect without domain-specific supervision (Le and Chawla, 2024). This lack of transparency limits their applicability in high-stakes domains such as drug development (Ma et al., 2025).
While interpretable prompting and rationalization techniques (Xiao et al., 2025) offer promising solutions, controlled hallucinations may actually serve as a creativity mechanism, potentially uncovering novel molecular scaffolds inaccessible through conventional search methods (Edwards et al., 2024a; Yuan and Färber, 2025). The future challenge lies not in eliminating hallucinations entirely, but in developing frameworks that can distinguish between harmful fabrications and beneficial creative leaps.

LLM Agents for Interactive Discovery: LLMs are increasingly being integrated into agent-based frameworks, where they coordinate with external tools (e.g., retrosynthesis engines, docking software, or lab automation platforms) to complete multi-step discovery workflows (Feng et al., 2025; Liu et al., 2025a). Building robust LLM agents that can plan, reason, and interact with both humans and tools could enable more flexible and goal-directed molecular design (Gao et al., 2025a). These agents could potentially close the loop between computational prediction and experimental validation, accelerating the iterative discovery process.

Multi-Modal Modeling and Alignment: Incorporating multiple molecular modalities remains a core challenge. Most current LLM-based approaches typically treat modalities separately, with limited cross-modal interaction. Future work should prioritize architectures that unify these representations, allowing joint encoding and reasoning over chemical topology, geometry, and textual semantics (Lu et al., 2023; Pirnay et al., 2025). By developing sophisticated tokenization and fusion techniques that bridge discrete and continuous representations, future systems could achieve a more holistic understanding of chemical structures and properties, potentially leading to more accurate and innovative molecular designs.

Limitations

This survey focuses on the use of large language models for two core tasks in text-guided molecular discovery:
molecule generation and molecule optimization. These tasks represent the most direct applications of LLMs in molecular design and are the primary scope of current research. We are aware that LLMs can also significantly impact other important areas of molecular science (Sun et al., 2025), such as reaction prediction, retrosynthesis, protein–ligand modeling, and automated experimentation (Zhang et al., 2024d; Liu et al., 2024b, 2025b). Given the broad and rapidly evolving landscape, we leave a systematic review of these directions to future work. By narrowing the scope of this work, we provide a focused and detailed resource for researchers working on LLM-driven molecular design. In the future, we anticipate expanding this analysis to encompass these additional domains as the field continues to evolve.

References

Raghad AbuNasser. 2024. Large language models in drug discovery: A survey.

Dhruv Agarwal, Manoj Ghuhan Arivazhagan, Rajarshi Das, Sandesh Swamy, Sopan Khosla, and Rashmi Gangadharaiah. 2025. Searching for optimal solutions with llms via bayesian optimization. In The Thirteenth International Conference on Learning Representations.

Viraj Bagal, Rishal Aggarwal, PK Vinod, and U Deva Priyakumar. 2021. Molgpt: molecular generation using a transformer-decoder model. Journal of chemical information and modeling, 62(9):2064–2076.

Menua Bedrosian, Philipp Guevorguian, Tigran Fahradyan, Gayane Chilingaryan, Hrant Khachatrian, and Armen Aghajanyan. 2024. Small molecule optimization with large language models. In NeurIPS 2024 Workshop Foundation Models for Science: Progress, Opportunities, and Challenges.

Mostapha Benhenda. 2017. Chemgan challenge for drug discovery: can ai reproduce natural chemical diversity? arXiv preprint arXiv:1708.08227.

Debjyoti Bhattacharya, Harrison J Cassady, Michael A Hickner, and Wesley F Reinhart. 2024. Large language models as molecular design engines.
Journal of Chemical Information and Modeling, 64(18):7086–7096.

GR Bickerton, GV Paolini, J Besnard, S Muresan, and AL Hopkins. 2012. Quantifying the chemical beauty of drugs. Nature Chemistry, 4(2):90–98.

Nathan Brown, Marco Fiscato, Marwin HS Segler, and Alain C Vaucher. 2019. Guacamol: Benchmarking models for de novo molecular design. Journal of Chemical Information and Modeling, 59(3):1096–1108.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, and 1 others. 2020. Language models are few-shot learners. NeurIPS, 33:1877–1901.

Diego Calanzone, Pierluca D'Oro, and Pierre-Luc Bacon. 2025. Mol-moe: Training preference-guided routers for molecule generation. arXiv preprint arXiv:2502.05633.

Joseph M Cavanagh, Kunyang Sun, Andrew Gritsevskiy, Dorian Bagni, Thomas D Bannister, and Teresa Head-Gordon. 2024. Smileyllama: Modifying large language models for directed chemical space exploration. arXiv preprint arXiv:2409.02231.

Angelica Chen, Samuel D. Stanton, Frances Ding, Robert G. Alberstein, Andrew M. Watkins, Richard Bonneau, Vladimir Gligorijević, Kyunghyun Cho, and Nathan C. Frey. 2025. Generalists vs. specialists: Evaluating llms on highly-constrained biophysical sequence optimization tasks.

Yu Cheng, Yongshun Gong, Yuansheng Liu, Bosheng Song, and Quan Zou. 2021. Molecular design in drug discovery: a comprehensive review of deep generative models. Briefings in Bioinformatics, 22(6):bbab344.

Zhilian Dai, Jie Zhang, Songyou Zhong, Jiawei Fu, Yangyang Deng, Dan Zhang, Yichao Liu, and Peng Gao. 2025. A zero-shot single-point molecule optimization model: Mimicking medicinal chemists'
expertise.

Nicola De Cao and Thomas Kipf. 2018. Molgan: An implicit generative model for small molecular graphs. arXiv preprint arXiv:1805.11973.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.

Vishal Dey, Xiao Hu, and Xia Ning. 2025. Gellm3o: Generalizing large language models for multi-property molecule optimization. arXiv preprint arXiv:2502.13398.

JL Durant, BA Leland, DR Henry, and JG Nourse. 2002. Reoptimization of mdl keys for use in drug discovery. Journal of Chemical Information and Computer Sciences, 42(6):1273–1280.

Carl Edwards, Tuan Lai, Kevin Ros, Garrett Honke, Kyunghyun Cho, and Heng Ji. 2022. Translation between molecules and natural language. arXiv preprint arXiv:2204.11817.

Carl Edwards, Qingyun Wang, and Heng Ji. 2024a. Language+ molecules. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts, pages 14–20.

Carl Edwards, Qingyun Wang, Lawrence Zhao, and Heng Ji. 2024b. L+m-24: Building a dataset for language+ molecules@ACL 2024. arXiv preprint arXiv:2403.00791.

Carl Edwards, ChengXiang Zhai, and Heng Ji. 2021. Text2mol: Cross-modal molecule retrieval with natural language queries. In EMNLP, pages 595–607.

Daniel C Elton, Zois Boukouvalas, Mark D Fuge, and Peter W Chung. 2019. Deep learning for molecular design—a review of the state of the art. Molecular Systems Design & Engineering, 4(4):828–849.

Peter Ertl, Bernhard Rohde, and Paul Selzer. 2000. Fast calculation of molecular polar surface area as a sum of fragment-based contributions and its application to the prediction of drug transport properties. Journal of Medicinal Chemistry, 43(20):3714–3717.
Peter Ertl and Ansgar Schuffenhauer. 2009. Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. Journal of Cheminformatics, 1(1):8.

Chuanliu Fan, Ziqiang Cao, Zicheng Ma, Nan Yu, Yimin Peng, Jun Zhang, Yiqin Gao, and Guohong Fu. 2025. Chatmol: A versatile molecule designer based on the numerically enhanced large language model. arXiv preprint arXiv:2502.19794.

Yin Fang, Xiaozhuan Liang, Ningyu Zhang, Kangwei Liu, Rui Huang, Zhuo Chen, Xiaohui Fan, and Huajun Chen. 2023. Mol-instructions: A large-scale biomolecular instruction dataset for large language models. arXiv preprint arXiv:2306.08018.

Henri A Favre and Warren H Powell. 2014. Nomenclature of Organic Chemistry: IUPAC Recommendations and Preferred Names 2013. Royal Society of Chemistry.

Jiazhan Feng, Shijue Huang, Xingwei Qu, Ge Zhang, Yujia Qin, Baoquan Zhong, Chengquan Jiang, Jinxin Chi, and Wanjun Zhong. 2025. Retool: Reinforcement learning for strategic tool use in llms. arXiv preprint arXiv:2504.11536.

Paul G Francoeur, Tomohide Masuda, Jocelyn Sunseri, Andrew Jia, Richard B Iovanisci, Ian Snyder, and David R Koes. 2020. Three-dimensional convolutional neural networks and a cross-docked data set for structure-based drug design. Journal of Chemical Information and Modeling, 60(9):4200–4215.

Bowen Gao, Yanwen Huang, Yiqiao Liu, Wenxuan Xie, Wei-Ying Ma, Ya-Qin Zhang, and Yanyan Lan. 2025a. Pharmagents: Building a virtual pharma with large language model
agents. arXiv preprint arXiv:2503.22164.

Bowen Gao, Yanwen Huang, Yiqiao Liu, Wenxuan Xie, Wei-Ying Ma, Ya-Qin Zhang, and Yanyan Lan. 2025b. Pushing the boundaries of structure-based drug design through collaboration with large language models. arXiv preprint arXiv:2503.01376.

Miguel García-Ortegón, Gregor NC Simm, Austin J Tripp, José Miguel Hernández-Lobato, Andreas Bender, and Sergio Bacallado. 2022. Dockstring: easy molecular docking yields better benchmarks for ligand design. Journal of Chemical Information and Modeling, 62(15):3486–3502.

Anna Gaulton, Louisa J Bellis, A Patricia Bento, Jon Chambers, Mark Davies, Anne Hersey, Yvonne Light, Shaun McGlinchey, David Michalovich, Bissan Al-Lazikani, and 1 others. 2012. Chembl: a large-scale bioactivity database for drug discovery. Nucleic Acids Research, 40(D1):D1100–D1107.

Dimitris Gkoumas. 2024. Almol: Aligned language-molecule translation llms through offline preference contrastive optimisation. arXiv preprint arXiv:2405.08619.

Dimitris Gkoumas and Maria Liakata. 2024. Less for more: Enhanced feedback-aligned mixed llms for molecule caption generation and fine-grained nli evaluation. arXiv preprint arXiv:2405.13984.

Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. 2018. Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science, 4(2):268–276.

Daniele Grandi, Yash Patawari Jain, Allin Groom, Brandon Cramer, and Christopher McComb. 2025. Evaluating large language models for material selection. Journal of Computing and Information Science in Engineering, 25(2):021004.

Huijie Guo, Xudong Xing, Yongjie Zhou, Wenjiao Jiang, Xiaoyi Chen, Ting Wang, Zixuan Jiang, Yibing Wang, Junyan Hou, Yukun Jiang, and 1 others. 2025.
A survey of large language model for drug research and development. IEEE Access.

Corwin Hansch, John E Quinlan, and Gary L Lawrence. 1968. Linear free-energy relationship between partition coefficients and the aqueous solubility of organic liquids. The Journal of Organic Chemistry, 33(1):347–350.

Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, and 1 others. 2021. Lora: Low-rank adaptation of large language models. In ICLR.

Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118–22133.

John J Irwin, Teague Sterling, Michael M Mysinger, Erin S Bolstad, and Ryan G Coleman. 2012. Zinc: a free tool to discover chemistry for biology. Journal of Chemical Information and Modeling, 52(7):1757–1768.

Nikita Janakarajan, Tim Erdmann, Sarath Swaminathan, Teodoro Laino, and Jannis Born. 2024. Language models in molecular discovery. In Drug Development Supported by Informatics, pages 121–141. Springer.

Hyosoon Jang, Yunhui Jang, Jaehyung Kim, and Sungsoo Ahn. 2024. Can llms generate diverse molecules? towards alignment with structural diversity. arXiv preprint arXiv:2410.03138.

Jan H Jensen. 2019. A graph-based genetic algorithm and generative model/monte carlo tree search for the exploration of chemical space. Chemical Science, 10(12):3567–3572.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, and 1 others. 2023.
Mistral 7b. arXiv preprint arXiv:2310.06825.

Wengong Jin, Regina Barzilay, and Tommi Jaakkola. 2020. Multi-objective molecule generation using interpretable substructures. In International Conference on Machine Learning, pages 4849–4859. PMLR.

Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, and 1 others. 2019. Pubchem 2019 update: improved access to chemical data. Nucleic Acids Research, 47(D1):D1102–D1109.

Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, and 1 others. 2025. Pubchem 2025 update. Nucleic Acids Research, 53(D1):D1516–D1525.

Sunghwan Kim, Paul A Thiessen, Evan E Bolton, Jie Chen, Gang Fu, Asta Gindulyte, Lianyi Han, Jane He, Siqian He, Benjamin A Shoemaker, and 1 others. 2016. Pubchem substance and compound databases. Nucleic Acids Research, 44(D1):D1202–D1213.

Mario Krenn, Florian Häse, AkshatKumar Nigam, Pascal Friederich, and Alan Aspuru-Guzik. 2020. Self-referencing embedded strings (selfies): A 100% robust molecular string representation. Machine Learning: Science and Technology, 1(4):045024.

Greg Landrum and 1 others. 2013. Rdkit: A software suite for cheminformatics, computational chemistry, and predictive modeling. Greg Landrum, 8(31.10):5281.

Khiem Le and Nitesh V Chawla. 2024. Utilizing large language models in an iterative paradigm with domain feedback for molecule optimization. arXiv preprint arXiv:2410.13147.

Khiem Le, Zhichun Guo, Kaiwen Dong, Xiaobao Huang, Bozhao Nan, Roshni Iyer, Xiangliang Zhang, Olaf Wiest, Wei Wang, and Nitesh V Chawla. 2024. Molx: Enhancing large language models for molecular learning with a multi-modal extension. arXiv preprint arXiv:2406.06777.

Chanhui Lee, Yuheon Song, YongJun Jeong, Hanbum Ko, Rodrigo Hormazabal, Sehui Han, Kyunghoon Bae, Sungbin Lim, and Sungwoong Kim. 2025.
Mol-llm: Generalist molecular llm with improved graph utilization. arXiv preprint arXiv:2502.02810.

Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707–710.

Jiatong Li, Junxian Li, Yunqing Liu, Dongzhan Zhou, and Qing Li. 2024a. Tomg-bench: Evaluating llms on text-based open molecule generation. arXiv preprint arXiv:2412.14642.

Jiatong Li, Wei Liu, Zhihao Ding, Wenqi Fan, Yuqiang Li, and Qing Li. 2024b. Large language models are in-context molecule learners. arXiv preprint arXiv:2403.04197.

Jiatong Li, Yunqing Liu, Wenqi Fan, Xiao-Yong Wei, Hui Liu, Jiliang Tang, and Qing Li. 2024c. Empowering molecule discovery for molecule-caption translation with large language models: A chatgpt perspective. IEEE Transactions on Knowledge and Data Engineering.

Jiatong Li, Yunqing Liu, Wei Liu, Jingdi Le, Di Zhang, Wenqi Fan, Dongzhan Zhou, Yuqiang Li, and Qing Li. 2024d. Molreflect: Towards in-context fine-grained alignments between molecules and texts. arXiv preprint arXiv:2411.14721.

Chang Liao, Yemin Yu, Yu Mei, and Ying Wei. 2024. From words to molecules: A survey of large language models in chemistry. arXiv preprint arXiv:2402.01439.

Xiaohan Lin, Yijie Xia, Yupeng Huang, Shuo Liu, Jun Zhang, and Yi Qin Gao. 2024. Versatile molecular editing via multimodal and group-optimized generative learning.

Xuan Lin, Long Chen, Yile Wang, Xiangxiang Zeng, and Philip S. Yu. 2025. Property enhanced instruction tuning for multi-task molecule generation with large language models. arXiv preprint arXiv:2412.18084.

Bang Liu,
Xinfeng Li, Jiayi Zhang, Jinlin Wang, Tanjin He, Sirui Hong, Hongzhang Liu, Shaokun Zhang, Kaitao Song, Kunlun Zhu, and 1 others. 2025a. Advances and challenges in foundation agents: From brain-inspired intelligence to evolutionary, collaborative, and safe systems. arXiv preprint arXiv:2504.01990.

Gang Liu, Michael Sun, Wojciech Matusik, Meng Jiang, and Jie Chen. 2024a. Multimodal large language models for inverse molecular design with retrosynthetic planning. arXiv preprint arXiv:2410.04223.

Pengfei Liu, Jun Tao, and Zhixiang Ren. 2024b. Scientific language modeling: A quantitative review of large language models in molecular science. arXiv preprint arXiv:2402.04119, page 3.

Pengfei Liu, Jun Tao, and Zhixiang Ren. 2025b. A quantitative analysis of knowledge-learning preferences in large language models in molecular science. Nature Machine Intelligence, pages 1–13.

Shengchao Liu, Weili Nie, Chengpeng Wang, Jiarui Lu, Zhuoran Qiao, Ling Liu, Jian Tang, Chaowei Xiao, and Animashree Anandkumar. 2023. Multi-modal molecule structure–text model for text-based retrieval and editing. Nature Machine Intelligence, 5(12):1447–1457.

Shengchao Liu, Jiongxiao Wang, Yijin Yang, Chengpeng Wang, Ling Liu, Hongyu Guo, and Chaowei Xiao. 2024c. Conversational drug editing using retrieval and domain feedback. In The Twelfth International Conference on Learning Representations.

Xianggen Liu, Yan Guo, Haoran Li, Jin Liu, Shudong Huang, Bowen Ke, and Jiancheng Lv. 2024d. Drugllm: Open large language model for few-shot molecule generation. arXiv preprint arXiv:2405.06690.

Xuefeng Liu, Songhao Jiang, Siyu Chen, Zhuoran Yang, Yuxin Chen, Ian Foster, and Rick Stevens. 2025c. Drugimprovergpt: A large language model for drug optimization with fine-tuning via structured policy optimization. arXiv preprint arXiv:2502.07237.

Xuefeng Liu, Songhao Jiang, Bo Li, and Rick Stevens. 2025d. Controllablegpt: A ground-up designed controllable gpt for molecule optimization.
arXiv preprint arXiv:2502.10631.

Xuefeng Liu, Songhao Jiang, and Rick Stevens. 2025e. Scaffoldgpt: A scaffold-based large language model for drug improvement. arXiv preprint arXiv:2502.06891.

Hao Lu, Zhiqiang Wei, Xuze Wang, Kun Zhang, and Hao Liu. 2023. Graphgpt: A graph enhanced generative pretrained transformer for conditioned molecular generation. International Journal of Molecular Sciences, 24(23):16761.

Jieyu Lu, Zhangde Song, Qiyuan Zhao, Yuanqi Du, Yirui Cao, Haojun Jia, and Chenru Duan. 2024. Generative design of functional metal complexes utilizing the internal knowledge of large language models. arXiv preprint arXiv:2410.18136.

Wenjie Ma, Jingxuan He, Charlie Snell, Tyler Griggs, Sewon Min, and Matei Zaharia. 2025. Reasoning models can be effective without thinking. arXiv preprint arXiv:2504.09858.

A Mirza, N Alampara, S Kunchapu, B Emoekabu, A Krishnan, M Wilhelmi, M Okereke, J Eberhardt, AM Elahi, M Greiner, and 1 others. 2024. Are large language models superhuman chemists? arXiv preprint arXiv:2404.01475.

HL Morgan. 1965. The generation of a unique machine description for chemical structures—a technique developed at chemical abstracts service. Journal of Chemical Documentation, 5(2):107–113.

Shogo Nakamura, Nobuaki Yasuo, and Masakazu Sekijima. 2025. Molecular optimization using a conditional transformer for reaction-aware compound exploration with reinforcement learning. Communications Chemistry, 8(1):40.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting
of the Association for Computational Linguistics, pages 311–318.

Vilfredo Pareto. 1919. Manuale di economia politica con una introduzione alla scienza sociale, volume 13. Società editrice libraria.

Jinyeong Park, Jaegyoon Ahn, Jonghwan Choi, and Jibum Kim. 2025. Mol-air: Molecular reinforcement learning with adaptive intrinsic rewards for goal-directed molecular generation. Journal of Chemical Information and Modeling, 65(5):2283–2296.

Gabriel A Pinheiro, Johnatan Mucelini, Marinalva D Soares, Ronaldo C Prati, Juarez LF Da Silva, and Marcos G Quiles. 2020. Machine learning prediction of nine molecular properties based on the smiles representation of the qm9 quantum-chemistry dataset. The Journal of Physical Chemistry A, 124(47):9854–9866.

Jonathan Pirnay, Jan G Rittig, Alexander B Wolf, Martin Grohe, Jakob Burger, Alexander Mitsos, and Dominik G Grimm. 2025. Graphxform: graph transformer for computer-aided molecular design. Digital Discovery, 4(4):1052–1065.

Daniil Polykovskiy, Alexander Zhebrak, Benjamin Sanchez-Lengeling, Sergey Golovanov, Oktai Tatanov, Stanislav Belyaev, Rauf Kurbanov, Aleksey Artamonov, Vladimir Aladinskiy, Mark Veselov, and 1 others. 2020. Molecular sets (moses): a benchmarking platform for molecular generation models. Frontiers in Pharmacology, 11:565644.

Kristina Preuer, Philipp Renz, Thomas Unterthiner, and 1 others. 2018. Fréchet chemnet distance: A metric for generative models for molecules in drug discovery. Journal of Chemical Information and Modeling, 58(9):1736–1741.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.

Mayk Caldas Ramos, Christopher J. Collison, and Andrew D. White. 2025. A review of large language models and autonomous agents in chemistry. Chemical Science.

Nian Ran, Yue Wang, and Richard Allmendinger. 2025.
MOLLM: multi-objective large language model for molecular design - optimizing with experts. arXiv preprint arXiv:2502.12845.

Jerret Ross, Brian Belgodere, Vijil Chenthamarakshan, Inkit Padhi, Youssef Mroueh, and Payel Das. 2022. Large-scale chemical language representations capture molecular structure and properties. Nature Machine Intelligence, 4(12):1256–1264.

Jerret Ross, Samuel Hoffman, Brian Belgodere, Vijil Chenthamarakshan, Youssef Mroueh, and Payel Das. 2024. Learning to optimize molecules with a chemical language model. In Annual Conference on Neural Information Processing Systems.

Sakhinana Sagar Srinivas and Venkataramana Runkana. 2024. Crossing new frontiers: Knowledge-augmented large language model prompting for zero-shot text-based de novo molecule design. arXiv preprint arXiv:2408.11866.

Kunyang Sun, Dorian Bagni, Joseph M Cavanagh, Yingze Wang, Jacob M Sawyer, Andrew Gritsevskiy, Oufan Zhang, and Teresa Head-Gordon. 2025. Synllama: Generating synthesizable molecules and their analogs with large language models. arXiv preprint arXiv:2503.12602.

Richard S Sutton, Andrew G Barto, and 1 others. 1998. Reinforcement learning: An introduction, volume 1. MIT Press Cambridge.

Xiangru Tang, Howard Dai, Elizabeth Knight, Fang Wu, Yunyang Li, Tianxiao Li, and Mark Gerstein. 2024. A survey of generative ai for de novo drug design: new frontiers in molecule and protein generation. Briefings in Bioinformatics, 25(4):bbae338.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, and 1 others. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Haorui Wang, Marta Skreta, Cher Tian Ser, Wenhao Gao,
Lingkai Kong, Felix Strieth-Kalthoff, Chenru Duan, Yuchen Zhuang, Yue Yu, Yanqiao Zhu, Yuanqi Du, Alan Aspuru-Guzik, Kirill Neklyudov, and Chao Zhang. 2025. Efficient evolutionary search over chemical space with large language models. In The Thirteenth International Conference on Learning Representations.

Y. Wang, H. Zhao, S. Sciabola, and W. Wang. 2023. cmolgpt: A conditional generative pre-trained transformer for target-specific de novo molecular generation. Molecules, 28(11):4430.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, and 1 others. 2022a. Emergent abilities of large language models. TMLR.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022b. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 35:24824–24837.

David Weininger. 1988. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. Journal of Chemical Information and Computer Sciences, 28(1):31–36.

Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. 2018. Moleculenet: a benchmark for molecular machine learning. Chemical Science, 9(2):513–530.

Zhenxing Wu, Odin Zhang, Xiaorui Wang, Li Fu, Huifeng Zhao, Jike Wang, Hongyan Du, Dejun Jiang, Yafeng Deng, Dongsheng Cao, and 1 others. 2024. Leveraging language model for advanced multiproperty molecular optimization via prompt engineering. Nature Machine Intelligence, pages 1–11.

Yingce Xia, Peiran Jin, Shufang Xie, Liang He, Chuan Cao, Renqian Luo, Guoqing Liu, Yue Wang, Zequn Liu, Yuan-Jyue Chen, and 1 others. 2025. Naturelm: Deciphering the language of nature for scientific discovery. arXiv preprint arXiv:2502.07527.

Meng Xiao, Xunxin Cai, Chengrui Wang, and Yuanchun Zhou. 2025.
m-kailin: Knowledge-driven agentic scientific corpus distillation framework for biomedical large language models training. arXiv preprint arXiv:2504.19565.

Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. 2024. Contrastive preference optimization: Pushing the boundaries of llm performance in machine translation. arXiv preprint arXiv:2401.08417.

Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, and Xia Hu. 2023. Harnessing the power of llms in practice: A survey on chatgpt and beyond. arXiv preprint arXiv:2304.13712.

Nianzu Yang, Huaijin Wu, Kaipeng Zeng, Yang Li, Siyuan Bao, and Junchi Yan. 2024. Molecule generation for drug design: a graph learning perspective. Fundamental Research.

Yang Yao, Xin Wang, Zeyang Zhang, Yijian Qin, Ziwei Zhang, Xu Chu, Yuekui Yang, Wenwu Zhu, and Hong Mei. 2024. Exploring the potential of large language models in graph generation. arXiv preprint arXiv:2403.14358.

Geyan Ye, Xibao Cai, Houtim Lai, Xing Wang, Junhong Huang, Longyue Wang, Wei Liu, and Xiangxiang Zeng. 2025. Drugassist: A large language model for molecule optimization. Briefings in Bioinformatics, 26(1):bbae693.

Botao Yu, Frazier N Baker, Ziqi Chen, Xia Ning, and Huan Sun. 2024a. Llasmol: Advancing large language models for chemistry with a large-scale, comprehensive, high-quality instruction tuning dataset. arXiv preprint arXiv:2402.09391.

Botao Yu, Frazier N. Baker, Ziqi Chen, Xia Ning, and Huan Sun.
2024b. Llasmol: Advancing large language models for chemistry with a large-scale, comprehensive, high-quality instruction tuning dataset. arXiv preprint arXiv:2402.09391.

Jiajun Yu, Yizhen Zheng, Huan Yee Koh, Shirui Pan, Tianyue Wang, and Haishuai Wang. 2025. Collaborative expert llms guided multi-objective molecular optimization. arXiv preprint arXiv:2503.03503.

Shuzhou Yuan and Michael Färber. 2025. Hallucinations can improve large language models in drug discovery. arXiv preprint arXiv:2501.13824.

Xiangxiang Zeng, Fei Wang, Yuan Luo, Seung-gu Kang, Jian Tang, Felice C Lightstone, Evandro F Fang, Wendy Cornell, Ruth Nussinov, and Feixiong Cheng. 2022. Deep generative molecular design reshapes drug discovery. Cell Reports Medicine, 3(12).

Di Zhang, Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Wanli Ouyang, and 1 others. 2024a. Chemllm: A chemical large language model. arXiv preprint arXiv:2402.06852.

Juzheng Zhang, Yatao Bian, Yongqiang Chen, and Quanming Yao. 2024b. Unimot: Unified molecule-text language model with discrete token representation. arXiv preprint arXiv:2408.00863.

Odin Zhang, Haitao Lin, Hui Zhang, Huifeng Zhao, Yufei Huang, Chang-Yu Hsieh, Peichen Pan, and Tingjun Hou. 2024c. Deep lead optimization: Leveraging generative ai for structural modification. Journal of the American Chemical Society, 146(46):31357–31370.

Qiang Zhang, Keyan Ding, Tianwen Lv, Xinda Wang, Qingyu Yin, Yiwen Zhang, Jing Yu, Yuhao Wang, Xiaotong Li, Zhuoyi Xiang, and 1 others. 2025. Scientific large language models: A survey on biological & chemical domains. ACM Computing Surveys, 57(6):1–38.

Yu Zhang, Xiusi Chen, Bowen Jin, Sheng Wang, Shuiwang Ji, Wei Wang, and Jiawei Han. 2024d. A comprehensive survey of scientific large language models and their applications in scientific discovery. arXiv preprint arXiv:2406.10833.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, and 1 others. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223.

Yizhen Zheng, Huan Yee Koh, Maddie Yang, Li Li, Lauren T May, Geoffrey I Webb, Shirui Pan, and George Church. 2024. Large language models in drug discovery and development: From disease mechanisms to clinical trials. arXiv preprint arXiv:2409.04481.

A Data Modalities for Molecular LLMs

LLMs used for molecular generation and optimization interface with structured molecular data in various modalities. Each modality offers distinct structural or physicochemical information. As shown in Fig. 4, commonly used molecular representations can be categorized into the following three formats:

•1D Sequence Representations (S): These are linear string encodings of molecular structures. Common formats include SMILES (Simplified Molecular Input Line Entry System) (Weininger, 1988) and SELFIES (Self-Referencing Embedded Strings) (Krenn et al., 2020). These representations are well-suited for LLMs due to their compatibility with token-based language modeling. Another format used in certain settings is the IUPAC nomenclature (Favre and Powell, 2014), which provides systematic names for molecules and is employed as an alternative or auxiliary textual representation in language modeling frameworks.

•2D Graph Representations (G): A molecule is represented as a graph G = (V, E), where nodes v ∈ V correspond to atoms and edges
e ∈ E correspond to chemical bonds. Node and edge features may encode atom types, bond orders, aromaticity, and other topological attributes. While not directly token-based, 2D graphs can be integrated via hybrid models that combine language and graph encoders, or serialized (e.g., via adjacency lists or graph traversal sequences) to interface with LLMs.

•3D Geometric Representations (X): These representations capture atomic coordinates in three-dimensional space. Formally, X = {(a_i, r_i)} for i = 1, ..., N, where a_i denotes the atomic species and r_i ∈ R^3 specifies the Cartesian coordinates of atom i. 3D information is essential for modeling stereochemistry, conformational preferences, and interaction potentials. Incorporating 3D data into LLMs typically requires transforming it into a sequence-compatible format or using auxiliary models to predict or refine 3D structures.

B Datasets

Datasets are crucial resources for advancing LLM-centric molecule design, serving extensively in both the training and evaluation phases of model development. Table 1 provides a comprehensive summary of commonly utilized molecule datasets, detailing their key features. For each dataset listed, the table specifies its Last Update year, approximate Scale (number of entries), whether it includes natural language Instruction components, and its suitability for Pretraining LLMs or as a Benchmark for evaluation. Furthermore, the table indicates the types of Molecule Representations available within each dataset, such as SMILES, IUPAC names, ready-to-dock formats (Dock), graph structures (Graph), 3D coordinates (3D), or formal chemical ontologies (Ontology). Finally, it highlights whether a dataset supports Generation or Optimization tasks, lists Other Tasks it is commonly used for (e.g., property prediction, translation), and provides a Link to access the resource.
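To make the 1D modality concrete, the sketch below splits a SMILES string into the atom, bond, and ring-closure tokens a language model would consume. The regex is a common simplification used in chemical language modeling work, not the tokenizer of any specific model surveyed here; production systems may instead use learned subword vocabularies over the same strings.

```python
import re

# Simplified SMILES token pattern: bracket atoms, two-letter halogens,
# common organic-subset atoms, aromatic atoms, bonds, branches, and
# ring-closure digits. Illustrative only, not an exhaustive grammar.
SMILES_TOKEN_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|[BCNOSPFI]|[bcnosp]|\(|\)|\.|=|#|-|\+|/|\\|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize_smiles(smiles: str) -> list[str]:
    """Split a SMILES string into tokens for language modeling."""
    tokens = SMILES_TOKEN_PATTERN.findall(smiles)
    # Sanity check: tokenization should cover the whole string losslessly,
    # so the tokens can be rejoined into the original SMILES.
    if "".join(tokens) != smiles:
        raise ValueError(f"Unrecognized characters in SMILES: {smiles!r}")
    return tokens

# Paracetamol, the example molecule shown in Figure 4:
print(tokenize_smiles("CC(=O)NC1=CC=C(C=C1)O"))
```

Note that ordering matters in the alternation: bracket atoms and the two-letter halogens (Br, Cl) must be tried before single-character atoms, otherwise "Cl" would be split into a carbon token followed by an invalid "l".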
The subsequent subsections categorize these datasets based on their primary application focus, aligning with the classification used in Section 5 of the main text.

B.1 Pretraining-Only Datasets

Pretraining-only datasets typically contain diverse molecular structures and associated property information, designed to support broad generalization capabilities when pretraining LLMs for downstream tasks. These datasets generally do not include explicit natural language instructions or task-specific labels for direct supervised learning of specific generation or optimization objectives.

•ZINC: ZINC (Irwin et al., 2012) is a public and comprehensive database containing over 20 million commercially available molecules presented in biologically relevant representations. These molecules can be downloaded in popular ready-to-dock formats and various subsets, making ZINC widely used for distribution learning-based and goal-oriented molecule generation tasks.

•PubChem: PubChem (Kim et al., 2016, 2019, 2025) serves as a vast public chemical information repository, holding over 750 million records. It covers a wide array of data, including chemical structures, identifiers, bioactivity outcomes, genes, proteins, and patents, and is organized into three interlinked databases: Substance (contributed chemical information), Compound (standardized unique structures), and BioAssay (biological experiment details).

Figure 4: Illustration of an example molecule (SMILES: CC(=O)NC1=CC=C(C=C1)O) and its representation in different data modalities. From left to right following the 2D chemical structure diagram: its 1D SMILES string representation, a simplified 2D graph view, and its 3D ball-and-stick model.

•ChemData: ChemData (Zhang et al., 2024a) is a large-scale dataset specifically curated for fine-tuning chemical LLMs, containing 7 million instruction query-response pairs. Derived from various online structural datasets like PubChem and ChEMBL, it encompasses a broad range of chemical domain knowledge and is frequently used for tasks in molecule understanding, chemical process reasoning, and other domain-specific applications.

•Mol-Instructions: Mol-Instructions (Fang et al., 2023) is a large-scale, diverse, and high-quality dataset designed for the biomolecular domain, featuring over 2 million carefully curated biomolecular instructions. It is structured around three core components: molecule-oriented instructions (148.4K across six tasks focusing on properties, reactions, and design), protein-oriented instructions (505K samples across five task categories related to protein structure, function, and design), and biomolecular text instructions (53K for bioinformatics and chemoinformatics NLP tasks like information extraction and question answering).

•MuMOInstruct: MuMOInstruct (Dey et al., 2025) is presented as the first high-quality instruction-tuning dataset focused on complex, multi-property molecular optimization tasks. Unlike datasets such as MolOpt-Instruction (Ye et al., 2025) that primarily target single- or dual-property tasks, MuMOInstruct emphasizes tasks involving at least three properties, facilitating the evaluation of LLMs in both in-domain and out-of-domain settings.

B.2 Benchmark-Only Datasets

Benchmark-only datasets are specifically curated for the evaluation of models, particularly in generative molecular tasks. These datasets often feature structured input-output pairs, such as instruction-molecule pairings, and are typically smaller in scale, manually verified, and tailored to specific evaluative purposes.

•MoleculeNet: A large-scale benchmark compendium, MoleculeNet (Wu et al., 2018) is derived from multiple public databases.
It comprises 17 curated datasets with over 700,000 compounds, represented textually (e.g., SMILES) and in 3D formats. Covering a wide array of properties categorized into quantum mechanics, physical chemistry, biophysics, and physiology, it serves as a standard for evaluating molecular property prediction models.

• ChemBench: ChemBench (Mirza et al., 2024) offers a comprehensive framework for benchmarking the chemical knowledge and reasoning abilities of LLMs. It consists of thousands of manually curated question-answer pairs from diverse sources, focusing on three core aspects: Calculation, Reasoning, and Knowledge.

• TOMG-Bench: As the first benchmark dedicated to the open-domain molecule generation capabilities of LLMs, TOMG-Bench (Text-based Open Molecule Generation Benchmark) (Li et al., 2024a) contains 45,000 samples. It is structured around three primary tasks: molecule editing (MolEdit), molecule optimization (MolOpt), and customized molecule generation (MolCustom).

• MOSES: MOSES (Molecular Sets) (Polykovskiy et al., 2020) is a task-specific resource designed for both training and benchmarking molecule generation models in drug discovery. Containing approximately 1.9 million molecules in SMILES format derived from the ZINC Clean Leads dataset, it also furnishes training, testing, and scaffold-split subsets, along with built-in evaluation metrics.

B.3 Datasets for Pretraining & Benchmark Applications

A distinct category of datasets offers the flexibility to be used both for pretraining LLMs and for subsequent benchmarking. These resources often combine substantial scale with features amenable to diverse evaluation scenarios.

Table 1: Summary of commonly used molecule datasets and their features. Dock denotes the "ready-to-dock" format; Ontology denotes the structured representation of the molecule; Captioning denotes molecule
captioning task; Docking denotes molecule docking (finding molecules that bind correctly to proteins); Translation denotes the translation from textual knowledge to molecular features; Conversion denotes the translation between different representations of a molecule's identity; Prediction denotes property prediction, forward reaction prediction, and retrosynthesis tasks; QM denotes hybrid quantum mechanics.

| Dataset | Last Update | Scale | Instruction | Pretraining | Benchmark | SMILES | IUPAC | Dock | Graph | 3D | Ontology | Generation | Optimization | Other Tasks | Link |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PubChem (Kim et al., 2016, 2019, 2025) | 2025 | 119M | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | Property Prediction & Biology Domain | Link |
| ChEMBL (Gaulton et al., 2012) | 2024 | >20M | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | Prediction & ML Benchmark | Link |
| CrossDocked2020 (Francoeur et al., 2020) | 2024 | 22.5M | ✗ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✓ | Docking Datasets | Link |
| ZINC (Irwin et al., 2012) | 2023 | >980M | ✗ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | Ligand Discovery | Link |
| Dockstring (García-Ortegón et al., 2022) | 2022 | >260k | ✗ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | Virtual Screening | Link |
| ChEBI-20 (Edwards et al., 2021) | 2021 | 33k | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | Translation & Classification & Captioning | Link |
| OGBG-MolHIV (Hu et al., 2020) | 2020 | ∼41k | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | Graph Property Prediction | Link |
| MOSES (Polykovskiy et al., 2020) | 2020 | ∼1.9M | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | De novo Design | Link |
| MoleculeNet (Wu et al., 2018) | 2019 | 700k | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | ML Benchmark | Link |
| QM9 (Pinheiro et al., 2020) | 2014 | 134k | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✓ | Hybrid QM/ML Modeling | Link |
| TOMG-Bench (Li et al., 2024a) | 2025 | 5k | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | Molecule Editing | Link |
| MuMOInstruct (Dey et al., 2025) | 2025 | 873k | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | — | Link |
| ChemData (Zhang et al., 2024a) | 2024 | 7M | ✓ | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | Conversion & Prediction & Reaction | Link |
| ChemBench (Mirza et al., 2024) | 2024 | 4k | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | Reaction Benchmark & Virtual Screening | Link |
| Mol-Instructions (Fang et al., 2023) | 2024 | 2M | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | Translation & Retrosynthesis | Link |
| MolOpt-Instructions (Ye et al., 2025) | 2024 | 1M | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | — | Link |
| L+M-24 (Edwards et al., 2024b) | 2024 | 148k | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | Captioning | Link |
| SMolInstruct (Yu et al., 2024b) | 2024 | 3.3M | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | Captioning & Prediction | Link |

• ChEMBL: ChEMBL (Gaulton et al., 2012) is a manually curated, open-access database focusing on drug-like bioactive molecules. It houses 5.4 million bioactivity measurements for over 1 million compounds and 5,200 protein targets, effectively integrating chemical, bioactivity, and genomic data to support drug discovery and the translation of genomic insights into therapeutics.

• ChEBI-20: ChEBI-20 (Edwards et al., 2021), derived from the ChEBI database, is a freely available,
manually curated dictionary of molecular entities concentrated on small chemical compounds. It includes over 20,000 molecules represented by SMILES strings, natural language descriptions, and ontology terms, and is widely employed in molecule generation and instruction-based tasks requiring chemical understanding.

• CrossDocked2020: CrossDocked2020 (Francoeur et al., 2020) is a large-scale dataset specifically geared towards structure-based drug design (SBDD). It features over 22 million 3D docked poses of protein-ligand pairs, making it a valuable resource for tasks like pocket-conditioned 3D molecule generation.

• Dockstring: Dockstring (García-Ortegón et al., 2022) provides a large-scale, well-curated dataset for molecular docking. It encompasses an extensive collection of docking scores and poses for more than 260,000 ligands against 58 medically relevant targets, and includes pharmaceutically relevant benchmark tasks such as virtual screening and the de novo design of selective kinase inhibitors.

• QM9: The QM9 (Quantum Mechanics 9) dataset (Pinheiro et al., 2020) is a public quantum chemistry resource containing approximately 134,000 small organic molecules (composed of H, C, N, O, F; up to nine non-hydrogen atoms). It provides SMILES representations, 3D geometries, and quantum chemical properties, widely utilized for training and evaluating molecular property prediction models.

• SMolInstruct: SMolInstruct (Yu et al., 2024b) is a large-scale, comprehensive, and high-quality dataset for instruction tuning LLMs in chemistry. It consists of 3.3 million language-molecule pairs and 1.6 million distinct molecules, covering four types of molecular representations and 14 different tasks, with molecules represented in SMILES or SELFIES format.

Figure 5: Visualization of the instruction dataset for the molecule generation and optimization tasks.
Molecule Generation. Query: "Build a molecule that meets the requirement: The molecule is a phenolate anion obtained by deprotonation of the 7-hydroxy group of noreugenin. It is the major microspecies at pH 7.3 (according to Marvin v6.2.0). It has a role as a plant metabolite. It is a conjugate base of noreugenin." Response: "Here is a potential molecule: CC1=CC(=O)C2=C([O-])C=C(O)C=C2O"
Molecule Optimization. Query: "Help me increase the water solubility value of the molecule Cc1cc(C(COCc2ccccc2)sc(C)(C)C)sn1" Response: "Here is a potential molecule: Cc1cc(C(N)COCc2ccccc2)sn1"

• OGBG-MolHIV: OGBG-MolHIV (Hu et al., 2020), part of the Open Graph Benchmark, is an open-access, task-specific dataset for binary molecular property prediction, specifically for classifying HIV inhibition. It contains 41,127 unique molecules in graph format, where nodes (atoms) have 9 numerical features and edges (bonds) have 3-dimensional features (type, stereochemistry, conjugation). It is derived from MoleculeNet and preprocessed using RDKit.

• MolOpt-Instructions: MolOpt-Instructions (Ye et al., 2025) is an instruction-based dataset tailored for molecule optimization, containing over 1 million molecule-molecule pairs. It was constructed by selecting molecules from ZINC and using MMPDB to generate and filter for highly similar pairs, covering six molecular properties including solubility, BBBP, and hERG inhibition.

• L+M-24: L+M-24 (Language + Molecules 24 Tasks) (Edwards et al., 2024b) is a large-scale, multi-task instruction dataset designed to leverage the benefits of natural language (compositionality, functionality, abstraction) in molecule design. Derived from PubChem and other sources, it contains over 148,000 language-molecule pairs spanning 24 distinct molecule design tasks across various application domains.

C Evaluation Metrics

In
the context of LLM-centric molecular generation and optimization, evaluation metrics are commonly grouped into categories based on molecular structure, physicochemical properties, and optimization success, each reflecting distinct aspects of molecular quality and model performance. This appendix details these metrics, aligning with the categorization presented in Fig. 3.

C.1 Structure-Based Metrics

Structure-based metrics are employed to assess the chemical plausibility, resemblance to reference compounds, and structural diversity of molecules generated or modified by LLMs. These metrics help ensure that the outputs are chemically meaningful and cover a sufficient breadth of the relevant chemical space.

C.1.1 Validity & Similarity

Metrics for validity and similarity evaluate the extent to which generated molecules conform to chemical rules, match the structural features of reference molecules, and satisfy specified structural constraints. They are crucial for determining whether the outputs are chemically sound and potentially useful for applications like drug discovery.

• Validity Rate: The validity rate (Polykovskiy et al., 2020) indicates the fraction of generated molecules that are chemically valid (e.g., parsable by RDKit) and often also considers uniqueness among valid structures. A high validity rate suggests that the LLM has effectively learned the underlying rules of molecular representation (e.g., SMILES grammar) and the context provided by textual descriptions.

• EM (Exact Match): Exact Match (Rajpurkar et al., 2016) assesses whether a generated molecular sequence is identical to a target reference sequence. A higher EM rate signifies a stronger capability of the model to precisely replicate reference molecules when required.

• BLEU (Bilingual Evaluation Understudy): This score (Papineni et al., 2002), originally from machine translation, measures the n-gram overlap between generated and reference molecular sequences.
A higher BLEU score indicates greater similarity in token order and composition, reflecting better fidelity to the ground-truth molecule's sequence.

• Levenshtein Distance: The Levenshtein distance (Levenshtein, 1966) quantifies the dissimilarity between two strings by counting the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one string into the other. A lower Levenshtein distance signifies higher similarity between the generated and reference molecular sequences.

• FTS (Fingerprint Tanimoto Similarity): FTS is a widely used metric for quantifying structural similarity based on molecular fingerprints, such as MACCS (Durant et al., 2002), RDKit topological fingerprints (RDK) (Landrum et al., 2013), or Morgan fingerprints (circular fingerprints) (Morgan, 1965). A higher FTS score (typically ranging from 0 to 1) indicates a greater overlap in key substructures and chemical patterns between the generated and reference molecules.

• FCD (Fréchet ChemNet Distance): FCD (Preuer et al., 2018) evaluates the dissimilarity between the distribution of features (derived from a pretrained chemical neural network) of generated molecules and a reference set (often ground-truth molecules). Lower FCD values suggest that the generated molecules better capture the chemical diversity and property distribution of the reference set.

C.1.2 Diversity & Uniqueness

Metrics related to diversity, uniqueness, and novelty assess an LLM's ability to produce a varied set of outputs. High performance in these areas can help prevent mode collapse and enhance
the exploration of chemical space for discovering novel and relevant molecules.

• Uniqueness: Uniqueness quantifies the proportion of valid generated molecules that are distinct from each other within a given set. It reflects the model's capacity to generate diverse structures rather than redundant outputs. This is often evaluated at different scales, such as Unique@1k (within the first 1,000 valid samples) (Wang et al., 2023) and Unique@10k (within 10,000 valid samples) (Bagal et al., 2021), to assess short-range and broader diversity, respectively.

• Novelty Rate: The novelty rate (Brown et al., 2019) measures the fraction of valid and unique generated molecules that are not present in the training dataset. It serves as an indicator of the model's generalization ability and its potential to discover previously unseen chemical entities. A low novelty rate may suggest overfitting.

• IntDiv (Internal Diversity) and NCircles: These metrics further characterize the structural diversity within a set of generated molecules. IntDiv (Benhenda, 2017) calculates the average dissimilarity (1 minus Tanimoto similarity) between all pairs of molecules in the generated set, often using a power mean to adjust sensitivity:

\mathrm{IntDiv}_p(S) = 1 - \left( \frac{1}{|S|^2} \sum_{s_i, s_j \in S} T(s_i, s_j)^p \right)^{1/p}

where T(s_i, s_j) is the Tanimoto similarity between molecules s_i and s_j. NCircles (Jang et al., 2024) measures the size of the largest subset of generated molecules in which no two molecules have a Tanimoto similarity exceeding a predefined threshold. A higher NCircles value indicates greater structural dissimilarity within the set.

C.2 Property-Based Metrics

Property-based metrics evaluate whether a designed or modified molecule satisfies specific physicochemical or biological property constraints, often crucial for assessing its potential utility, such as drug-likeness or target activity.
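To make the diversity metrics of §C.1.2 concrete, here is a minimal pure-Python sketch. It models a fingerprint as a set of "on" bit indices and Tanimoto similarity as set overlap; the SMILES strings, bit sets, and function names are invented for illustration, and a real evaluation would derive fingerprints with a cheminformatics toolkit such as RDKit.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints given as sets of on-bit indices."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def uniqueness(smiles_list):
    """Fraction of generated molecules that are distinct."""
    return len(set(smiles_list)) / len(smiles_list)

def novelty(smiles_list, training_set):
    """Fraction of unique generated molecules not seen in the training set."""
    unique = set(smiles_list)
    return len(unique - training_set) / len(unique)

def internal_diversity(fps, p=1.0):
    """IntDiv_p(S) = 1 - ((1/|S|^2) * sum over all pairs of T(s_i, s_j)^p)^(1/p)."""
    n = len(fps)
    mean_sim = sum(tanimoto(a, b) ** p for a in fps for b in fps) / n ** 2
    return 1.0 - mean_sim ** (1.0 / p)

# Toy run: three generated SMILES (one duplicate) with hand-made fingerprints.
generated = ["CCO", "CCO", "CCN"]
training = {"CCO"}
fingerprints = [{1, 2, 3}, {1, 2, 3}, {2, 3, 4}]

print(uniqueness(generated))            # 2 distinct of 3 -> 0.666...
print(novelty(generated, training))     # only "CCN" is new -> 0.5
print(internal_diversity(fingerprints))
```

Because the double sum ranges over all ordered pairs including self-pairs, IntDiv of a set of identical fingerprints is 0, and values grow toward 1 as pairwise Tanimoto similarity drops.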
C.2.1 Single-Property Evaluation Metrics

In single-property evaluation, the primary goal is to assess the model's ability to generate or optimize molecules with respect to a specific molecular property, such as drug-likeness, solubility, or binding affinity.

• LogP (Octanol-Water Partition Coefficient): LogP (Hansch et al., 1968) is the logarithm of a compound's partition coefficient between octanol and water, serving as a key indicator of molecular hydrophobicity and thus often correlating with membrane permeability.

• QED (Quantitative Estimate of Drug-likeness): QED (Bickerton et al., 2012) provides a heuristic score (ranging from 0 to 1) that integrates multiple physicochemical properties (e.g., molecular weight, LogP, number of hydrogen bond donors/acceptors) to estimate a compound's overall drug-likeness.

• TPSA (Topological Polar Surface Area): TPSA (Ertl et al., 2000) quantifies the surface-area sum over all polar atoms in a molecule, reflecting its ability to form hydrogen bonds. It is often correlated with properties like intestinal absorption and blood-brain barrier penetration.

• SA Score (Synthetic Accessibility Score): The SA score (Ertl and Schuffenhauer, 2009) estimates the ease of synthesizing a compound, typically on a scale from 1 (easy to synthesize) to 10 (very difficult to synthesize), based on fragment contributions and complexity penalties.

C.2.2 Multi-Property Evaluation Metrics

Multi-property evaluation assesses a model's performance in satisfying multiple, often competing, objectives simultaneously. This is critical in real-world scenarios where a balance of several properties is required.

• Composite Score: A composite
score (Jin et al., 2020) aggregates multiple individual property scores into a single scalar objective, often through a weighted sum or other combination rules. This allows optimization frameworks (e.g., evolutionary algorithms, reinforcement learning) to be guided by a unified fitness metric. The weights can be adjusted to reflect task-specific priorities among properties like LogP, QED, and synthetic accessibility.

• Pareto Optimality: In multi-objective optimization, a solution is considered Pareto optimal (Pareto, 1919) if none of its objective function values can be improved without degrading at least one of the other objective values. The set of all Pareto optimal solutions forms the Pareto front, which is used to visualize and analyze trade-offs between conflicting objectives.

• Success Rate under Constraints: This metric (Jin et al., 2020) quantifies the proportion of generated or modified molecules that successfully meet or exceed predefined target thresholds across all specified properties. A common instantiation is the multi-property hit ratio, where a molecule is deemed successful only if all targeted property improvements satisfy their respective criteria.

D Method Summary

This section provides a consolidated overview of representative LLM-based methods for molecular discovery, as detailed in Table 2. The table organizes these approaches primarily by the two core task categories central to this survey: molecule generation and molecule optimization. Within each task, methods are further sub-categorized by their primary learning strategy (referred to as "Category" and "Technique" in the table), encompassing approaches without LLM tuning (such as zero-shot prompting and in-context learning) and those with LLM tuning (supervised fine-tuning and preference tuning). Table 2 details several key aspects for each listed Method:

• Venue: The publication venue or preprint archive where the method was reported.
• Input Type: Specifies the primary format of molecular data and instructions provided to the LLM (e.g., SMILES strings, textual instructions, few-shot examples, or multi-modal inputs like graphs).

• Base Model: Indicates the foundational LLM architecture (e.g., GPT-4, LLaMA variants, Mistral) upon which the method is built or applied.

• Dataset: Lists the key molecular corpora or benchmarks used for training the model (if applicable) or for its evaluation in the context of the reported work.

• Repository: Provides a link to the public code or resource repository, if available.

This structured presentation aims to offer a clear comparative landscape of the current methodologies in the field.

Table 2: Summary of LLM-based methods for molecule generation and optimization. Each row corresponds to a method, organized by Task (generation or optimization) and Technique. Input Type denotes the molecular data format provided to the model. Base Model denotes the large language model architecture used. Dataset denotes the molecular corpus or benchmark used for training or evaluation.

| Task | Category | Technique | Method | Venue | Input Type | Base Model | Dataset | Repository |
|---|---|---|---|---|---|---|---|---|
| Generation | w/o Tuning | ICL | LLM4GraphGen (Yao et al., 2024) | Arxiv | Instruction + Few-shot | GPT-4 | OGBG-MolHIV | Link |
| Generation | w/o Tuning | ICL | MolReGPT (Li et al., 2024c) | TKDE | Instruction + Few-shot | GPT-3.5-turbo / GPT-4 | ChEBI-20 | Link |
| Generation | w/o Tuning | ICL | FrontierX (Srinivas and Runkana, 2024) | Arxiv | Instruction | GPT-3.5 | ChEBI-20 | N/A |
| Generation | w/ Tuning | SFT | Mol-Instructions (Fang et al., 2023) | ICLR | Instruction | LLaMA-7B | Mol-Instructions | Link |
| Generation | w/ Tuning | SFT | LlaSMol (Yu et al., 2024a) | COLM | Instruction | Galactica-6.7B / LLaMA-2-7B / Mistral-7B | SMolInstruct | Link |
| Generation | w/ Tuning | SFT | ChemLLM (Zhang et al., 2024a) | Arxiv | Instruction | InternLM2-7B-Base | ChemData | N/A |
| Generation | w/ Tuning | SFT | ICMA (Li et al., 2024b) | TKDE | Instruction + Few-shot | Mistral-7B | PubChem & ChEBI-20 | N/A |
| Generation | w/ Tuning | SFT | MolReFlect (Li et al., 2024d) | Arxiv | Instruction + Few-shot | Mistral-7B | ChEBI-20 | Link |
| Generation | w/ Tuning | SFT | ChatMol (Fan et al., 2025) | Arxiv | Instruction | LLaMA-3-8B | ZINC | Link |
| Generation | w/ Tuning | SFT | PEIT-LLM (Lin et al., 2025) | Arxiv | Instruction | LLaMA-3.1-8B / Qwen2.5-7B | ChEBI-20 | Link |
| Generation | w/ Tuning | SFT | NatureLM (Xia et al., 2025) | Arxiv | SMILES + Instruction | NatureLM-8B | ChEMBL & MoleculeNet | Link |
| Generation | w/ Tuning | SFT | SynLlama (Sun et al., 2025) | Arxiv | Instruction | LLaMA-3.1-8B / LLaMA-3.2-1B | ChEMBL | Link |
| Generation | w/ Tuning | SFT | TOMG-Bench (Li et al., 2024a) | Arxiv | Instruction | LLaMA-3.1-8B | TOMG-Bench | N/A |
| Generation | w/ Tuning | SFT | UniMoT (Zhang et al., 2024b) | Arxiv | Instruction | LLaMA-2-7B | Mol-Instructions | Link |
| Generation | w/ Tuning | Preference Tuning | Div-SFT (Jang et al., 2024) | Arxiv | Instruction | LLaMA-7B | ChEBI-20 | N/A |
| Generation | w/ Tuning | Preference Tuning | Mol-MOE (Calanzone et al., 2025) | Arxiv | Instruction | LLaMA-3.2-1B | ChEMBL & ZINC & MOSES | Link |
| Generation | w/ Tuning | Preference Tuning | SmileyLLama (Cavanagh et al., 2024) | NeurIPS Workshop | Instruction | LLaMA-3.1-8B | ChEMBL | N/A |
| Generation | w/ Tuning | Preference Tuning | ALMol (Gkoumas, 2024) | ACL Workshop | Instruction | Meditron-7B | L+M-24 | N/A |
| Generation | w/ Tuning | Preference Tuning | Less for More (Gkoumas and Liakata, 2024) | Arxiv | Instruction | Meditron-7B | L+M-24 | N/A |
| Generation | w/ Tuning | Preference Tuning | Mol-LLM (Lee et al., 2025) | Arxiv | Instruction | Mistral-7B | ChEBI-20 | N/A |
| Optimization | w/o Tuning | Zero-Shot Prompting | LLM-MDE (Bhattacharya et al., 2024) | JCIM | SMILES + Instruction | Claude 3 Opus | ZINC | N/A |
| Optimization | w/o Tuning | Zero-Shot Prompting | MOLLEO (Wang et al., 2025) | ICLR | SMILES + Instruction | GPT-4 | ZINC | Link |
| Optimization | w/o Tuning | ICL | CIDD (Gao et al., 2025b) | Arxiv | SMILES + Interaction report | GPT-4o | CrossDocked2020 | N/A |
| Optimization | w/o Tuning | ICL | LLM-EO (Lu et al., 2024) | Arxiv | SMILES + Ligands Pool | Claude 3.5 Sonnet / OpenAI o1-preview | TMC dataset | Link |
| Optimization | w/o Tuning | ICL | MOLLM (Ran et al., 2025) | Arxiv | SMILES + Instruction | GPT-4o | ZINC | N/A |
| Optimization | w/o Tuning | ICL | ChatDrug (Liu et al., 2024c) | ICLR | SMILES + Instruction | Galactica / LLaMA-2 / ChatGPT | ZINC | Link |
| Optimization | w/o Tuning | ICL | Re2DF (Le and Chawla, 2024) | Arxiv | SMILES + Instruction | LLaMA-3.1-8B / LLaMA-3.1-70B | ZINC | Link |
| Optimization | w/o Tuning | ICL | BOPRO (Agarwal et al., 2025) | ICLR | SMILES + Instruction | Mistral-Large-Instruct-2407 | Dockstring | Link |
| Optimization | w/ Tuning | SFT | MultiMol (Yu et al., 2025) | Arxiv | SMILES + Instruction | Qwen2.5-7B / LLaMA-3.1-8B / Galactica-6.7B | PubChem | Link |
| Optimization | w/ Tuning | SFT | DrugAssist (Ye et al., 2025) | Brief Bioinform | SMILES + Instruction | LLaMA-2-7B-Chat | MolOpt-Instructions | Link |
| Optimization | w/ Tuning | SFT | GeLLM3O (Dey et al., 2025) | Arxiv | SMILES + Instruction | Mistral-7B-Instruct / LLaMA-3.1-8B-Instruct | MuMOInstruct | Link |
| Optimization | w/ Tuning | SFT | DrugLLM (Liu et al., 2024d) | Arxiv | Group-based Molecular Representation | LLaMA-2-7B | ZINC & ChEMBL | N/A |
| Optimization | w/ Tuning | SFT | TOMG-Bench (Li et al., 2024a) | Arxiv | Instruction | LLaMA-3.1-8B | TOMG-Bench | N/A |
| Optimization | w/ Tuning | SFT | LLM-Enhanced GA (Bedrosian et al., 2024) | NeurIPS Workshop | JSON Objects | Chemma / Chemlactica | PubChem | Link |
| Optimization | w/ Tuning | SFT | Molx-Enhanced LLM (Le et al., 2024) | Arxiv | SMILES + Graph + Instruction | LLaMA-2-7B | PubChem | N/A |
| Optimization | w/ Tuning | Preference Tuning | NatureLM (Xia et al., 2025) | Arxiv | SMILES + Instruction | NatureLM-8B | ChEMBL & MoleculeNet | N/A |
arXiv:2505.16100v1 [cs.AI] 22 May 2025

BIODSA-1K: Benchmarking Data Science Agents for Biomedical Research

Zifeng Wang∗, Benjamin Danek∗, Jimeng Sun
University of Illinois Urbana-Champaign
Correspondence: {zifengw2,jimeng}@illinois.edu
https://ryanwangzf.github.io/projects/biodsa

Abstract

Validating scientific hypotheses is a central challenge in biomedical research, and remains difficult for artificial intelligence (AI) agents due to the complexity of real-world data analysis and evidence interpretation. In this work, we present BIODSA-1K, a benchmark designed to evaluate AI agents on realistic, data-driven biomedical hypothesis validation tasks. BIODSA-1K consists of 1,029 hypothesis-centric tasks paired with 1,177 analysis plans, curated from over 300 published biomedical studies to reflect the structure and reasoning found in authentic research workflows. Each task includes a structured hypothesis derived from the original study's conclusions, expressed in the affirmative to reflect the language of scientific reporting, and one or more pieces of supporting evidence grounded in empirical data tables. While these hypotheses mirror published claims, they remain testable using standard statistical or machine learning methods. The benchmark enables evaluation along four axes: (1) hypothesis decision accuracy, (2) alignment between evidence and conclusion, (3) correctness of the reasoning process, and (4) executability of the AI-generated analysis code. Importantly, BIODSA-1K includes non-verifiable hypotheses: cases where the available data are insufficient to support or refute a claim, reflecting a common yet underexplored scenario in real-world science. We propose BIODSA-1K as a foundation for building and evaluating generalizable, trustworthy AI agents for biomedical discovery.
1 Introduction

Artificial intelligence (AI) agents promise to accelerate scientific discovery [1, 2], with the emergence of "AI scientists" [3] capable of collaborating with human researchers to perform research tasks such as literature mining and data analysis [4-7]. Large language models (LLMs) [8] can serve as the intelligence backbone for converting natural language to structured outputs such as code and mathematical expressions. As a core task in biomedical research, data science bridges the gap from proposed hypotheses to novel discoveries leveraging biomedical data. For example, a researcher might hypothesize that "genes involved in histone modification are frequently mutated in non-Hodgkin lymphoma." Testing such hypotheses often requires close collaboration between biomedical experts and data scientists to design analyses, write code, and interpret results, and thus far has been a mainly manual effort in practice [9].

Recent efforts have demonstrated LLM-based agents capable of designing experiments, generating code, and summarizing results [10-13]. However, existing systems often focus on narrow tasks within biomedical research or are evaluated on limited scenarios. In this work, we aim to systematically investigate the following research question: To what extent can state-of-the-art LLMs and AI agents perform data science tasks in biomedical research? Answering this question requires a curated benchmark dataset that captures the breadth and complexity of data science tasks in biomedical research. The following challenges have not been fully explored: (1) although previous studies leverage publications to create data science tasks [10, 14, 15], those test cases are drawn from a small number of papers, which may not reflect the full scope of biomedical research; (2) limited task diversity as a consequence of restricted case selection; (3) the involved tasks are performed on relatively simple datasets, such as one or two tables with tens of columns; (4) overlooking the foundational data analysis steps and observed evidence that support or refute a hypothesis; thus, correct hypothesis prediction alone does not guarantee the agent performed the correct analysis; and (5) the inclusion of non-verifiable hypotheses, where the required data is absent or insufficient to support a conclusive answer, yet such cases are rarely discussed.

In this paper, we introduce BIODSA-1K (Biomedical Data Science Agent Benchmark), a novel framework for evaluating AI agents on biomedical data science research tasks (Figure 1). BIODSA-1K specifies a complete cycle of hypothesis formulation, data analysis, and validation by curating detailed experimental components extracted from published biomedical studies. Specifically, each instance includes a hypothesis statement, corresponding analysis plans, evidence summaries, and quantitative outcome measures. As illustrated in Figure 1, BIODSA-1K includes 1,029 scientific hypotheses and the corresponding 1,177 analysis tasks drawn from 329 publications spanning eight publication types. The analysis tasks are also comprehensive with respect to the analyses commonly performed in biomedical research.

∗Equal contribution. Preprint.

Figure 1: Benchmark statistics. (Left) BIODSA-1K includes diverse types of biomedical research and data analysis tasks created from 329 publications; the x-axis indicates the publication types, and the analysis types include comparison, frequency, survival, correlation, structural, functional, clustering, and pathway analyses. (Right) Bubble plot illustrating the diverse range of biomedical data tables in BIODSA-1K, showing each data table's number of rows (x-axis, log-scale) versus number of columns (y-axis, log-scale); bubble labels include Clinical Data (n=638), Mutation Data (n=331), Copy Number Alteration (n=244), Gene Expression (n=202), Gene Panel (n=175), Structural Variation (n=140), Patient Timeline (n=89), Protein Expression (n=26), Mutational Significance Analysis (n=23), and Other (n=140).
A comparison to other representative data science benchmarks is illustrated in Table 1.

2 BIODSA-1K: Benchmark data and tasks

BIODSA-1K is constructed from scientific publications and their associated biomedical data. At the core of the benchmark are structured components that mirror the research process: a curated collection of publications and corresponding data tables, extracted hypotheses paired with supporting evidence, and data analysis tasks derived from these elements. This framework supports the development and assessment of AI agents on a wide spectrum of capabilities, from code generation and reasoning to hypothesis testing, grounded in scientific discovery workflows. In the following subsections, we detail the construction of BIODSA-1K, including how publications and data were collected, how hypotheses and supporting evidence were extracted, and how downstream tasks were defined to challenge and benchmark agent performance.

2.1 Publication and dataset collection

To construct a benchmark that reflects practical biomedical data science, it is essential to include not only scientific publications but also the corresponding biomedical datasets on which those studies are based. We therefore leverage cBioPortal [16], a comprehensive cancer genomics and clinical data portal that maintains structured datasets with direct linkage to peer-reviewed publications. It is under a publicly available Open Database License [17]. This ensures that our benchmark captures both the analytical context and the quantitative evidence underlying published findings. In particular, we assume that each publication highlights its primary results within the abstract, often supported by descriptive statistics, statistical testing, and predictive modeling results derived from the associated biomedical tables. Thus, the raw data underlying BIODSA-1K consists of two components: the publication abstracts and their corresponding structured biomedical data tables. We utilize the cBioPortal API² to retrieve all available datasets in bulk. Each dataset includes study metadata that specifies the associated publication(s), including PubMed identifiers (PMIDs). Using these PMIDs, we collect the publication abstracts through the PubMed API³. In most cases, there is a one-to-one mapping between a dataset and a publication.

Figure 2: Overview of BIODSA-1K. a, Benchmark curation: Scientific publications linked to biomedical datasets are parsed to extract hypotheses and their corresponding supporting evidence, forming the core reasoning challenges. b, Experiments: AI agents are tasked with validating hypotheses by planning analysis steps, generating executable code, observing results, and making decisions based on structured biomedical datasets. c, Evaluation metrics: Agent performance is evaluated based on hypothesis decision accuracy (Type I and Type II errors), evidence alignment with publication findings, non-verifiable hypothesis detection (precision and recall), and code executability rate.
However, we exclude ambiguous cases involving multiple papers and datasets when their analytical scope extends beyond the specific dataset. This filtering step avoids introducing non-verifiable hypotheses into the benchmark, thereby maintaining a clear linkage between reported findings and the underlying data. According to established biomedical literature [ 18], we categorize the publications in our benchmark by study types. Definitions of these categories are provided in Appendix D. As illustrated in Figure 1, BIODSA -1K spans a diverse array of study types, including genomics, integrative, therapeutics, biomarkers, translational, and molecular studies, along with various analysis methodologies. This distribution highlights the comprehensiveness of BIODSA -1K, capturing both high-level exploratory research and focused hypothesis-driven studies. 2.2 Dataset caption We caption the data tables for benchmarking in data science tasks while preserving privacy. The details of how the captioning works can be found in Appendix B. Specifically, we do not send any patient-level records to LLMs and instead construct a schema-based representation. For each column in a data table, we compute type-specific descriptive statistics, such as the number of unique values, missing value ratio, most frequent entries, and data ranges. In this way, for whatever LLM API provider we use, only the captions of the dataset will be shared. For future research and experiments 2https://github.com/cbioportal/cbioportal/ 3https://www.ncbi.nlm.nih.gov/home/develop/api/ 3 Table 1: Comparison of BIODSA -1K with representative benchmarks in general and biomedical domains. “Avg. # Tables” denotes the average number of tables per task; “Avg. # Columns” refers to the average columns per table. “–” indicates missing or non-tabular data. “# Tasks” shows the number of unique data science tasks. “*” indicates the biology-related portions of
https://arxiv.org/abs/2505.16100v1
the benchmarks.

Benchmark | Domain | Task Levels | Task Sources | Avg. # Tables | Avg. # Columns | # Tasks
DS-1000 [19] | General | Analysis | Stackoverflow | 1 | – | 1000
MLAgentBench [20] | General | Analysis | Publications | 1 | 47 | 13
DSBench [21] | General | Analysis | Kaggle | – | – | 466
BLADE [22] | General | Hypothesis and analysis | 31 Publications | 1 | 13 | 12
ScienceAgentBench [15] | General | Hypothesis and analysis | 44 Publications | – | – | 102
DiscoveryBench-Bio* [14] | Biology | Hypothesis and analysis | 2 Publications | 2 | 26 | 16
SciCode-Bio* [23] | Biology | Hypothesis and analysis | 8 Publications | – | – | 8
BioCoder [12] | Biomedical | Analysis | Github | – | – | 460
ChatGPT-ADA [24] | Biomedical | Hypothesis and analysis | 4 Publications | 1 | 548 | 4
AI Co-scientist [3] | Biomedical | Hypothesis and analysis | – | – | – | 3
BioDiscoveryAgent [11] | Biomedical | Analysis | Publications | 1 | – | 6
BioDSBench [10] | Biomedical | Analysis | 39 Publications | – | – | 293
BIODSA-1K (ours) | Biomedical | Hypothesis and analysis | 328 Publications | 6 | 879 | 1029

with this benchmark, researchers can download the raw data from cBioPortal and execute the LLM-generated code on them locally.

Figure 1 shows the scale and diversity of the biomedical tables included in BIODSA-1K. Each point represents a data type, positioned by its typical number of rows and columns, and sized by its prevalence in the dataset. The benchmark encompasses a wide spectrum of commonly used biomedical data types, including clinical data, mutation data, gene expression, copy number alteration, protein expression, structural variation, and patient timelines. These data sources are foundational to modern biomedical research and collectively capture the heterogeneity of real-world biomedical analysis. Moreover, the wide variance in both row and column dimensions, ranging from compact gene panels to large-scale expression matrices, demonstrates the high dimensionality and analytical complexity present in BIODSA-1K.
Compared to existing benchmarks (as shown in Table 1), which often involve simpler, smaller, or less diverse datasets, our benchmark presents a significantly more challenging and realistic setting for evaluating AI agents on biomedical data science tasks.

2.3 Hypothesis and evidence

All data science challenges in BIODSA-1K are extracted from published biomedical studies using a GPT-4o model; the details of the extraction process can be found in Appendix C. Each challenge is centered around a hypothesis and its corresponding supporting evidence, reflecting how scientific claims are typically articulated in real-world literature. Rather than stating hypotheses solely in null form (e.g., "no difference between groups"), authors of original studies often present claims affirmatively (e.g., "Treatment A improves survival"), while the underlying analyses are grounded in statistical tests against a null hypothesis. To preserve fidelity to real-world practice, our benchmark follows this formulation, presenting hypotheses as definitive statements derived from the study's conclusions. Importantly, our design does not assume these statements are inherently true; instead, we evaluate whether AI agents can reconstruct the reasoning and analysis pipeline leading to such claims, including identifying when the data are insufficient to support them. Each entry includes (1) a clearly stated hypothesis that is supported or rejected in the original publication, and (2) a plausible counter-hypothesis designed to test the agent's ability to reason discriminatively. An example is provided in Supplementary Figure 1. To support hypothesis validation, we extract
one or more evidence entries per hypothesis, each corresponding to a distinct data analysis performed in the study. Each evidence entry is annotated with the following fields:

• Analysis plan: a concise description of the statistical or computational procedure used (e.g., frequency analysis, correlation test, clustering).
• Evidence: a textual summary of the result as reported in the publication.
• Variables: input variables used in the analysis and the result variable serving as the output to support or refute the hypothesis.

To mitigate bias toward Type I error, our benchmark includes a significant fraction of non-verifiable cases where the available data are insufficient to reach a definitive conclusion. This design encourages agents not merely to "prove" hypotheses, but to assess them critically in the context of the available evidence, akin to a real-world research setting.

2.4 Tasks and evaluation

The primary task in BIODSA-1K is hypothesis validation using structured biomedical data. Given a hypothesis extracted from a publication and the corresponding dataset, an AI agent is required to generate executable code to analyze the data and produce empirical observations. Based on these observations, the agent must decide whether the hypothesis is True, False, or Non-verifiable. To distinguish between the latter two, we define a hypothesis as False if the agent can identify relevant variables in the dataset and derive contradicting evidence through analysis. Conversely, a hypothesis is considered Non-verifiable if no relevant features or data tables exist in the dataset to support or reject the claim. For example, the hypothesis "Prostate cancer brain metastases (PCBM) have a higher mutational burden compared to non-brain metastases" is labeled as Non-verifiable if the dataset lacks mutational burden variables or comparative group labels. We evaluate agent performance across multiple dimensions.
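One benchmark entry of the shape described above can be represented as a small data structure. This is a sketch: the field names are illustrative assumptions based on the description, not the released schema.

```python
# Hypothetical sketch of one BIODSA-1K entry (hypothesis, counter-hypothesis,
# and annotated evidence entries); field names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceEntry:
    analysis_plan: str       # e.g., "correlation test" or "survival analysis"
    evidence: str            # textual summary of the reported result
    variables: List[str]     # input variables plus the result variable

@dataclass
class BenchmarkEntry:
    hypothesis: str
    counter_hypothesis: str
    label: str               # "True", "False", or "Non-verifiable"
    evidence: List[EvidenceEntry] = field(default_factory=list)

entry = BenchmarkEntry(
    hypothesis="Treatment A improves survival",
    counter_hypothesis="Treatment A does not improve survival",
    label="True",
    evidence=[EvidenceEntry(
        analysis_plan="survival analysis",
        evidence="log-rank test favored Treatment A",
        variables=["treatment_arm", "os_months", "os_status"],
    )],
)
```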
On the hypothesis decision level, we compute both Type I and Type II error rates. Let H ∈ {True, False} denote the ground truth label of a hypothesis, and \hat{H} the label predicted by the agent. The Type I error (false positive rate) is defined as:

\text{Type I Error} = \frac{\sum_i \mathbb{1}[H_i = \text{False} \wedge \hat{H}_i = \text{True}]}{\sum_i \mathbb{1}[H_i = \text{False}]}, \quad (1)

and the Type II error (false negative rate) is given by:

\text{Type II Error} = \frac{\sum_i \mathbb{1}[H_i = \text{True} \wedge \hat{H}_i = \text{False}]}{\sum_i \mathbb{1}[H_i = \text{True}]}, \quad (2)

where \mathbb{1}[\cdot] is the indicator function and the sums run over all hypotheses i.

In addition to correctness at the decision level, we assess how well the generated observations align with the supporting evidence reported in the original publication. Let E denote the set of ground-truth supporting evidence items and O the set of observations generated by the agent. We use a large language model (LLM)-as-a-judge [25] approach to measure the evidence alignment score:

\text{Alignment Score} = \frac{|O \cap E|}{|E|}. \quad (3)

This metric quantifies the proportion of reported evidence that is successfully captured by the agent's analysis pipeline.

Furthermore, we evaluate the technical quality of the generated code. For each hypothesis, let C denote the total number of code cells generated and C_{\text{exec}} the number of those that are executable without error. The code executability rate is defined as:

\text{Executability Rate} = \frac{C_{\text{exec}}}{C}.

For ReAct-style agents that explore through multi-step reasoning, this metric is computed over all code snippets generated during the interaction trace. Lastly, we systematically assess agents on their ability to reject non-verifiable hypotheses. These
hypotheses are curated by taking claims from other publications that reference unrelated datasets. An ideal agent should classify such hypotheses as Non-verifiable due to the absence of relevant data. Let H = Non-verifiable be the ground truth and \hat{H} the predicted label. We report the non-verifiable detection accuracy as:

\text{Non-verifiable Accuracy} = \frac{\sum_i \mathbb{1}[H_i = \text{Non-verifiable} \wedge \hat{H}_i = \text{Non-verifiable}]}{\sum_i \mathbb{1}[H_i = \text{Non-verifiable}]}. \quad (4)

Table 2: Performance of hypothesis validation across publication types. Each cell reports the Type I error rate (E_I, false positive rate) and the Type II error rate (E_II, false negative rate) as "E_I / E_II", with lower values indicating better performance. "R*" is short for the "Reasoning" version of CodeGen and ReAct, respectively.

Methods | Biomarkers (n=244) | Genomics (n=662) | Integrative (n=392) | Molecular (n=108) | Pan-Cancer (n=78) | Therapeutics (n=344) | Translational (n=224)
CodeGen (gpt-4o) | 0.090 / 0.164 | 0.077 / 0.168 | 0.095 / 0.153 | 0.157 / 0.157 | 0.077 / 0.167 | 0.087 / 0.137 | 0.094 / 0.147
CodeGen (o3-mini) | 0.107 / 0.145 | 0.128 / 0.187 | 0.122 / 0.191 | 0.098 / 0.118 | 0.103 / 0.179 | 0.157 / 0.181 | 0.143 / 0.138
ReAct (gpt-4o) | 0.102 / 0.148 | 0.069 / 0.159 | 0.066 / 0.148 | 0.120 / 0.167 | 0.115 / 0.128 | 0.090 / 0.155 | 0.089 / 0.161
CodeGen-R* | 0.082 / 0.156 | 0.054 / 0.139 | 0.082 / 0.125 | 0.111 / 0.139 | 0.141 / 0.154 | 0.083 / 0.148 | 0.060 / 0.110
ReAct-R* | 0.090 / 0.094 | 0.060 / 0.125 | 0.074 / 0.107 | 0.074 / 0.093 | 0.051 / 0.167 | 0.087 / 0.122 | 0.098 / 0.112

This provides insight into how well agents can discern dataset limitations and avoid over-assertive conclusions.
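The metrics in Eqs. (1)–(4) can be sketched as follows. The label strings and the `judged_match` stand-in are illustrative assumptions; in particular, the paper's alignment score relies on an LLM-as-a-judge, whereas this sketch accepts any membership-deciding function.

```python
# Sketch of the evaluation metrics in Eqs. (1)-(4); labels are assumptions,
# and the judge for Eq. (3) is a stand-in for the paper's LLM-as-a-judge.
def rate(truths, preds, truth_label, pred_label):
    """Fraction of items with ground truth `truth_label` predicted as `pred_label`."""
    idx = [i for i, h in enumerate(truths) if h == truth_label]
    return sum(preds[i] == pred_label for i in idx) / len(idx)

def type_errors(truths, preds):
    type1 = rate(truths, preds, "False", "True")    # Eq. (1): false positives
    type2 = rate(truths, preds, "True", "False")    # Eq. (2): false negatives
    return type1, type2

def alignment_score(observations, evidence, judged_match):
    # Eq. (3): |O ∩ E| / |E|, membership decided by a judge function.
    matched = [e for e in evidence if any(judged_match(o, e) for o in observations)]
    return len(matched) / len(evidence)

truths = ["True", "True", "False", "False", "Non-verifiable"]
preds  = ["True", "False", "True", "False", "Non-verifiable"]
t1, t2 = type_errors(truths[:4], preds[:4])
nv_acc = rate(truths, preds, "Non-verifiable", "Non-verifiable")  # Eq. (4)
```

The executability rate is computed analogously as the fraction of generated code cells that run without error.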
3 Experiment

3.1 Implemented methods

Figure 3: Comparison of Type I and Type II error rates across publication types and agent variants. Each point denotes an agent's performance on a specific publication type.

We implement four agent-based methods to evaluate performance on BIODSA-1K. CodeGen directly generates a single executable Python code block based on the input hypothesis and dataset schema, and returns a final decision (True, False, or Non-verifiable) based on the produced observations, without explicit intermediate reasoning [26]. We evaluate two variants of CodeGen: one powered by GPT-4o and the other by o3-mini. ReAct follows the ReAct framework [27], in which the agent alternates between reasoning steps ("thoughts") and code execution ("actions"), allowing iterative refinement of analysis and conclusions. This version is implemented using GPT-4o. To enable more structured reasoning, we also introduce two reasoning-augmented agents aligned with recent developments in data analysis agents [6, 13], which decouple experiment planning from execution. CodeGen-Reasoning first prompts o3-mini to generate a structured analysis plan detailing key reasoning and statistical steps, and then passes this plan to GPT-4o for code generation and execution, allowing a division of labor between planning and implementation. ReAct-Reasoning extends ReAct with structured planning and uses o3-mini as the backend agent.
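The thought/action/observation alternation of a ReAct-style agent can be sketched as a simple control loop. Everything here is a hypothetical stand-in: the `llm` and `run_code` callables, the step dictionary shape, and the conservative fallback are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a ReAct-style validation loop; the llm and run_code
# callables are stand-ins, not the paper's agents.
def react_validate(hypothesis, schema, llm, run_code, max_steps=5):
    trace = []
    for _ in range(max_steps):
        step = llm(hypothesis, schema, trace)     # propose a thought + action
        if step["type"] == "final":               # decision reached
            return step["decision"], trace
        observation = run_code(step["code"])      # execute the "action"
        trace.append((step["thought"], step["code"], observation))
    return "Non-verifiable", trace                # conservative default

# Stub LLM: inspect one variable first, then decide on the second call.
def stub_llm(hypothesis, schema, trace):
    if not trace:
        return {"type": "action", "thought": "check TMB column",
                "code": "mean_tmb"}
    return {"type": "final", "decision": "True"}

decision, trace = react_validate(
    "PCBM have higher TMB", {"TMB": "numeric"},
    llm=stub_llm, run_code=lambda code: 12.3)
```

With a real LLM backend, each observation appended to `trace` conditions the next thought, which is what allows the iterative refinement described above.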
It supports iterative reasoning and dynamic plan refinement based on intermediate observations across multiple steps.

3.2 Hypothesis validation

Table 2 shows that AI agents tend to be conservative in hypothesis validation across all tested
publication types. In nearly every setting, the Type II error rate (E_II), which measures the frequency of missed relevant findings, is consistently higher than the Type I error rate (E_I), which reflects the incidence of false positives. For example, in the Biomarkers category, CodeGen (gpt-4o) exhibits a Type II error of 0.164 compared to a Type I error of 0.090.

Figure 5: Evidence alignment scores for true vs. false hypotheses across methods.

Figure 6: Evidence alignment scores by types of analyses (mean alignment, highest to lowest: Frequency 0.27, Structural 0.26, Comparison 0.23, Pathway 0.21, Correlation 0.20, Survival 0.19, Clustering 0.19, Functional 0.16).

Figure 3 and Table 2 also demonstrate that reasoning augmentation improves both sensitivity and specificity. Reasoning-enhanced agents (denoted with an asterisk, e.g., CodeGen-R* and ReAct-R*) consistently outperform their base counterparts in terms of lower error rates. For instance, ReAct-R* reduces the Type I and II errors in the Genomics category to 0.060 and 0.125, respectively, compared to 0.069 and 0.159 for the base ReAct model. Similarly, CodeGen-R* achieves a Type I error of 0.082 and a Type II error of 0.156 on Biomarkers, outperforming the original CodeGen (gpt-4o) with errors of 0.090 and 0.164. These results indicate that structured reasoning enhances the agent's ability to identify relevant evidence while reducing false positives.
Figure 4: Code executability analysis and the breakdown of error types in non-executable code across the selected AI agents.

Figure 3 shows that ReAct-based methods consistently outperform CodeGen models, particularly when reasoning is applied. Even without reasoning, ReAct (gpt-4o) achieves lower Type II errors in challenging categories such as Integrative (0.148 vs. 0.153) and Pan-Cancer (0.128 vs. 0.167) compared to CodeGen (gpt-4o). When reasoning is incorporated, ReAct-R* outperforms CodeGen-R* in most domains; for example, in Integrative, ReAct-R* reports a Type II error of 0.107 compared to 0.125 for CodeGen-R*, while maintaining a lower Type I error (0.074 vs. 0.082).

Finally, Figure 3 suggests that reasoning brings the greatest improvements in domains with higher baseline error rates. This trend is evident in the Genomics and Integrative categories, where non-reasoning methods exhibit relatively high Type II errors: up to 0.191 for CodeGen (o3-mini) in Integrative. In contrast, ReAct-R* reduces the same error to 0.107. This implies that reasoning is particularly valuable in more complex or information-dense publication types, helping agents better navigate and resolve ambiguous or detailed hypotheses.

3.3 Analysis quality

Making a correct hypothesis decision does not necessarily imply that the AI agent followed a valid or faithful analytical process, a limitation largely overlooked in prior evaluations. To address this, our benchmark explicitly assesses the evidence alignment score, which measures how well the agent-generated analysis captures the ground-truth evidence reported in the original studies. We also examine the executability of the analysis code produced by the agents as a proxy for code quality and practical usability.
As shown in Figure 5, the evidence alignment scores remain modest across all methods, typically ranging from 0.20 to 0.25, regardless of whether the hypothesis being validated is ultimately True or False. Among the evaluated methods, ReAct-based agents exhibit marginally higher alignment scores compared to code generation baselines. However, the consistently low scores across the board suggest that AI agents often diverge from the evidence used in human-authored analyses, possibly reflecting a lack of domain knowledge or contextual understanding required for appropriate methodological choices.

Figure 7: Distribution of hypothesis validation results by code executability for the CodeGen methods.

Figure 6 further breaks down alignment scores by analysis type. We observe that simpler analytical tasks, such as frequency counts, are more reliably handled by AI agents. In contrast, more complex tasks, including clustering and survival analysis, pose significant challenges, with notably lower alignment scores across all models. These findings highlight the need for improved reasoning strategies and domain-specific modeling capabilities in AI systems aimed at biomedical data analysis.

Figure 4 presents the executability of the code generated by different AI agents and categorizes the types of errors found in non-executable outputs. Overall, ReAct-based agents exhibit the highest code executability rates, with ReAct Reasoning achieving 86.6% and ReAct at 84.9%, outperforming both CodeGen (76.9%) and CodeGen Reasoning (58.2%). Among the error types, variable or object misuse is the most common failure mode, especially prominent in CodeGen Reasoning (27.7%) and ReAct (32.1%).
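One way to produce such a breakdown is to run each generated code cell and map the raised exception onto the error categories of Figure 4. This is a sketch; the exception-to-category mapping is an illustrative assumption, not the paper's exact rule.

```python
# Sketch: classify execution outcomes of generated code cells into the
# error categories shown in Figure 4 (mapping is an assumption).
def classify_failure(code):
    try:
        exec(code, {})                      # run one generated code cell
        return "Executable"
    except (NameError, AttributeError, KeyError, TypeError):
        return "Variable/Object Misuse"
    except (ZeroDivisionError, ValueError, ArithmeticError):
        return "Math/Logic Error"
    except ImportError:                     # includes ModuleNotFoundError
        return "Import/Module Error"
    except Exception:
        return "Other Errors"

cells = ["x = 1 + 1", "print(undefined_var)", "1 / 0", "import not_a_module"]
counts = {}
for cell in cells:
    label = classify_failure(cell)
    counts[label] = counts.get(label, 0) + 1
```

The executability rate then falls out of the same tally as `counts.get("Executable", 0) / len(cells)`.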
Logic and mathematical errors, as well as import or module-related issues, occur less frequently but still contribute to code failures across methods.

Figure 4 illustrates the proportion of executable versus non-executable code, stratified by error type, across different CodeGen methods. Since CodeGen and CodeGen-Reasoning only attempt single-shot code generation, any failure in code execution should theoretically preclude meaningful hypothesis validation. To explore this, Figure 7 presents the distribution of hypothesis decisions (True, False, Not Verifiable) based on whether the generated code was executable. The results reveal a marked difference in decision patterns between the executable and non-executable cases. In the non-executable setting, all three CodeGen variants default to deciding the hypothesis as Not Verifiable in approximately 87% of instances, but still decide around 8% as False and 5% as True. These roughly 13% of cases indicate that the AI agents sometimes hallucinate findings. By contrast, in cases where the generated code is executable, the proportion of Not Verifiable decisions drops significantly, while the rates of True and False decisions increase substantially.

3.4 Non-verifiable hypothesis

Figure 8: Hypothesis decision distribution for the non-verifiable hypotheses by different agents (Not Verifiable rate: CodeGen gpt-4o 63%, CodeGen o3-mini 57%, CodeGen Reasoning 76%, ReAct 64%, ReAct Reasoning 92%).

We further investigated whether AI agents can act cautiously when faced with non-verifiable hypotheses. As shown in Figure 8, we constructed a set of 100 hypotheses that are strictly non-verifiable, meaning the associated dataset lacks the information needed to either accept or
reject them. In this setting, the correct model behavior is to respond with "Not Verifiable"; any decision of "True" or "False" reflects overconfidence or hallucination. The ability to correctly identify these cases, quantified as the true positive rate (TPR) for the "Not Verifiable" class, varies substantially across agents. One-round code generation methods, such as CodeGen gpt-4o and CodeGen o3-mini, achieve only 63% and 57% TPR, respectively, making incorrect verifiable claims in 37% and 43% of the cases. In contrast, reasoning-augmented agents like CodeGen Reasoning, ReAct, and particularly ReAct Reasoning behave more conservatively, with ReAct Reasoning achieving a TPR of 92%.

4 Related work

Benchmarks. Recent efforts have introduced benchmark datasets to evaluate AI agents in scientific discovery and data science tasks. General-purpose scientific discovery benchmarks such as DiscoveryBench [14], ScienceAgentBench [15], and SpiderV2 [28] focus on a broad range of tasks but often overlook specialized biomedical reasoning challenges. In parallel, several benchmarks specifically target the core task of code generation in scientific domains, including SciCode [23], BLADE [22], and DSBench [21]. Within biomedicine, BioCoder [12] and BioDSBench [10] address coding tasks related to biomedical data analysis. However, these benchmarks primarily emphasize code generation, while our work focuses on the broader hypothesis validation process derived directly from published scientific studies. Moreover, BIODSA-1K offers a significantly larger and more diverse evaluation scale, encompassing over three hundred publications, substantially exceeding the coverage of previous benchmarks.

Agents. A growing body of work explores the use of AI agents for data science and scientific research. Several systems target general data science tasks, including machine learning modeling and analysis on structured datasets such as Kaggle competitions [29–31, 20]. Gao et al.
[7] emphasize the potential of developing agents specifically tailored for biomedical research. In the biomedical domain, Co-Scientist [3] and BioDiscoveryAgent [11] focus on a niche area: automating the design and execution of genetic perturbation experiments. Other agent frameworks have applied LLMs to bioinformatics programming, biomedical question answering [32], and the development of predictive models for biological outcomes [24]. Closest to our work is the line of research on hypothesis validation agents [13], which investigates how agents can reason over structured data to accept or refute scientific claims. Our work builds on these foundations but uniquely grounds the validation tasks in hypotheses and evidence derived from real-world publications, enabling broader and more rigorous evaluation of biomedical data science agents.

5 Discussion and conclusion

While our benchmark draws from over 300 biomedical studies, it does not fully capture the diversity of the biomedical research landscape. The dataset naturally overrepresents well-established topics with high publication volume, potentially underrepresenting emerging areas or those with limited available data. This skew may influence model performance and generalizability, highlighting the need to continuously expand and rebalance the benchmark to reflect a wider spectrum of scientific inquiry. More broadly, as AI agents become increasingly capable of performing end-to-end data science tasks, they also introduce the risk of generating plausible but incorrect scientific claims. Without proper oversight, such systems could accelerate
the propagation of false findings under the guise of data-driven analysis. Ensuring transparency, interpretability, and human-in-the-loop validation will be critical to responsibly deploying these tools in high-stakes scientific domains.

In this work, we present BIODSA-1K, a benchmark designed to evaluate AI agents on realistic biomedical data science tasks. By extracting over a thousand hypotheses and corresponding analysis plans from hundreds of published studies, BIODSA-1K captures the diversity and complexity inherent in real-world biomedical research. Unlike prior benchmarks, it encompasses not only hypothesis validation tasks with sufficient evidence, but also non-verifiable cases where the available data are inconclusive: a frequent yet underrepresented scenario in scientific reasoning. The benchmark enables comprehensive evaluation across multiple dimensions, including decision accuracy, evidence grounding, reasoning validity, and analysis code executability. We envision BIODSA-1K as a foundation for developing more robust, transparent, and trustworthy AI agents for scientific discovery.

References

[1] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. The rise and potential of large language model based agents: A survey. Science China Information Sciences, 68(2):121101, 2025.

[2] Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V Chawla, Olaf Wiest, and Xiangliang Zhang. Large language model based multi-agents: a survey of progress and challenges. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, pages 8048–8057, 2024.

[3] Juraj Gottweis, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky, Felix Weissenberger, Keran Rong, Ryutaro Tanno, et al. Towards an ai co-scientist. arXiv preprint arXiv:2502.18864, 2025.

[4] Daniil A Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes.
Autonomous chemical research with large language models. Nature, 624(7992):570–578, 2023.

[5] Zifeng Wang, Lang Cao, Qiao Jin, Joey Chan, Nicholas Wan, Behdad Afzali, Hyun-Jin Cho, Chang-In Choi, Mehdi Emamverdi, Manjot K Gill, et al. A foundation model for human-ai collaboration in medical literature mining. arXiv preprint arXiv:2501.16255, 2025.

[6] Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Sanchaita Hazra, Ashish Sabharwal, and Peter Clark. Position: data-driven discovery with large generative models. In Forty-first International Conference on Machine Learning, 2024.

[7] Shanghua Gao, Ada Fang, Yepeng Huang, Valentina Giunchiglia, Ayush Noori, Jonathan Richard Schwarz, Yasha Ektefaie, Jovana Kondic, and Marinka Zitnik. Empowering biomedical discovery with ai agents. Cell, 187(22):6125–6151, 2024.

[8] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

[9] Melanie A Meyer. Healthcare data scientist qualifications, skills, and job focus: a content analysis of job postings. Journal of the American Medical Informatics Association, 26(5):383–391, 2019.

[10] Zifeng Wang, Benjamin Danek, Ziwei Yang, Zheng Chen, and Jimeng Sun. Can large language models replace data scientists in biomedical research? arXiv preprint arXiv:2410.21591, 2024.

[11] Yusuf H Roohani, Jian Vora, Qian Huang, Percy Liang, and Jure Leskovec. BioDiscoveryAgent: An ai agent for designing genetic perturbation experiments. In ICLR 2024 Workshop on Machine Learning for Genomics Explorations, 2024.

[12] Xiangru Tang, Bill Qian, Rick Gao, Jiakang Chen,
Xinyun Chen, and Mark B Gerstein. BioCoder: a benchmark for bioinformatics code generation with large language models. Bioinformatics, 40(Supplement_1):i266–i276, 2024.

[13] Kexin Huang, Ying Jin, Ryan Li, Michael Y Li, Emmanuel Candès, and Jure Leskovec. Automated hypothesis validation with agentic sequential falsifications. arXiv preprint arXiv:2502.09858, 2025.

[14] Bodhisattwa Prasad Majumder, Harshit Surana, Dhruv Agarwal, Bhavana Dalvi Mishra, Abhijeetsingh Meena, Aryan Prakhar, Tirth Vora, Tushar Khot, Ashish Sabharwal, and Peter Clark. Discoverybench: Towards data-driven discovery with large language models. In The Thirteenth International Conference on Learning Representations, 2024.

[15] Ziru Chen, Shijie Chen, Yuting Ning, Qianheng Zhang, Boshi Wang, Botao Yu, Yifei Li, Zeyi Liao, Chen Wei, Zitong Lu, et al. Scienceagentbench: Toward rigorous assessment of language agents for data-driven scientific discovery. arXiv preprint arXiv:2410.05080, 2024.

[16] Jianjiong Gao, Bülent Arman Aksoy, Ugur Dogrusoz, Gideon Dresdner, Benjamin Gross, S. Onur Sumer, Yichao Sun, Anders Jacobsen, Rileen Sinha, Erik Larsson, Ethan Cerami, Chris Sander, and Nikolaus Schultz. Integrative analysis of complex cancer genomics and clinical profiles using the cbioportal. Science Signaling, 6(269):pl1–pl1, 2013. doi: 10.1126/scisignal.2004088. URL https://www.science.org/doi/abs/10.1126/scisignal.2004088.

[17] Open data commons open database license (odbl) v1.0. https://opendatacommons.org/licenses/odbl/1-0/. Accessed: 2025-05-01.

[18] Bert Vogelstein, Nickolas Papadopoulos, Victor E Velculescu, Shibin Zhou, Luis A Diaz Jr, and Kenneth W Kinzler. Cancer genome landscapes. Science, 339(6127):1546–1558, 2013.

[19] Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Wen-tau Yih, Daniel Fried, Sida Wang, and Tao Yu. DS-1000: A natural and reliable benchmark for data science code generation.
In International Conference on Machine Learning, pages 18319–18345. PMLR, 2023.

[20] Qian Huang, Jian Vora, Percy Liang, and Jure Leskovec. Mlagentbench: Evaluating language agents on machine learning experimentation. In Forty-first International Conference on Machine Learning, 2024.

[21] Liqiang Jing, Zhehui Huang, Xiaoyang Wang, Wenlin Yao, Wenhao Yu, Kaixin Ma, Hongming Zhang, Xinya Du, and Dong Yu. DSBench: How far are data science agents from becoming data science experts? In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=DSsSPr0RZJ.

[22] Ken Gu, Ruoxi Shang, Ruien Jiang, Keying Kuang, Richard-John Lin, Donghe Lyu, Yue Mao, Youran Pan, Teng Wu, Jiaqian Yu, et al. BLADE: Benchmarking language model agents for data-driven science. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13936–13971, 2024.

[23] Minyang Tian, Luyu Gao, Shizhuo Zhang, Xinan Chen, Cunwei Fan, Xuefei Guo, Roland Haas, Pan Ji, Kittithat Krongchon, Yao Li, et al. Scicode: A research coding benchmark curated by scientists. Advances in Neural Information Processing Systems, 37:30624–30650, 2024.

[24] Soroosh Tayebi Arasteh, Tianyu Han, Mahshad Lotfinia, Christiane Kuhl, Jakob Nikolas Kather, Daniel Truhn, and Sven Nebelung. Large language models streamline automated machine learning for clinical studies. Nature Communications, 15(1):1603, 2024.

[25] Jinlan Fu, See Kiong Ng, Zhengbao Jiang, and Pengfei Liu. GPTScore: Evaluate as you desire. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6556–6576, 2024.

[26] Tal Ridnik, Dedy Kredo, and Itamar
Friedman. Code generation with alphacodium: From prompt engineering to flow engineering. arXiv preprint arXiv:2401.08500, 2024.

[27] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.

[28] Ruisheng Cao, Fangyu Lei, Haoyuan Wu, Jixuan Chen, Yeqiao Fu, Hongcheng Gao, Xinzhuang Xiong, Hanchong Zhang, Wenjing Hu, Yuchen Mao, et al. Spider2-v: How far are multimodal agents from automating data science and engineering workflows? Advances in Neural Information Processing Systems, 37:107703–107744, 2024.

[29] Siyuan Guo, Cheng Deng, Ying Wen, Hechang Chen, Yi Chang, and Jun Wang. DS-Agent: Automated data science by empowering large language models with case-based reasoning. In International Conference on Machine Learning, pages 16813–16848. PMLR, 2024.

[30] Antoine Grosnit, Alexandre Maraval, James Doran, Giuseppe Paolo, Albert Thomas, Refinath Shahul Hameed Nabeezath Beevi, Jonas Gonzalez, Khyati Khandelwal, Ignacio Iacobacci, Abdelhakim Benechehab, et al. Large language models orchestrating structured reasoning achieve kaggle grandmaster level. arXiv preprint arXiv:2411.03562, 2024.

[31] Ziming Li, Qianbo Zang, David Ma, Jiawei Guo, Tuney Zheng, Minghao Liu, Xinyao Niu, Yue Wang, Jian Yang, Jiaheng Liu, et al. Autokaggle: A multi-agent framework for autonomous data science competitions. arXiv preprint arXiv:2410.20424, 2024.

[32] Nikita Mehandru, Amanda K Hall, Olesya Melnichenko, Yulia Dubinina, Daniel Tsirulnikov, David Bamman, Ahmed Alaa, Scott Saponas, and Venkat S Malladi. BioAgents: Democratizing bioinformatics analysis with multi-agent systems. arXiv preprint arXiv:2501.06314, 2025.

NeurIPS Paper Checklist

1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]
Justification: [NA]
Guidelines:
• The answer NA means that the abstract and introduction do not include the claims made in the paper.
• The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.
• The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.
• It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [Yes]
Justification: [NA]
Guidelines:
• The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.
• The authors are encouraged to create a separate "Limitations" section in their paper.
• The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.
• The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with
https://arxiv.org/abs/2505.16100v1
a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.
• The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.
• The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.
• If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.
• While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

3. Theory assumptions and proofs

Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

Answer: [NA]

Justification: This paper does not involve theoretical results.

Guidelines:
• The answer NA means that the paper does not include theoretical results.
• All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.
• All assumptions should be clearly stated or referenced in the statement of any theorems.
• The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.
• Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.
• Theorems and Lemmas that the proof relies upon should be properly referenced.

4. Experimental result reproducibility

Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer NA means that the paper does not include experiments.
• If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
• If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.
• Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same
dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.
• While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:
(a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.
(b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.
(c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).
(d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

5. Open access to data and code

Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer NA means that the paper does not include experiments requiring code.
• Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• While we encourage the release of code and data, we understand that this might not be possible, so "No" is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).
• The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.
• The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.
• The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.
• At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).
• Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

6. Experimental setting/details

Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they
were chosen, type of optimizer, etc.) necessary to understand the results?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer NA means that the paper does not include experiments.
• The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.
• The full details can be provided either with the code, in appendix, or as supplemental material.

7. Experiment statistical significance

Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer NA means that the paper does not include experiments.
• The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.
• The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).
• The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)
• The assumptions made should be given (e.g., Normally distributed errors).
• It should be clear whether the error bar is the standard deviation or the standard error of the mean.
• It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.
• For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).
• If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

8. Experiments compute resources

Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer NA means that the paper does not include experiments.
• The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.
• The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
• The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper).

9. Code of ethics

Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics (https://neurips.cc/public/EthicsGuidelines)?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.
• If the authors answer No, they should explain the special circumstances that require a
deviation from the Code of Ethics.
• The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

10. Broader impacts

Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer NA means that there is no societal impact of the work performed.
• If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.
• Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.
• The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.
• The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.
• If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

11. Safeguards

Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer NA means that the paper poses no such risks.
• Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.
• Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.
• We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

12. Licenses for existing assets

Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer
NA means that the paper does not use existing assets.
• The authors should cite the original paper that produced the code package or dataset.
• The authors should state which version of the asset is used and, if possible, include a URL.
• The name of the license (e.g., CC-BY 4.0) should be included for each asset.
• For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
• If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.
• For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.
• If this information is not available online, the authors are encouraged to reach out to the asset's creators.

13. New assets

Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

Answer: [Yes]

Justification: [NA]

Guidelines:
• The answer NA means that the paper does not release new assets.
• Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.
• The paper should discuss whether and how consent was obtained from people whose asset is used.
• At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

14. Crowdsourcing and research with human subjects

Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]

Justification: [NA]

Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
• According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

15. Institutional review board (IRB) approvals or equivalent for research with human subjects

Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

Answer: [NA]

Justification: [NA]

Guidelines:
• The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.
• Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.
• We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors
to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.
• For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.

16. Declaration of LLM usage

Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.

Answer: [NA]

Justification: [NA]

Guidelines:
• The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components.
• Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.

Contents of Appendix
A Example hypothesis and supporting evidence
B Captions of Biomedical Data Tables
C Extracting hypothesis and evidence from publications
D Categorization of biomedical publications
E Categorization of analysis tasks
F Categorization of code errors
G Agent prompts
  G.1 CodeGen
  G.2 ReAct
  G.3 CodeGen-Reasoning
  G.4 ReAct-Reasoning

A Example hypothesis and supporting evidence

{
  "hypotheses": [
    {
      "hypothesis": "MGMT promoter methylation is positively associated with a hypermutator phenotype in treated glioblastomas.",
      "null_hypothesis": "There is no association between MGMT promoter methylation and a hypermutator phenotype in treated glioblastomas."
,
      "supporting_evidences": [
        {
          "analysis_plan": "Perform a correlation analysis to evaluate the relationship between MGMT promoter methylation status and mutation rates in treated glioblastoma samples.",
          "evidence": "A significant positive correlation was found between MGMT promoter methylation and mutation rate (r = 0.65, p < 0.01), indicating a potential association with the hypermutator phenotype.",
          "analysis_variables": ["MGMT_promoter_methylation_status", "mutation_rate"],
          "result_variable": "correlation_coefficient",
          "result_variable_value": "r = 0.65, p < 0.01"
        },
        {
          "analysis_plan": "Compare mutation rates between glioblastomas with methylated and unmethylated MGMT promoters using a t-test.",
          "evidence": "Tumors with methylated MGMT promoters exhibited significantly higher mutation rates than those without methylation (mean difference = 15 mutations/sample,
p < 0.05), supporting the association.",
          "analysis_variables": ["MGMT_promoter_methylation_status", "mutation_rate"],
          "result_variable": "mean_mutation_rate_difference",
          "result_variable_value": "15 mutations/sample, p < 0.05"
        }
      ]
    },
    {
      "hypothesis": "The PIK3R1 gene is frequently mutated in glioblastoma samples.",
      "null_hypothesis": "The PIK3R1 gene is not frequently mutated in glioblastoma samples.",
      "supporting_evidences": [
        {
          "analysis_plan": "Calculate the mutation frequency of the PIK3R1 gene across all glioblastoma samples.",
          "evidence": "Mutations in PIK3R1 were observed in 25% of glioblastoma samples, indicating a relatively high mutation frequency.",
          "analysis_variables": ["PIK3R1_mutation_status"],
          "result_variable": "mutation_frequency",
          "result_variable_value": "25%"
        },
        {
          "analysis_plan": "Rank gene mutation frequencies to determine whether PIK3R1 is among the most frequently mutated genes in glioblastoma.",
          "evidence": "PIK3R1 mutation frequency ranked within the top 10% among all genes analyzed, suggesting it is frequently altered in glioblastoma.",
          "analysis_variables": ["gene_mutation_status"],
          "result_variable": "relative_mutation_frequency",
          "result_variable_value": "Top 10% among all genes"
        }
      ]
    }
  ]
}

Supplementary Figure 1: Examples of the hypothesis, counter-hypothesis, and supporting evidence extracted from biomedical publications.

To illustrate the structure of entries in BioDSA-1K, we present two representative examples derived from glioblastoma studies. Each example consists of a hypothesis formulated from the original study's conclusions, a corresponding null hypothesis, and multiple supporting analyses that provide evidence for or against the claim.

Example 1: MGMT Methylation and Hypermutation. The hypothesis states that MGMT promoter methylation is positively associated with a hypermutator phenotype in treated glioblastomas. The corresponding null hypothesis asserts no such association.
In one analysis, the study performed a correlation analysis between MGMT promoter methylation status and mutation rates, reporting a statistically significant positive correlation (r = 0.65, p < 0.01), suggesting that higher methylation is linked to increased mutation burden. A second analysis compared mutation rates between tumors with methylated versus unmethylated MGMT promoters, showing that methylated tumors had significantly higher mutation rates (mean difference = 15 mutations per sample, p < 0.05). These analyses collectively support the hypothesis.

Example 2: PIK3R1 Mutation Frequency. This hypothesis posits that the PIK3R1 gene is frequently mutated in glioblastoma samples, with the null hypothesis stating that it is not. In the first analysis, the study reported a mutation frequency of 25% for PIK3R1 across glioblastoma samples, indicating a notable prevalence. A second analysis ranked gene mutation frequencies and found that PIK3R1 was among the top 10% of all mutated genes, further supporting the claim of frequent alteration.

These examples demonstrate how BioDSA-1K captures both the semantic structure of biomedical hypotheses and the analytical reasoning used to evaluate them, providing a grounded framework for assessing the capabilities of AI agents in data-driven scientific inference.

B Captions of Biomedical Data Tables

To support automated hypothesis validation and dataset reasoning tasks, we systematically generated structured captions for biomedical data tables from the cBioPortal repository. Each caption describes the content and structure of a tabular dataset, including its schema, value distributions, and metadata annotations. We developed a modular pipeline to process every dataset directory and extract metadata from text-based tables,
primarily those with filenames beginning with data_. For each dataset, we extracted high-level metadata including dataset ID, cancer type, and description. Within each dataset directory, we identified all data tables and parsed their contents while ignoring comment lines (those beginning with "#"). The first non-comment line was interpreted as the column header. Subsequent lines were parsed as tab-delimited rows, with short rows padded and long rows truncated to maintain schema alignment.

To ensure consistency and facilitate downstream usage, we cleaned column names by removing punctuation and replacing whitespace with underscores. We then inferred the data type of each column using a custom heuristic function and computed column-wise statistics depending on the inferred type:
• Binary and categorical columns: Top value counts and number of unique values were reported, along with missing value rate.
• Integer-valued columns: We computed quantiles (1%, 20%, 40%, 60%, 80%, 99%) as well as minimum and maximum values.
• Continuous columns: Descriptive statistics were generated, including count, mean, standard deviation, and range, rounded to four decimal places.

Each table's caption includes its name, number of rows and columns, column-level statistics, and preserved comment rows if present. The final structured metadata was saved as JSON files under a centralized metadata directory, one per dataset. These captions serve as machine-readable documentation for real-world biomedical tables and are critical for enabling dataset-aware reasoning by AI agents.
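The parsing and captioning steps described above can be sketched in a few lines of Python. This is a minimal illustrative reconstruction, not the released pipeline: the function names (`clean_column_name`, `infer_type`, `caption_table`) are our own, and the type heuristic is deliberately simplified (the per-type quantile and descriptive statistics are omitted).

```python
import re
from pathlib import Path

def clean_column_name(name: str) -> str:
    """Remove punctuation (keeping underscores) and replace whitespace with '_'."""
    name = re.sub(r"[^\w\s]", "", name)
    return re.sub(r"\s+", "_", name.strip())

def infer_type(values):
    """Crude heuristic: binary/categorical vs. integer vs. continuous."""
    non_missing = [v for v in values if v not in ("", "NA")]
    try:
        nums = [float(v) for v in non_missing]
    except ValueError:
        return "binary" if len(set(non_missing)) <= 2 else "categorical"
    return "integer" if all(n.is_integer() for n in nums) else "continuous"

def caption_table(path: Path) -> dict:
    """Build a structured caption for one tab-delimited data_ table."""
    lines = path.read_text().splitlines()
    comments = [l for l in lines if l.startswith("#")]   # skip comment rows
    data = [l for l in lines if not l.startswith("#")]
    header = [clean_column_name(c) for c in data[0].split("\t")]
    rows = []
    for line in data[1:]:
        cells = line.split("\t")
        # Pad short rows and truncate long rows to keep schema alignment.
        cells = (cells + [""] * len(header))[: len(header)]
        rows.append(cells)
    columns = []
    for i, name in enumerate(header):
        values = [r[i] for r in rows]
        columns.append({"name": name, "data_type": infer_type(values)})
    return {
        "name": path.name,
        "n_rows": len(rows),
        "n_columns": len(header),
        "n_comment_rows": len(comments),
        "columns": columns,
    }
```

A caption produced this way can then be serialized to JSON alongside the dataset's high-level metadata, as in the example below.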
An example metadata structure is as follows:

{
  "dataset_id": "example_ds",
  "type_of_cancer": "glioblastoma",
  "description": "Clinical and genomic data for GBM samples",
  "tables": [
    {
      "name": "data_clinical.txt",
      "n_rows": 287,
      "n_columns": 12,
      "n_comment_rows": 4,
      "columns": [
        {
          "name": "age_at_diagnosis",
          "data_type": "integer",
          "n_unique": 45,
          "missing_rate": 0.03,
          "statistics": {
            "min": 22,
            "0.2": 45,
            "0.8": 73,
            "max": 88,
            "statistics_type": "quantiles"
          }
        },
        ...
      ]
    }
  ]
}

This structured captioning process enables interpretability, reusability, and intelligent query capabilities across diverse biomedical datasets in BioDSA-1K.

C Extracting hypothesis and evidence from publications

To construct a benchmark of data-driven scientific claims, we developed a large-scale pipeline for extracting testable hypotheses and their supporting evidence from biomedical publications. Each extracted instance consists of a binary hypothesis derived from the abstract, a plausible counter-hypothesis, and one or more structured evidence entries grounded in quantitative findings.

We began with a curated metadata file from cBioPortal, which includes over 1,000 publications indexed by PubMed ID (PMID), their associated dataset identifiers, and accompanying titles, abstracts, and result summaries. After deduplication and filtering, we paired each publication with its corresponding abstract and dataset identifiers. For each entry, we concatenated the title and abstract to form a unified context and sent it to a large language model (GPT-4o) via a structured prompt. The prompt was designed to elicit hypotheses that are:
• Binary and testable using statistical or machine learning methods,
• Grounded in measurable outcomes, with clear references to statistical relationships or effect sizes,
• Accompanied by structured supporting evidence, including analysis plans, involved variables, statistical measures, and result values.
# The prompt:
"""
The following is the abstract of a publication:
{abstract}

Task: Given the abstract of a publication, your task is to extract binary hypotheses
and their supporting evidences that can be tested through data analysis.

Requirements for hypotheses and evidences:
1. Each hypothesis must be testable using statistical analysis or machine learning methods
2. All evidence must include specific, measurable quantities or statistical relationships
3. Result values must be numerical (e.g., percentages, counts, p-values, correlation coefficients) or categorical with clear classifications
4. Analysis variables must be specific data columns or features that exist in the dataset

Return your answer as a JSON object in the following format:
```json
{
  "hypotheses": [
    {
      "hypothesis": a specific, binary hypothesis that can be tested statistically, from the abstract, the one which is considered to be true from the study,
      "wrong_hypothesis": make a random perturbation of the hypothesis so that it is a wrong hypothesis,
      "supporting_evidences": [ // the evidences that support the alternative hypothesis
        {
          "analysis_plan": a brief analysis plan that can yield this evidence,
          "evidence": specific statistical finding or measurement,
          "analysis_variables": list of exact variables/features needed for analysis,
          "result_variable": the specific metric or statistical measure used,
          "result_variable_value": numerical value, statistical measure, or categorical outcome
        },
        ...
      ]
    },
    ...
  ]
}
```
"""

To increase throughput and reliability, we used batched LLM calls with zero temperature to ensure deterministic completions. Each LLM output was parsed using a custom function that handled both valid JSON and malformed output formats via regular expression matching. The output structure follows a fixed schema that includes a hypothesis, a wrong_hypothesis (a small perturbation to simulate a plausible counterfactual), and a list of supporting_evidences, each containing fields such as analysis_plan, evidence, analysis_variables, result_variable, and result_variable_value.
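A parser of the kind described above, handling both valid JSON and malformed completions via regular-expression fallback, might look roughly like this. It is an assumption-laden sketch rather than the paper's actual code; in particular, the fallback order and the empty-schema default are our choices.

```python
import json
import re

def parse_llm_output(raw: str) -> dict:
    """Extract a JSON object from an LLM completion.

    Tries strict JSON first, then falls back to regex extraction when the
    model wraps the object in markdown fences or surrounding prose.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Prefer a ```json ... ``` fenced block if one is present.
    fenced = re.search(r"```(?:json)?\s*(\{.*\})\s*```", raw, re.DOTALL)
    candidate = fenced.group(1) if fenced else None
    if candidate is None:
        # Otherwise grab the outermost brace-delimited span.
        match = re.search(r"\{.*\}", raw, re.DOTALL)
        candidate = match.group(0) if match else None
    if candidate is not None:
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            pass
    return {"hypotheses": []}  # nothing parseable; fall back to empty schema
```

With zero-temperature batched calls, most completions parse on the first branch; the regex fallbacks catch the occasional fenced or chatty response.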
Each extracted hypothesis is linked to its source publication (via PMID) and the relevant datasets (via dataset ID) so that the claim can later be validated against real-world biomedical tables. The full outputs were stored as structured JSON files, one per publication. This corpus forms the foundation of BioDSA-1K, enabling AI agents to reason over realistic, evidence-backed scientific claims.

D Categorization of biomedical publications

Genomics
Publications in this category focus on large-scale genomic profiling of tumors. These studies utilize high-throughput sequencing to catalog somatic mutations, copy-number variations, and other genetic alterations across cancer samples.

Molecular
This class covers research that investigates molecular characteristics beyond DNA mutations. It includes analyses of transcriptomic, proteomic, and epigenomic data, often derived from both patient samples and established cancer cell lines.

Pan-Cancer
Pan-Cancer studies undertake comparative analyses across multiple types of cancers. They aim to identify common molecular patterns and differences, thereby deepening our understanding of shared and unique cancer pathways.

Therapeutics
These publications explore the relationship between genomic alterations and drug responses. The focus is on identifying potential therapeutic targets and advancing personalized treatment strategies based on genetic and molecular data.

Biomarkers
Research in this class is dedicated to discovering and validating diagnostic and prognostic markers. These biomarkers help in predicting disease outcomes, guiding treatment decisions, and supporting early detection.

Methods
Publications categorized as Methods introduce new computational tools, algorithms, or experimental techniques that facilitate the analysis and interpretation of complex biomedical data.

Integrative
Integrative
studies combine data from multiple omics layers—such as genomics, transcriptomics, and proteomics—to provide a comprehensive view of tumor biology. They aim to interconnect disparate data types into coherent biological insights.

Translational
This class emphasizes bridging the gap between research and clinical application. Translational studies apply genomic and molecular findings to improve diagnostic methods, prognostic assessments, and treatment strategies in clinical practice.

E Categorization of analysis tasks

Correlation analysis (Correlation)
Tasks that focus on statistically relating two or more variables. Examples include correlating gene methylation status with mutation rates or associating mutational profiles with clinical factors such as smoking status.

Comparative analysis (Comparison)
Tasks that directly contrast groups or conditions. These include comparing mutation frequencies between groups (e.g., methylated vs. unmethylated promoters) or contrasting profiles across different cancer subtypes.

Frequency analysis (Frequency)
Tasks that measure the occurrence or rate of specific genomic events. Typical examples are calculating the mutation frequency for a given gene or determining the prevalence of a particular genetic alteration.

Clustering and classification (Clustering)
Tasks that involve grouping data based on similarities. These studies might use cluster analysis to categorize samples by genomic features or mutational signatures.

Survival and prognostic analysis (Survival)
Tasks that associate molecular or genomic features with patient outcomes, such as survival curve comparisons or prognostic evaluations.

Functional and experimental analysis (Functional)
Tasks that explore gene function or cellular behavior through experimental approaches. This includes RNA interference experiments or assays measuring the effects of gene knockdown on cell proliferation.
Genomic structural analysis (Structural): Tasks that analyze genomic architecture or structural variants. Examples include the evaluation of copy-number alterations, genomic rearrangements, or spatial mutation distributions.

Pathway and integrative analysis (Pathway): Tasks that integrate multiple data types to elucidate biological pathways and networks. These include integrative pathway analyses, enrichment studies, or assessments of driver mutations in signaling cascades.

F Categorization of code errors

To enable systematic analysis of common failure modes in AI-generated code, we group low-level Python error types into broader categories reflecting common code quality issues. This many-to-one mapping provides a more interpretable summary of model behaviors and facilitates downstream visualization and comparison. The categorization is defined as follows:

• Variable/Object Misuse: Errors such as KeyError, AttributeError, NameError, and IndexError that arise from referencing undefined variables, missing dictionary keys, or invalid object attributes.
• Math/Logic Error: Includes errors like ZeroDivisionError, ValueError, and numpy.linalg.LinAlgError, which typically result from invalid arithmetic operations, numerical instability, or logical violations.
• Import/Module Error: Consists of ImportError and ModuleNotFoundError, indicating missing dependencies or incorrect import paths.
• File/I-O Error: Captures input/output-related issues such as FileNotFoundError and OSError, often caused by referencing unavailable files or malformed I/O operations.
• Pandas/Data Error: Includes errors from data processing libraries, such as pandas.errors.ParserError, MergeError, and IndexingError, typically caused by invalid parsing, merging, or indexing operations.
• General Exception: Encompasses generic or runtime-specific errors such as Exception and RuntimeError, which represent critical failures not captured by more specific categories.
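This many-to-one mapping can be sketched as a small lookup table. The category names follow the taxonomy above; the dictionary and the `classify_error` helper are our own illustration, not code from the paper:

```python
# Map low-level Python exception names to the broader categories above.
# Anything not explicitly listed falls back to "General Exception".
ERROR_CATEGORIES = {
    "KeyError": "Variable/Object Misuse",
    "AttributeError": "Variable/Object Misuse",
    "NameError": "Variable/Object Misuse",
    "IndexError": "Variable/Object Misuse",
    "ZeroDivisionError": "Math/Logic Error",
    "ValueError": "Math/Logic Error",
    "numpy.linalg.LinAlgError": "Math/Logic Error",
    "ImportError": "Import/Module Error",
    "ModuleNotFoundError": "Import/Module Error",
    "FileNotFoundError": "File/I-O Error",
    "OSError": "File/I-O Error",
    "pandas.errors.ParserError": "Pandas/Data Error",
    "MergeError": "Pandas/Data Error",
    "IndexingError": "Pandas/Data Error",
}

def classify_error(exc_name: str) -> str:
    """Return the broad category for a low-level exception name."""
    return ERROR_CATEGORIES.get(exc_name, "General Exception")

print(classify_error("KeyError"))      # Variable/Object Misuse
print(classify_error("RuntimeError"))  # General Exception
```

Keying on exception names (rather than exception classes) mirrors how error types are typically harvested from execution-log strings.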
G Agent prompts

G.1 CodeGen

# Prompts for CodeGen methods:
"""
# TASK
Given the user-provided scientific hypothesis, you **Must** write {language} code to help the
user evaluate the hypothesis.

# IMPORTANT: CODE OUTPUT REQUIREMENTS
You must import all the necessary libraries at the beginning of your code.
You must use explicit print() statements for ALL outputs you want to see or analyze.
Simply writing expressions like 'df.head()' will NOT show results in the execution log.
Always use:
- print(df.head())
- print(analysis_result)
- print(statistical_test_output)
Every intermediate result and final output must be wrapped in a print() statement to be visible in the execution log.

# DATASET PATHS
{dataset_paths}

# DATASET SCHEMA
{dataset_schema}

## Output
Your output should be in Markdown format and you should wrap the generated code in ```{language} ``` tags.
"""

G.2 ReAct

# Prompts for ReAct methods:
"""
# TASK
Evaluate the user's scientific hypothesis using the datasets provided. You can write and execute {language} code to evaluate the hypothesis by invoking the tool {tool_name}.

# IMPORTANT: CODE OUTPUT REQUIREMENTS
You must import all the necessary libraries at the beginning of your code.
You must use explicit print() statements for ALL outputs you want to see or analyze.
Simply writing expressions like 'df.head()' will NOT show results in the execution log.
Always use:
- print(df.head())
- print(analysis_result)
- print(statistical_test_output)
Every intermediate result and final output must be wrapped in a print() statement to be visible in the execution log.

# DATASET SCHEMA
{dataset_schema}

# DATASET PATHS
{dataset_paths}
"""

G.3 CodeGen-Reasoning

# CodeGen-Reasoning prompts
ANALYSIS_PLAN_PROMPT_TEMPLATE = """
# TASK
Generate an analysis plan to evaluate the user's scientific hypothesis using the datasets provided. The plan should consist of clear, actionable steps that can be **easily converted to {language} code** without needing any additional information.

# REQUIREMENTS
- Use only table and column names from the schema: do not invent or guess names.
- Ensure every step is unambiguous and directly executable.
- Use consistent naming for all variables (e.g., tables, columns) throughout the plan.
- Be as concise as possible while maintaining full clarity and precision.

# DATASET PATHS
{dataset_paths}

# DATASET SCHEMA
{dataset_schema}

# OUTPUT FORMAT
Wrap the analysis plan in <analysis_plan> </analysis_plan> tags. So an example output would be
```
<analysis_plan>
1. load the dataset
2. print hello world
</analysis_plan>
```
"""

CODE_GENERATION_PROMPT_TEMPLATE = """
# TASK
Given the user-provided analysis plan for the user's scientific hypothesis, you **Must** write {language} code to fulfill the plan so that the user can execute the code later to evaluate the hypothesis.

# IMPORTANT: CODE OUTPUT REQUIREMENTS
You must import all the necessary libraries at the beginning of your code.
You must use explicit print() statements for ALL outputs you want to see or analyze.
Simply writing expressions like 'df.head()' will NOT show results in the execution log.
Always use:
- print(df.head())
- print(analysis_result)
- print(statistical_test_output)
Every intermediate result and final output must be wrapped in a print() statement to be visible in the execution log.

# DATASET PATHS
{dataset_paths}

## Output
Your output should be in Markdown format and you should wrap the generated code in ```{language} ``` tags.
"""

G.4 ReAct-Reasoning

AGENT_MODEL_PROMPT_TEMPLATE = """
You are a scientific agent who can plan and execute python
code iteratively to evaluate a scientific hypothesis.
Note:
- You must execute and refine the given analysis plan iteratively until you have enough evidence to support the hypothesis.
- You must always write a single Python code block that can be executed directly based on the analysis plan.
- Use `print()` statements in your code to get the observations.
"""

PLANNING_PROMPT_TEMPLATE = """
# TASK
Generate an analysis plan to evaluate the user's scientific hypothesis using the datasets provided. The plan should consist of clear, actionable pseudo-code steps that can be **easily converted to python code** without needing any additional information.

# REQUIREMENTS
- Use only table and column names from the schema: do not invent or guess names.
- Ensure every step is unambiguous and directly executable.
- Use consistent naming for all variables (e.g., tables, columns) throughout the plan.
- Be as concise as possible while maintaining full clarity and precision.

# DATASET PATHS
{dataset_paths}

# DATASET SCHEMA
{dataset_schema}
"""
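The templates above are plain Python format strings with `{dataset_paths}`, `{dataset_schema}`, and similar placeholders. A minimal sketch of how one might be instantiated follows; the abbreviated template text, path, and schema strings are illustrative placeholders, not values from the paper:

```python
# Minimal illustration of filling one of the prompt templates above with
# str.format(). Only the placeholder mechanics matter here; the template
# body is abbreviated.
PROMPT_TEMPLATE = """# TASK
Write {language} code to evaluate the hypothesis.

# DATASET PATHS
{dataset_paths}

# DATASET SCHEMA
{dataset_schema}
"""

prompt = PROMPT_TEMPLATE.format(
    language="python",
    dataset_paths="/data/tcga/mutations.csv",          # hypothetical path
    dataset_schema="mutations(gene, sample_id, vaf)",  # hypothetical schema
)
print(prompt)
```

Note that any literal braces inside a real template (for example, JSON examples) would need to be escaped as `{{` and `}}` for `str.format()` to leave them intact.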
arXiv:2505.16102v2 [cs.CL] 23 May 2025

Continually Self-Improving Language Models for Bariatric Surgery Question–Answering

Yash Kumar Atri (atri@virginia.edu), School of Data Science, University of Virginia, Charlottesville, VA, USA
Thomas Shin (thomas.shin@uvahealth.org), Department of Surgery, University of Virginia School of Medicine, Charlottesville, VA, USA
Thomas Hartvigsen (hartvigsen@virginia.edu), School of Data Science, University of Virginia, Charlottesville, VA, USA

Abstract

While bariatric and metabolic surgery (MBS) is considered the gold standard treatment for severe and morbid obesity, its therapeutic efficacy hinges upon active and longitudinal engagement with multidisciplinary providers, including surgeons, dietitians/nutritionists, psychologists, and endocrinologists. This engagement spans the entire patient journey, from preoperative preparation to long-term postoperative management. However, this process is often hindered by numerous healthcare disparities, such as logistical and access barriers, which impair easy patient access to timely, evidence-based, clinician-endorsed information. To address these gaps, we introduce bRAGgen, a novel adaptive RAG-based model that autonomously integrates real-time medical evidence when response confidence dips below dynamic thresholds. This self-updating architecture ensures that responses remain current and accurate, reducing the risk of misinformation. Additionally, we introduce bRAGq, a curated dataset of 1,302 bariatric surgery–related questions, validated by an expert bariatric surgeon, constituting the first large-scale, domain-specific benchmark for comprehensive MBS care. In a two-phase evaluation, bRAGgen is benchmarked against state-of-the-art models using both large language model (LLM)-based metrics and expert surgeon review. Across all evaluation dimensions, bRAGgen demonstrates substantially superior performance in generating clinically accurate and relevant responses.
Data and Code available at https://github.com/yashkumaratri/bRAGgen

1. Introduction

Severe obesity and its subsequent metabolic disease have become a widespread endemic condition, leading to a projected incidence of 25% across the United States by 2030, causing a massive health burden in the general US population (Ward et al., 2019). Metabolic and bariatric surgery (MBS) remains the gold standard treatment for severe obesity and metabolic disease, with over 270,000 annual procedures in the United States (Clapp et al., 2024; Mechanick et al., 2020; Barres et al., 2013; Loos and Yeo, 2022; Setarehdan et al., 2023). However, successful weight loss post-MBS heavily relies on patient education, which has led National Bariatric Surgery and Medical societies to emphasize extensive education services for MBS patients (Mechanick et al., 2020). Perioperative MBS patient education includes information about dietary modification, adjunctive lifestyle modifications, expectations surrounding postoperative complications, and psychosocial support, all important mitigators of periprocedural complications and postoperative weight regain, which can afflict up to 64% of patients in 5–10 years post-MBS (Groller et al., 2017; Bjerkan et al., 2022; David et al., 2020; McLennan et al., 2023; Kim et al., 2023). And after surgery, patients only attend their yearly follow-up an average of 6.5–29.6% of the time, leaving patients and providers with limited opportunities to communicate. Overall, the lack of sustained patient engagement and education post-MBS is a critical impediment to optimal postoperative outcomes with many causes, including low health literacy rates, information inaccessibility, and geographic distances to healthcare providers (Setarehdan et al., 2023; Mechanick et al., 2020; Schlottmann et al., 2023; Bartholomay et al., 2024). Given these challenges, there
is a clear need for scalable, accessible, and continually updated educational and decision-support tools tailored to the unique needs of MBS patients, spanning from preoperative preparation to long-term postoperative management (P et al., 2024). Traditional patient education materials, whether delivered in print, via static websites, or through periodic telehealth visits, often fail to adapt dynamically to emerging clinical evidence (Javanparast et al., 2021) or to the evolving clinical status of individual patients. Moreover, existing digital health platforms seldom incorporate mechanisms to detect when their guidance may be outdated or insufficiently confident (Wang et al., 2025), leading to knowledge gaps in both patients and clinicians. Large language models (LLMs) (Grattafiori et al., 2024; Abdin et al., 2024; Minaee et al., 2025) offer a potential solution by providing natural language interfaces for patients to query. However, LLMs face limitations due to fixed knowledge cutoffs (Cheng et al., 2024), and the most capable are trained on broad, general-purpose corpora (Alber et al., 2025), leaving them unaware of the latest bariatric surgery guidelines or nuanced postoperative considerations (Bélisle-Pipon, 2024). One popular way to address outdated knowledge in LLMs is through retrieval-augmented generation (RAG) methods (Gao et al., 2024), which retrieve up-to-date documents from a database. However, their databases are typically static, and efforts to grow them over time easily introduce "context noise," overwhelming the LLM with conflicting inputs and producing higher hallucination rates (Zhang and Zhang, 2025). Furthermore, these methods lack built-in mechanisms to assess when their own outputs may be insufficiently confident, placing patients and clinicians at risk of incomplete or incorrect guidance (Lewis et al., 2021).
We propose bRAGgen, an adaptive RAG framework that continuously monitors its response confidence and, upon detecting uncertainty, autonomously retrieves and integrates the latest peer-reviewed evidence and clinical guidelines from trusted biomedical sources such as PubMed¹. This self-updating architecture ensures that guidance remains current, accurate, and clinically relevant, reducing the risk of outdated or misleading recommendations. Complementing bRAGgen, we introduce bRAGq, a curated dataset of 1,302 bariatric surgery–related questions validated by a bariatric surgeon, constituting the first large-scale, domain-specific benchmark for the full spectrum of MBS patient care, from preoperative preparation to long-term postoperative management. In a two-phase evaluation, leveraging both LLM-based metrics and expert surgeon reviews, bRAGgen demonstrates substantially superior performance in generating clinically accurate, relevant, and actionable responses, paving the way for more accessible, evidence-based support for MBS patients.

1. https://pubmed.ncbi.nlm.nih.gov

Generalizable insights about machine learning in the context of healthcare

While we focus on bariatric surgery patients' educational needs, our work includes generalizable insights for other healthcare contexts. First, the need to access up-to-date medical evidence is widespread, especially in areas where the scientific literature grows quickly. Our work demonstrates that it is feasible to approach this problem by training models to directly access webpages, and keeping them relevant throughout deployment. Second, our methods are general purpose and are widely applicable. While there is a major need for bariatric surgery education and it is our expertise, our machine learning methods can be generalized
to any domains where patient questions can be collected, relevant literature exists, and model answers can be validated.

2. Related Work

The landscape of digital health interventions for bariatric care across the entire surgical spectrum, from preoperative preparation to long-term postoperative management, has expanded considerably, with multiple studies evaluating the efficacy and usability of mobile applications and web-based platforms. For instance, a German cohort study by Wu et al. (2024c) demonstrated that mHealth follow-up via a dedicated mobile app achieved comparable outcomes to traditional in-person care across weight loss, quality of life, and nutritional status metrics. Similarly, a systematic review by Patel and Thind (2020) identified 33 usability studies of mHealth apps across surgical subspecialties, underscoring both the potential of digital tools and the persistent challenge of sustaining long-term patient engagement. In parallel, advances in large language models (LLMs) (Grattafiori et al., 2024; Abdin et al., 2024; Minaee et al., 2025) have led to growing interest in their ability to distill up-to-date information (Atri et al., 2023c; Dey et al., 2020; Atri et al., 2023b,a, 2021) and support tasks such as (medical) question answering (Khlaut et al., 2024; Sviridova et al., 2024; Vladika and Matthes, 2024; Saeed, 2024) and clinical decision-making (Kim et al., 2024; Lu et al., 2024; Singhal et al., 2025). The GPT models (Kojima et al., 2023) showcased impressive zero- and few-shot capabilities but are inherently limited by their fixed pretraining cutoff and general-purpose data (Tamkin et al., 2021). Retrieval-augmented generation (RAG)-based methods (Gao et al., 2024) ground LLM outputs in external document collections to improve factuality (Li et al., 2024; Cai et al., 2024).
However, as the size of the retrieval corpus grows, RAG systems can suffer from "context noise" that increases hallucination rates and lack internal confidence estimates to flag uncertain outputs (Wu et al., 2024a).

Question Category            No. of Questions   Percentage (%)
Preparation & Logistics            68                5.22
Surgical & Medical Info           263               20.20
Risks & Complications             221               16.97
Recovery & Lifestyle              296               22.73
Nutrition & Diet                  102                7.83
Mental & Emotional Health         287               22.04
Cost & Insurance                   65                4.99

Table 1: Distribution of bariatric surgery-related questions across high-level categories in the bRAGq dataset. This table presents the number and percentage of questions within each thematic category, highlighting the diverse informational needs of patients throughout the bariatric surgery journey.

To overcome the limitations of conventional RAG systems, recent research has explored adaptive retrieval strategies that react to model uncertainty. Iterative and gated retrieval approaches (Jiang et al., 2025; Heydari et al., 2025) selectively filter external documents and refine the evidence set across multiple rounds, showing improvements in factuality and coherence. While these approaches mitigate irrelevant content and reduce hallucinations, they remain fundamentally external: they treat retrieval as an auxiliary process (Lewis et al., 2020) and stop short of modifying the model's internal knowledge. As such, they lack the capacity to assess and revise the model's internal parameters in response to evolving clinical evidence. This gap leaves current systems vulnerable to recurring errors when previously seen
topics reappear under different linguistic formulations. In contrast, our proposed framework, bRAGgen, introduces an adaptive self-updating mechanism that not only monitors response confidence but actively integrates validated, up-to-date clinical information into the model itself. By embedding uncertainty detection and retrieval within a continual learning loop, bRAGgen transitions from passive retrieval to active knowledge refinement. This enables it to stay synchronized with the latest postoperative guidelines and avoid repeating outdated or incorrect responses over time. Complementing this architecture is bRAGq, a rigorously curated dataset of 1,302 real-world patient questions covering nutrition, lifestyle, complications, and mental health in the postoperative MBS setting. Validated by a board-certified bariatric surgeon, bRAGq offers the first specialized benchmark to evaluate clinical QA systems beyond general-purpose health datasets. Together, bRAGgen and bRAGq address the dual challenge of knowledge obsolescence and domain specificity, paving the way for clinically grounded, scalable, and responsive patient support in bariatric aftercare.

3. Dataset

We introduce bRAGq, a domain-specific dataset curated to reflect the breadth and depth of questions commonly posed by bariatric surgery patients. Designed to support the development of intelligent tools for patient education and clinical decision support, bRAGq captures concerns spanning the entire surgical journey, from preoperative preparation to long-term postoperative management. The dataset was constructed in close collaboration with board-certified bariatric surgeons to ensure clinical validity and relevance, encompassing psychological, medical, and lifestyle-related questions.
It comprises 1,302 total entries: 611 drawn from PubMedQA (Jin et al., 2019), of which 201 were flagged by experts as not representative of everyday patient concerns, and 691 synthetically generated based on expert-informed templates and real-world patient interactions. The questions span a wide range of thematic categories, ensuring comprehensive coverage of key issues in bariatric care. These include pre-surgical considerations, intraoperative topics, postoperative management, dietary guidance, mental health, and lifestyle adaptation. As shown in Table 1, the largest proportion of questions fall under Recovery & Lifestyle (22.73%), followed by Mental & Emotional Health (22.04%) and Surgical & Medical Info (20.20%), reflecting the areas patients most frequently seek guidance on. Table 2 presents representative examples from each category, illustrating the dataset's granularity and diversity. By aligning with real-world patient priorities and clinical input, bRAGq provides a rigorous benchmark for evaluating the performance of language models in delivering accurate, trustworthy, and context-aware responses in the bariatric surgery domain. Beyond benchmarking, it also serves as a valuable resource for training patient-facing conversational agents that are empathetic, evidence-informed, and sensitive to the unique needs of this clinical population.

Risks & Complications:
1. Are vitamin D levels and bone turnover markers related to non-alcoholic fatty liver disease in severely obese patients?
2. Does older age limit postbariatric surgery cognitive benefits: a preliminary investigation?

Recovery & Lifestyle:
1. Does clinical trial demonstrate exercise following bariatric surgery improves insulin sensitivity?
2. Are serum markers of bone turnover increased at six and 18 months after Roux-en-Y bariatric surgery: correlation with the reduction in leptin?

Preparation & Logistics:
1.
Does a Pre-Hospital Patient Education Program improve Outcomes of Bariatric Surgery?
2. Does perioperative care map improve compliance with best practices for the morbidly obese?

Surgical & Medical Info:
1. Is laparoscopic gastric bypass superior to laparoscopic gastric banding for treatment of morbid obesity?
2. Is potentially life-threatening sleep apnea unrecognized without aggressive evaluation?

Cost & Insurance:
1. Does Medicare and Medicaid status predict prolonged length of stay after bariatric surgery?
2. Is medication cost significantly reduced after Roux-en-Y gastric bypass in obese patients?

Mental & Emotional Health:
1. Are patient expectations of bariatric surgery gender specific – a prospective, multicenter cohort study?
2. Is support group meeting attendance associated with better weight loss?

Nutrition & Diet:
1. Does dehydroepiandrosterone-sulfate modify human fatty acid composition of different adipose tissue depots?
2. Does low 25-hydroxyvitamin D affect insulin sensitivity in obesity after bariatric surgery?

Table 2: Sample Questions for Each Bariatric Surgery Category. This table presents two example questions from each major category within the bariatric surgery domain, reflecting the primary concerns of patients throughout their surgical journey.

[Figure 1: Architecture of the proposed method bRAGgen. The system integrates large language models (e.g., Llama3) with real-time web retrieval capabilities. When confidence falls below the threshold (α), the system automatically retrieves updated information from authoritative medical sources to enhance response accuracy.]

4.
Proposed Methodology

In response to the growing need for timely, evidence-based decision support in clinical settings, we propose an integrated framework that enhances retrieval-augmented generation (RAG) with continuous online adaptation. Our system is specifically designed to provide contextually relevant, accurate, and safe clinical recommendations by combining several key components: a semantic cache, a multi-source web retrieval engine, an adaptive text generation module, and an online learning protocol. These components are carefully chosen to address critical challenges in clinical decision-making, including rapid access to high-quality medical information, comprehensive evidence retrieval from trusted sources, dynamic and context-sensitive response generation, and continuous model refinement. We formalize the framework as:

S = (C, R, G, L)

where each component is defined as follows:

• C represents the semantic cache, which ensures fast retrieval of relevant medical documents. It leverages SentenceTransformer embeddings and Faiss indexing for rapid access to domain-specific information, minimizing latency.
• R is the web-based multi-source retrieval engine, which uses a Markov Decision Process (MDP) to focus on authoritative medical sources. The engine aggregates external evidence from trusted web sources, enriching the context when the cache does not suffice.
• G is the adaptive text generation module, which employs low-rank adaptation (LoRA) techniques to fine-tune a large pre-trained language model for domain-specific tasks. This module tailors the generated responses to clinical contexts while maintaining accuracy and compliance with domain constraints.
• L is the online learning module, which continuously refines the model. It updates the system
with new data and interactions, enabling the framework to improve over time and adapt to evolving clinical guidelines. By combining these components, our framework offers a dynamic, evidence-based decision support system that remains adaptable and effective over time. Each module addresses specific challenges, ensuring accurate, context-aware, and clinically safe recommendations in real-time clinical environments.

4.1. Semantic Knowledge Caching

To minimize response latency and ensure the rapid retrieval of high-quality clinical evidence, our system incorporates a semantic knowledge caching mechanism. This cache maintains a collection of document-query pairs, denoted as $D = \{(q_i, d_i)\}_{i=1}^{N}$, where each query embedding $q_i \in \mathbb{R}^{768}$ is generated using the BioClinicalBERT model and is paired with its corresponding clinical document $d_i$. Given an input query $q$, the cache efficiently retrieves the document $d_j$ that maximizes the cosine similarity between the query and document embeddings:

$$C(q) = \arg\max_{d_j \in D} \frac{q \cdot q_j}{\|q\| \|q_j\|} \quad \text{subject to} \quad \frac{q \cdot q_j}{\|q\| \|q_j\|} \geq \tau_c, \qquad (1)$$

where $\tau_c = 0.7$ is a cosine similarity threshold that ensures only the most relevant documents are retrieved. To maintain the cache's relevance, new query-document pairs $(q, d)$ are continuously added. Obsolete entries are removed based on an eviction policy designed to prioritize documents that remain valuable over time. Specifically, the cache is updated as follows:

$$D \leftarrow D \cup \{(q, d)\} \setminus \{\arg\min_{d_k} \psi(d_k)\}, \qquad (2)$$

where the eviction score $\psi(d_k)$ for a document $d_k$ is defined as:

$$\psi(d_k) = \alpha f_u(d_k) + (1 - \alpha) e^{-t/\beta},$$

with $\alpha = 0.6$, $f_u(d_k)$ representing the document's usage frequency, and $e^{-t/\beta}$ accounting for the document's age, where $t$ is the time since the last access and $\beta$ is the document's total stored time. This eviction policy ensures that frequently accessed and recent documents are retained in the cache, while less relevant or outdated documents are pruned.
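The threshold lookup of Eq. (1) and the eviction score of Eq. (2) can be sketched as follows. The constants τ_c = 0.7 and α = 0.6 come from the paper; the `SemanticCache` class, its linear scan, and the toy two-dimensional embeddings are our illustration (the paper uses SentenceTransformer embeddings with a Faiss index):

```python
import math
import time

TAU_C = 0.7   # cosine-similarity threshold from the paper
ALPHA = 0.6   # eviction weight from the paper

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class SemanticCache:
    def __init__(self, max_size=500, beta=3600.0):
        self.entries = []  # dicts with: embedding, doc, uses, last_access
        self.max_size, self.beta = max_size, beta

    def lookup(self, q_emb):
        """Return the cached document maximizing cosine similarity, if >= tau_c."""
        best = max(self.entries,
                   key=lambda e: cosine(q_emb, e["embedding"]), default=None)
        if best and cosine(q_emb, best["embedding"]) >= TAU_C:
            best["uses"] += 1
            best["last_access"] = time.time()
            return best["doc"]
        return None  # cache miss -> fall back to web retrieval

    def add(self, q_emb, doc):
        if len(self.entries) >= self.max_size:
            # Evict the entry with the lowest score
            # psi = ALPHA * usage_frequency + (1 - ALPHA) * exp(-age / beta).
            now = time.time()
            def psi(e):
                return (ALPHA * e["uses"]
                        + (1 - ALPHA) * math.exp(-(now - e["last_access"]) / self.beta))
            self.entries.remove(min(self.entries, key=psi))
        self.entries.append({"embedding": q_emb, "doc": doc,
                             "uses": 0, "last_access": time.time()})

cache = SemanticCache()
cache.add([1.0, 0.0], "doc about gastric bypass")
print(cache.lookup([0.9, 0.1]))  # similar query -> cache hit
print(cache.lookup([0.0, 1.0]))  # orthogonal query -> None (miss)
```

A miss at this stage is what triggers the multi-source web retrieval engine described next.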
The caching mechanism is implemented using SentenceTransformer embeddings with Faiss indexing, enabling efficient similarity search. The cache is constrained by a fixed size (e.g., 500 documents), ensuring fast retrieval and minimal computational overhead. By maintaining a small, high-quality set of relevant documents, the semantic cache significantly reduces response times during clinical decision support, providing timely access to critical information.

4.2. Multi-Source Web Retrieval Engine

When the semantic cache does not provide sufficient information to answer a query, the multi-source web retrieval engine is triggered to gather additional evidence from trusted external sources. This component is designed to ensure that the system can access comprehensive, up-to-date information from diverse domains, including those not covered by the cached documents. The retrieval process is framed as a Markov Decision Process (MDP), which enables focused and dynamic crawling across medical websites to retrieve relevant content. At each time step $t$, the action $a_t$ taken by the retrieval engine is determined by maximizing the expected cumulative reward, which is computed as follows:

$$a_t = \arg\max_{a \in A} \sum_{s'} P(s' \mid s, a) \left[ R(s, a) + \gamma V(s') \right], \qquad (3)$$

where the reward function $R(s, a)$ is defined as:

$$R(s, a) = \mathbb{I}_{\mathrm{TLD}(s) \in \{\text{.gov}, \text{.edu}\}} \cdot \mathrm{BM25}(s, q), \qquad (4)$$

where $\mathbb{I}_{\mathrm{TLD}(s) \in \{\text{.gov}, \text{.edu}\}}$ is an indicator function that ensures the retrieved documents are from authoritative sources (i.e., websites with .gov or .edu top-level domains), while $\mathrm{BM25}(s, q)$ is a content
relevance score computed using the BM25 ranking function to assess how well the document $s$ answers the query $q$. The retrieval process is powered by the DuckDuckGo API, which allows for broad web searches while prioritizing authoritative sources through URL domain filtering. The BM25 scoring system is applied to rank the retrieved documents based on their relevance to the input query, ensuring that the most pertinent and reliable documents are selected. Once the relevant documents are retrieved, they are incorporated into the semantic cache, thereby enriching the context available for generating the system's response. This mechanism enhances the system's ability to provide informed, evidence-based recommendations, particularly in scenarios where the cached knowledge does not suffice, by tapping into the vast amount of publicly available, authoritative medical content across the web.

4.3. Adaptive Text Generation

To generate accurate and contextually relevant clinical recommendations, we employ the Llama3-8B model, enhanced using low-rank adaptation (LoRA). This approach is designed to efficiently fine-tune a large pre-trained language model to domain-specific tasks, while minimizing computational overhead and memory usage. The adaptation is performed in a low-rank fashion, enabling the model to adjust quickly to specific medical domains without the need for full retraining. For each transformer layer $l$ in the model, the adaptive hidden representation is computed as follows:

$$h_l^{\mathrm{adapt}} = h_l^{\mathrm{base}} + \Delta W_l h_l^{\mathrm{base}}, \qquad \Delta W_l = B_l A_l, \qquad (5)$$

where $B_l \in \mathbb{R}^{d \times r}$ and $A_l \in \mathbb{R}^{r \times d}$ are the learned low-rank matrices, and $r = 32$ is the rank used for adaptation. The matrices $B_l$ and $A_l$ capture domain-specific information while ensuring that the adaptation process remains efficient and scalable. The term $h_l^{\mathrm{base}}$ represents the original, pre-trained hidden representation of the model at layer $l$.
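The low-rank update of Eq. (5) can be illustrated numerically. The toy dimensions below (d = 3, r = 1) and the plain-Python matrix helpers are ours; the paper applies this inside the transformer layers of Llama3-8B with r = 32:

```python
# Toy illustration of the LoRA-style update h_adapt = h_base + (B @ A) @ h_base.
# Dimensions are illustrative (d=3, r=1), not the paper's.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

d, r = 3, 1
B = [[0.5], [0.0], [0.0]]   # d x r
A = [[1.0, 0.0, 0.0]]       # r x d
delta_W = matmul(B, A)      # d x d update with rank <= r

h_base = [2.0, 1.0, -1.0]
h_adapt = [hb + dh for hb, dh in zip(h_base, matvec(delta_W, h_base))]
print(h_adapt)  # [3.0, 1.0, -1.0]: only the first coordinate is shifted
```

The point of the factorization is that only the 2·d·r entries of B and A are trained, instead of the d² entries of a full weight update.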
To evaluate the quality of the generated response, we utilize a perplexity measure, which quantifies the uncertainty in predicting the next token in the sequence:

$$P(r \mid q) = \exp\left( -\frac{1}{T} \sum_{t=1}^{T} \log p_\theta(r_t \mid r_{<t}, q) \right), \qquad (6)$$

where $P(r \mid q)$ is the perplexity of the generated response $r$ given the input query $q$, and $T$ is the length of the response. The term $p_\theta(r_t \mid r_{<t}, q)$ represents the model's predicted probability of the token $r_t$ at position $t$, conditioned on the preceding tokens and the query. If the perplexity of the response exceeds a threshold $\tau_p = 4.5$, it indicates that the model's output is not sufficiently confident or relevant. In such cases, the system triggers additional retrieval and adaptation cycles to refine the response, improving its accuracy and relevance by incorporating more domain-specific knowledge. This adaptive approach ensures that the system can generate high-quality clinical recommendations that are both contextually appropriate and tailored to the specific needs of the patient or healthcare provider.

4.4. Online Learning Protocol

To ensure that the model remains up-to-date and adaptable in the face of new evidence and user interactions, we implement an online learning module. This protocol allows the model to continuously refine its performance by integrating fresh data and adjusting its parameters over time. The training objective is designed to balance the model's ability to predict accurate outcomes while avoiding overfitting to recent data, using a regularized cross-entropy loss function:

$$\mathcal{L}_{\mathrm{adapt}} = \mathbb{E}_{(q,d) \sim B}\left[ -\log p_\theta(d \mid q) \right] + \lambda \|\Theta_A\|_F^2, \qquad (7)$$

where $\mathbb{E}_{(q,d) \sim B}$ represents
https://arxiv.org/abs/2505.16102v2
the expectation over a mini-batch $\mathcal{B}$ of query-document pairs, $p_\theta(d \mid q)$ is the predicted probability of document d given query q, and $\lambda$ is the regularization parameter that controls the magnitude of the model's parameters. The term $\lVert \Theta_A \rVert_F^2$ represents the Frobenius norm of the model's adaptation parameters $\Theta_A$, which serves as a regularizer to prevent overfitting during updates.

The experience buffer $\mathcal{B}$ is updated dynamically to maintain a diverse and representative sample of query-document pairs. This buffer is managed using a Faiss-based nearest neighbor search mechanism, which ensures that new samples are included in a way that preserves diversity and reduces redundancy. The update rule is as follows:

$\mathcal{B} \leftarrow \mathcal{B} \cup \{(q_i, d_i)\} \setminus \{\arg\max_{(q_j, d_j)} \mathrm{sim}(q_j, q_i)\}$  (8)

where $\mathrm{sim}(q_j, q_i)$ denotes the similarity between queries $q_j$ and $q_i$, and we remove the pair that is most similar to the newly added sample, ensuring the buffer contains varied and non-redundant training data. This approach helps the model avoid memorizing specific query-answer pairs and encourages generalization across a broad range of contexts.

The online learning module ensures that the system adapts in real-time to emerging evidence, evolving patient needs, and new clinical knowledge. As a result, the model continually improves its performance, staying current with the latest developments and capable of providing up-to-date, accurate recommendations.

4.5. Safety and Response Validation

Ensuring that generated outputs are both safe and clinically valid is of paramount importance in our system. To achieve this, we apply constrained decoding during the text generation process, which ensures that generated responses adhere to safety guidelines and domain-specific constraints. The constrained decoding objective is formulated as follows:

$r_{\mathrm{safe}} = \arg\max_{r \in \mathcal{V}^*} p_\theta(r \mid q) \prod_{i=1}^{n} \phi_i(r_i)$  (9)

where r is the generated response, and $\mathcal{V}^*$ is the vocabulary space.
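A toy realization of this constrained selection is sketched below. The candidate responses, their scores, and the speculative-word lexicon standing in for $\mathcal{W}_{\mathrm{speculative}}$ are all hypothetical, and real decoding would apply the constraints token-by-token inside beam search rather than over finished candidates:

```python
# Hypothetical stand-in for W_speculative; the real lexicon is clinically curated.
W_SPECULATIVE = {"miracle", "guaranteed", "cure-all", "detox"}

def phi(token: str) -> int:
    """Indicator constraint: 1 if the token is not a speculative word, else 0."""
    return int(token.lower().strip(".,!?") not in W_SPECULATIVE)

def constrained_argmax(candidates):
    """Pick the highest-probability response whose constraint product is 1 (Eq. 9)."""
    feasible = [
        (p, r) for p, r in candidates
        if all(phi(tok) for tok in r.split())  # product of indicators == 1
    ]
    return max(feasible)[1] if feasible else None

# (model probability, response) pairs -- illustrative values only.
candidates = [
    (0.60, "This supplement is a guaranteed cure-all."),
    (0.35, "Discuss supplement use with your bariatric care team."),
]
# The higher-probability candidate is rejected because it contains
# speculative terms, so the safe alternative is selected.
assert constrained_argmax(candidates) == (
    "Discuss supplement use with your bariatric care team."
)
```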
Each constraint function $\phi_i(r_i)$ is designed to enforce specific safety requirements on individual tokens $r_i$ in the response. The constraint function $\phi_i$ is defined as:

$\phi_i(r_i) = \mathbb{1}\{\nexists\, w \in r_i : w \in \mathcal{W}_{\mathrm{speculative}}\}$  (10)

where $\mathbb{1}$ is the indicator function, and $\mathcal{W}_{\mathrm{speculative}}$ is a set of words or phrases that are deemed speculative or unsafe in a clinical context. This constraint ensures that the generated response does not include any terms or statements that might mislead patients or suggest unverified clinical practices.

To ensure clinical validity, we further compare outputs with reference texts using BERTScore (Zhang et al., 2020), which evaluates semantic similarity via contextual embeddings. Together, constrained decoding and BERTScore validation ensure that responses remain both safe and aligned with evidence-based clinical content.

5. Baselines

We evaluated several baseline models to assess their performance on our medical question-answering benchmark:

(i) RAG² (Sohn et al., 2024) relies on a pre-cached offline corpus of biomedical documents for retrieval, avoiding real-time web queries. It enhances standard RAG methods by using perplexity-based labels and LLM-generated rationales to selectively retrieve and filter context, improving relevance and reducing noise.

(ii) MedGraphRAG (Wu et al., 2024b) also uses an offline cache of biomedical documents for retrieval. It further integrates a structured medical knowledge graph to guide the retrieval process, leveraging clinical relationships to improve the contextual
relevance and factual accuracy of the retrieved information.

(iii) Llama3-8B (Grattafiori et al., 2024) is an 8-billion-parameter large language model evaluated under two configurations: (a) Zero-shot, where the model responds using only its pre-trained knowledge; and (b) Context-prompted, where external context retrieved from offline sources is appended to the prompt to improve answer quality.

(iv) Phi-3 (Abdin et al., 2024) is a lightweight 3.8-billion-parameter model optimized for efficiency and edge deployment. We evaluate Phi-3 in both (a) Zero-shot and (b) Context-prompted modes, assessing its ability to handle medical queries with and without retrieval-based augmentation.

(v) Mistral Instruct (Jiang et al., 2023) is a 7-billion-parameter instruction-tuned model designed for strong performance on alignment-focused tasks. It is tested in (a) Zero-shot mode, where it relies solely on instruction tuning, and (b) Context-prompted mode, where it incorporates retrieved medical content to guide its responses.

6. Experimental Setup

We evaluate our proposed bRAGgen model through both expert human evaluation and an LLM-as-Judge protocol. Our goal is to assess the clinical quality of responses generated by various model configurations, focusing on three key axes: factual accuracy, clinical relevance, and comprehensiveness, in the context of bariatric surgery patient education.
We benchmark four categories of systems: (i) Offline RAG, which includes a standard retrieval-augmented generation baseline and a domain-tuned MedGraphRAG variant using graph-based retrieval; (ii) Zero-shot LLMs, where large language models (Llama3-8B, Phi-3, and Mistral Instruct) generate answers without additional context; (iii) Context-prompted LLMs, where retrieved context is appended at inference time without parameter updates; and (iv) bRAGgen (proposed), which applies confidence-aware parametric updates using retrieved evidence. All systems are tested under identical conditions and use a shared retrieval pipeline where applicable.

For expert evaluation, we consulted one board-certified bariatric surgeon, who reviewed model outputs for 105 instances. Each response is scored independently across three dimensions: Factuality (accuracy and correctness), Clinical Relevance (appropriateness in a clinical context), and Comprehensiveness (completeness and informativeness for patients). Ratings are based on a 5-point Likert scale, with final scores reported as averages across questions.

To complement the expert review and enable scalable comparison, we also evaluate all models using an LLM-as-Judge setup, where ChatGPT-4o² is prompted with each question, the corresponding model-generated answer, and a rubric defining the evaluation criteria. The model then rates each answer on the same 1–5 scale. To assess the reliability of this proxy, we compute the rank correlation between expert and ChatGPT-4o scores and observe a strong alignment (ρ = 0.94), confirming the viability of using LLMs for early-stage model quality assessment.

7. Results

We evaluate our proposed model, bRAGgen, using both an expert review by a board-certified bariatric surgeon and an LLM-as-Judge evaluation with ChatGPT-4o. We compare bRAGgen against a suite of baselines, including standard retrieval-augmented models, zero-shot LLMs, and context-prompted variants.

2.
https://openai.com/index/hello-gpt-4o/

Table 3: Evaluation of various models across different configurations by an expert surgeon. Each system is evaluated on Factuality, Clinical Relevance (Clinical Rel.), and Comprehensiveness (Compre.), with scores from 1 (poor) to 5 (excellent). The final three rows show the performance of our online bRAGgen setting. The 'Avg' column reports the average of the three evaluation metrics.

Type              Model             Factuality  Clinical Rel.  Compre.  Avg
Offline RAG       RAG²              3.62        3.45           3.53     3.53
                  MedGraphRAG       3.85        3.92           4.38     4.05
Zero-shot         Llama3-8B         3.41        3.25           3.46     3.37
                  Phi-3             2.37        2.15           2.25     2.26
                  Mistral Instruct  2.23        2.18           2.14     2.18
Context-prompted  Llama3-8B         3.82        3.91           4.34     4.02
                  Phi-3             2.64        2.75           2.42     2.60
                  Mistral Instruct  2.69        2.37           3.35     2.80
bRAGgen with      Llama3-8B         4.18        4.58           4.76     4.51
                  Phi-3             2.87        3.17           2.60     2.88
                  Mistral Instruct  2.95        2.71           3.63     3.09

7.1. Expert Evaluation

To assess the clinical quality of generated responses, we conducted a blinded evaluation with a board-certified bariatric surgeon, who rated model outputs across three dimensions: Factuality, Clinical Relevance, and Comprehensiveness, using a 1–5 Likert scale (higher is better). Table 3 presents the average scores for each system under multiple configurations.

Among all baselines, MedGraphRAG, an offline domain-specific RAG model, achieved the highest average score (4.05), outperforming both the standard offline RAG baseline (RAG²: 3.53) and all zero-shot models (Llama3-8B: 3.37; Phi-3: 2.26; Mistral: 2.18). Context-prompted models (i.e., inputting relevant question context during inference) moderately improved scores, especially for Llama3-8B (Avg: 4.02), but still fell short of delivering optimal factual and clinical consistency.

Our proposed bRAGgen framework delivered the best overall performance across all metrics. When paired with Llama3-8B, bRAGgen achieved the highest average score (4.51), with near-expert-level performance on Comprehensiveness (4.76) and Clinical Relevance (4.58).
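The expert/judge agreement reported above (ρ = 0.94, Spearman) can be reproduced in spirit with a short dependency-free script. The score vectors below are the 'Avg' columns of Tables 3 and 4; note that the paper's ρ = 0.94 is computed over all models and metrics, which may pair scores differently than this per-system sketch:

```python
def _ranks(xs):
    """1-based ranks with ties assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(a, b):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    ra, rb = _ranks(a), _ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Per-system 'Avg' scores from Table 3 (expert) and Table 4 (LLM-as-Judge).
expert = [3.53, 4.05, 3.37, 2.26, 2.18, 4.02, 2.60, 2.80, 4.51, 2.88, 3.09]
judge  = [3.38, 3.96, 3.27, 2.14, 2.10, 3.96, 2.49, 2.67, 4.44, 2.96, 2.77][:9] + [2.77, 2.96]
rho = spearman_rho(expert, judge)
assert rho > 0.9  # system-level rankings agree almost perfectly
```

`scipy.stats.spearmanr` would give the same value; the hand-rolled version just keeps the sketch self-contained.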
Notably, bRAGgen also improved the performance of smaller models like Phi-3 and Mistral, elevating their average scores by +0.6–0.7 points compared to their context-prompted or zero-shot baselines. These gains highlight the effectiveness of our confidence-aware updating mechanism, which not only retrieves up-to-date clinical evidence but also integrates it into the model's internal parameters, enabling more robust, domain-adapted reasoning.

These results demonstrate that bRAGgen significantly enhances the clinical utility of LLMs across model sizes, especially when compared to conventional static RAG setups or prompting-only strategies.

Table 4: Evaluation of various models across different configurations using LLM-as-Judge metrics. Each system is evaluated on Factuality, Clinical Relevance (Clinical Rel.), and Comprehensiveness (Compre.), with scores from 1 (poor) to 5 (excellent). The final three rows show the performance of our online bRAGgen setting. The 'Avg' column reports the average of the three evaluation metrics.

Type              Model             Factuality  Clinical Rel.  Compre.  Avg
Offline RAG       RAG²              3.49        3.28           3.36     3.38
                  MedGraphRAG       3.67        3.76           4.45     3.96
Zero-shot         Llama3-8B         3.28        3.18           3.34     3.27
                  Phi-3             2.24        2.07           2.12     2.14
                  Mistral Instruct  2.17        2.11           2.01     2.10
Context-prompted  Llama3-8B         3.67        3.76           4.45     3.96
                  Phi-3             2.49        2.61           2.38     2.49
                  Mistral Instruct  2.57        2.24           3.21     2.67
bRAGgen with      Llama3-8B         4.03        4.43           4.87     4.44
                  Phi-3             2.73        3.03           2.54     2.77
                  Mistral Instruct  2.83        2.58           3.48     2.96

7.2. LLM-as-Judge Evaluation

To complement expert evaluation, we further assess all models using an LLM-as-Judge framework, in which ChatGPT-4o scores responses along
three axes: Factuality, Clinical Relevance, and Comprehensiveness, using a 5-point Likert scale. Table 4 summarizes the performance of baseline systems and our proposed bRAGgen across these dimensions.

Among the baselines, MedGraphRAG and context-prompted Llama3-8B show relatively strong performance, both achieving average scores of 3.96. However, our proposed bRAGgen approach consistently outperforms all baselines across all metrics and models. For instance, bRAGgen with Llama3-8B achieves the highest overall score of 4.44, reflecting substantial improvements in factual correctness (+0.36), clinical relevance (+0.67), and comprehensiveness (+0.42) over the best-performing baseline. Notably, even with smaller models like Phi-3 and Mistral Instruct, bRAGgen enhances output quality, particularly in relevance and completeness. These findings demonstrate that our system not only boosts performance for high-capacity LLMs but also meaningfully improves the reliability of lightweight models, making it practical for resource-constrained settings.

Figure 2: Exploratory Analysis of Model Editing Dynamics. (a) Distribution of changes in confidence scores post-edit, showing that most changes are modest and positive. (b) Frequency of search queries across external biomedical domains, with PubMed dominating. (c) Training loss progression across iterations, illustrating convergence patterns and volatility. (d) Distribution of total duration taken for each edit operation, highlighting that most edits are executed within 10-20 seconds.

7.3. Expert vs.
LLM-as-Judge: Score Alignment

To assess the alignment between expert and LLM-based evaluation, we compare expert ratings (Table 3) with those produced by the LLM-as-Judge framework (Table 4) across all models and configurations. Overall, we observe a high degree of consistency in relative rankings across systems.

For instance, both the expert and the LLM-as-Judge identify MedGraphRAG and context-prompted Llama3-8B as the strongest baselines, while zero-shot models like Phi-3 and Mistral perform the worst across all axes. Furthermore, our proposed bRAGgen yields the highest scores in both evaluation schemes, affirming its robustness across human and model-based judgments.

Metric-wise, the strongest agreement is seen in the Comprehensiveness and Clinical Relevance dimensions, where score trends closely track each other across settings. Some minor variation arises in the Factuality scores, particularly for models like Phi-3 and Mistral, where the LLM-as-Judge is slightly more conservative than human reviewers. This discrepancy likely stems from the LLM's heightened sensitivity to surface-level inaccuracies, compared to domain experts who may weigh overall clinical soundness more heavily.

Importantly, the average correlation between expert and LLM-as-Judge scores across all models is ρ = 0.94 (Spearman), underscoring the reliability of using LLMs as surrogate evaluators in low-resource or iterative development settings. These results suggest that LLM-as-Judge provides a scalable and reasonably aligned proxy for expert review, particularly useful for rapid benchmarking and ablation testing during system development.

8. bRAGgen Analysis

Figure 2 presents a comprehensive analysis across multiple dimensions to evaluate the behavior, responsiveness,
and efficiency of bRAGgen during real-time knowledge integration in the context of bariatric care.

(a) Confidence Change Distribution. The histogram in Fig. 2a illustrates the distribution of confidence score changes triggered by the adaptive retrieval mechanism. Most examples exhibit moderate confidence gains (bins 0-1 and 1-2), with the highest concentration in the 0-1 bin. This confirms that the system's dynamic thresholding yields frequent yet stable updates, allowing the model to autonomously improve responses without overcorrecting. Rare occurrences of extreme confidence shifts (< -2 or > 2) indicate that the system maintains a conservative stance, prioritizing stability in medical contexts.

(b) Top Search Domains. To assess external evidence sources, we analyzed the frequency of domain-level API queries. PubMed, PMC, and NIH emerged as the top knowledge sources (cf. Fig. 2b), underscoring bRAGgen's strong preference for authoritative biomedical repositories. This supports the design goal of maintaining clinical fidelity and alignment with evidence-based guidelines during patient-facing interactions.

(c) Training Loss Progression. The loss trajectory (cf. Fig. 2c) across 1,302 iterations reveals several key phases in the model's learning dynamics. An initial sharp drop from 2.71 to 1.32 by iteration 50 is followed by a spike to 3.13 at iteration 100, likely due to early exploratory updates. Subsequent iterations show improved stability and convergence, with the lowest loss (0.86) reached around iteration 500. Notable local minima at iterations 300 and 600 indicate consistent refinement, while the uptick at iteration 1000 may reflect a transient deviation before re-stabilization. Overall, the pattern confirms that bRAGgen's adaptive updating mechanism supports gradual convergence while accommodating knowledge volatility.

(d) Total Duration Distribution.
The majority of update operations complete within 10-20 seconds, with fewer cases extending beyond 30 seconds. This distribution validates that bRAGgen's self-updating pipeline is both computationally lightweight and suitable for real-time deployment in longitudinal MBS care settings, ensuring timely and trustworthy information delivery across all stages of the surgical journey.

Further qualitative comparisons across diverse questions and model outputs are presented in Appendix B.

9. Conclusion

We introduced bRAGgen, an adaptive retrieval-augmented generation (RAG) system tailored for bariatric and metabolic surgery (MBS) support. By autonomously incorporating real-time medical evidence when confidence dips below dynamic thresholds, bRAGgen ensures that responses remain timely, accurate, and clinically reliable. To facilitate robust benchmarking, we also introduced bRAGq, the first large-scale, expert-validated dataset of postoperative bariatric care questions. Through comprehensive evaluation using both LLM-based metrics and expert surgeon assessments, bRAGgen consistently outperformed existing state-of-the-art models in clinical accuracy and relevance.

References

Marah Abdin, Jyoti Aneja, Hany Awadalla, et al. Phi-3 technical report: A highly capable language model locally on your phone, 2024. URL https://arxiv.org/abs/2404.14219.

Daniel Alexander Alber, Zihao Yang, Anton Alyakin, Eunice Yang, Sumedha Rai, Aly A Valliani, Jeff Zhang, Gabriel R Rosenbaum, Ashley K Amend-Thomas, David B Kurland, Caroline M Kremer, Alexander Eremiev, Bruck Negash, Daniel D Wiggan, Michelle A Nakatsuka, Karl L Sangwon, Sean N Neifert, Hammad A Khan, Akshay Vinod Save, Adhith Palla, Eric A Grin, Monika Hedman,
Mustafa Nasir-Moin, Xujin Chris Liu, Lavender Yao Jiang, Michal A Mankowski, Dorry L Segev, Yindalon Aphinyanaphongs, Howard A Riina, John G Golfinos, Daniel A Orringer, Douglas Kondziolka, and Eric Karl Oermann. Medical large language models are vulnerable to data-poisoning attacks. Nat. Med., 31(2):618-626, February 2025.

Yash Kumar Atri, Shraman Pramanick, Vikram Goyal, and Tanmoy Chakraborty. See, hear, read: Leveraging multimodality with guided attention for abstractive text summarization. Knowledge-Based Systems, 227:107152, 2021. ISSN 0950-7051. doi: 10.1016/j.knosys.2021.107152. URL https://www.sciencedirect.com/science/article/pii/S0950705121004159.

Yash Kumar Atri, Vikram Goyal, and Tanmoy Chakraborty. Fusing multimodal signals on hyper-complex space for extreme abstractive text summarization (tl;dr) of scientific contents. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '23, pages 3724-3736, New York, NY, USA, 2023a. Association for Computing Machinery. ISBN 9798400701030. doi: 10.1145/3580305.3599830. URL https://doi.org/10.1145/3580305.3599830.

Yash Kumar Atri, Vikram Goyal, and Tanmoy Chakraborty. Multi-document summarization using selective attention span and reinforcement learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:3457-3467, 2023b. doi: 10.1109/TASLP.2023.3316459.

Yash Kumar Atri, Arun Iyer, Tanmoy Chakraborty, and Vikram Goyal. Promoting topic coherence and inter-document consorts in multi-document summarization via simplicial complex and sheaf graph. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2154-2166, Singapore, December 2023c. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.133. URL https://aclanthology.org/2023.emnlp-main.133/.
Romain Barres, Henri Kirchner, Morten Rasmussen, Jing Yan, Daniel Kantor, Anna Krook, Erik Naslund, Juleen R. Zierath, and Charlotte Ling. Weight loss after gastric bypass surgery in human obesity remodels promoter methylation. Cell Reports, 3(4):1020-1027, 2013. doi: 10.1016/j.celrep.2013.03.019.

Emily M. Bartholomay, Patrick W. Stewart, David B. Sarwer, Thomas A. Wadden, and Anthony N. Fabricatore. Sociodemographic factors related to bariatric follow-up appointment attendance and weight outcomes. Surgery for Obesity and Related Diseases, 20:1388-1395, 2024. doi: 10.1016/j.soard.2024.02.010.

Jean-Christophe Bélisle-Pipon. Why we need to be careful with LLMs in medicine. Front. Med. (Lausanne), 11:1495582, December 2024.

Kristin K. Bjerkan, Audun Viste, Else M. Aasheim, Oda Mjåland, Torstein Mala, Nina E. Kløw, Jo Røislien, and Siv K. Bøhn. The long-term impact of postoperative educational programs on weight loss after Roux-en-Y gastric bypass. Obesity Surgery, 32:3005-3012, 2022. doi: 10.1007/s11695-022-05913-0.

Tianchi Cai, Zhiwen Tan, Xierui Song, Tao Sun, Jiyan Jiang, Yunqi Xu, Yinger Zhang, and Jinjie Gu. Forag: Factuality-optimized retrieval augmented generation for web-enhanced long-form question answering. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '24, pages 199-210. ACM, August 2024. doi: 10.1145/3637528.3672065. URL http://dx.doi.org/10.1145/3637528.3672065.

Jeffrey Cheng, Marc Marone, Orion Weller, Dawn Lawrie, Daniel Khashabi, and Benjamin Van Durme. Dated data: Tracing knowledge cutoffs in large language models, 2024. URL https://arxiv.org/abs/2403.12958.

Bryan Clapp, Lillian Khaitan, William J. English, Michel Gagner, William B. Inabnet, J. Michael Morton, Walter J. Pories, Philip R. Schauer, Brian M. Wolfe, and Mary M. Wolfe. American society for metabolic and bariatric surgery 2022 estimate of metabolic and bariatric procedures performed in the united