Source: https://arxiv.org/abs/2505.17936v1
prediction/suppression neurons (two of the output-based functional roles of Gurnee et al., 2024), i.e., each of the six neurons is a prediction/suppression neuron as well as exemplifying one of our six classes. For ease of interpretability, we choose the prediction/suppression neuron of a particular IO class with the highest cos(w_out, W_U) kurtosis, where W_U ∈ R^(d_model × d_vocab) denotes the unembedding matrix. (For orthogonal output we chose the clearest of all suppression neurons.) The 6 neurons are in the last layers of the model because that is where prediction/suppression neurons tend to appear. See Appendix F for details on prediction/suppression, Appendix G for more details on these case studies, and Appendix H for more case studies.

6.1 Methods

We combine the IO perspective with two well-established neuron analysis methods. For each neuron, we project its weight vectors to vocabulary space with the unembedding matrix W_U and inspect high-scoring tokens. (This is analogous to nostalgebraist (2020) and has been done e.g. in Geva et al., 2022; Gurnee et al., 2024; Voita et al., 2024.) Additionally, we examine examples for which the neuron is strongly activated (positively or negatively) among a subset of 20M tokens from Dolma (Soldaini et al., 2024), OLMo’s training set. (Activation-based analyses have been done e.g. in Geva et al., 2021; Voita et al., 2024; Gurnee et al., 2024. The size of 20M tokens follows Voita et al., 2024.)

6.2 Analysis

For many of these neurons, the largest positive activation is much larger than the largest negative one (or vice versa). Often the larger of the two is also more interpretable. In these cases we describe only the larger activation and refer to Table 4 in Appendix H for more details.

Enrichment neuron 28.4737 predicts review (and related tokens) if activated positively, which happens if review is already present in the residual stream.
The maximally positive activations are in standard contexts that continue with review or similar, such as the newline after the description of an e-book (the next paragraph often is the beginning of a review).

Conditional enrichment neuron 28.9766’s IO functionality concerns well and similar tokens. 28.9766 promotes them if activated positively, which happens when both w_gate and w_in indicate that well is represented in the residual stream. This is a case of double checking. The maximally positive activation in our sample occurs on Oh, in a context in which Oh, well makes sense (and is the actual continuation).

Depletion neuron 31.9634. −w_out of 31.9634 is closest to forms of again. Judging by the weights, the neuron activates positively when the residual stream contains information both for and against predicting again, and then depletes the again direction. It activates negatively when the residual stream contains the “minus again” direction, and then depletes that direction. Surprisingly, despite its strong negative cosine similarity (cos(w_gate, w_in) = −0.7164), the neuron often activates positively. On the positive side, strong activations are often on punctuation, and the actual next token is often meanwhile or instead. The neuron may ensure only these tokens are predicted, and not the relatively similar
again. On the negative side, the activations do not have any obvious semantic relationship to again. We hypothesize that sometimes the residual stream ends up near “minus again” for semantically unrelated reasons (there are many more possible concepts than dimensions, so the corresponding directions cannot be fully orthogonal; see Elhage et al., 2022); in these cases the neuron would reduce the unjustified presence of this “minus again” direction. There are also weaker negative activations when again is a plausible continuation, e.g., on the token once. In these cases, again is already weakly present in the residual stream before the last MLP. Accordingly, Swish(w_gate · x_norm) is weakly negative (but distinct from 0), and w_in · x_norm > 0, which leads to a negative activation and thus reinforces again.

Conditional depletion neuron 29.10900. Gate and linear input weight vectors act as two independent ways of checking that these is not present in the residual stream (i.e., a case of double checking). At the same time, they check for predictions like today, nowadays. When such predictions are present, the neuron promotes these. This is a plausible choice in these cases because of the expression these days. An example is social media tools change and come and go at the drop of a hat. (This sentence talks about a characteristic of current times, so these days would indeed be a plausible continuation.)

Proportional change neuron 30.10972 predicts the token when if activated negatively. This happens if when is absent from the residual stream (gate condition) and is proportional to the presence of time-related tokens (−w_in). An example for a large negative activation is puts you on multiple webpages at.[3] Conversely, if when is absent, and time-related tokens are absent too, the neuron activates positively and suppresses when further.
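The sign logic in these case studies follows directly from the per-neuron SwiGLU activation a = Swish(w_gate · x_norm) · (w_in · x_norm). The following is a minimal numpy sketch; the 2-d vectors are made up for illustration (only their signs matter), not the actual neuron weights:

```python
import numpy as np

def swish(z):
    # Swish/SiLU: z * sigmoid(z). Unlike ReLU, it is negative (but bounded) for z < 0.
    return z / (1.0 + np.exp(-z))

def swiglu_activation(x, w_gate, w_in):
    # Per-neuron SwiGLU activation: Swish(w_gate . x) * (w_in . x).
    return swish(w_gate @ x) * (w_in @ x)

# Toy illustration of the sign pattern described for neuron 31.9634.
w_gate = np.array([1.0, 0.0])
w_in = np.array([-0.6, 0.8])   # negatively correlated with w_gate, as in the case study
x = np.array([-0.3, 1.0])      # residual input: w_gate.x weakly negative, w_in.x > 0

act = swiglu_activation(x, w_gate, w_in)
assert swish(w_gate @ x) < 0   # the negative branch of Swish is used
assert w_in @ x > 0
assert act < 0                 # negative activation; the neuron writes act * w_out
```

With a negative activation and a w_out whose negation points toward a token direction, the neuron adds to that direction, which is the enrichment-like behavior described above.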
Orthogonal output neuron 29.4180 predicts there (positive activation) if the residual stream contains a component that we interpret as “complement of place expected” (e.g., here, therein). Both w_gate and w_in check for (different aspects of) this component being present, another case of double checking. The largest positive activation is on here or.

Overall, these neurons all promote a specific set of tokens (we chose them that way), but under very different circumstances. The (conditional) enrichment neurons are the most straightforward to interpret, because their input and output clearly correspond to the same concept. In contrast, depletion neurons inherently involve (an apparent) conflict between the intermediate model prediction and what the neuron promotes.

7 Discussion

7.1 Variation across models

Our work on gated activation functions questions the generality of previous findings (Voita et al., 2024; Gurnee et al., 2024) on non-gated activation functions. Specifically, we saw in Section 5 that (conditional) depletion neurons appear mostly in later layers. On the other hand, Gurnee et al. (2024) find (for GPT-2 (Radford et al., 2019), with activation GeLU) that what we call depletion neurons mostly appear in earlier layers. Similarly, Voita et al. (2024) find (for OPT (Zhang et al., 2022), with activation ReLU) that some neurons in early layers detect specific tokens and then suppress them.
(Their analysis is not weight-based, so these may or may not be depletion neurons in our weight-based sense.) This confirms the importance of our work for models with gated activation functions: their internal structure is quite different from older models with GeLU or ReLU. Despite minor differences (especially in the first layer), our results across gated activation models are remarkably consistent. Most importantly, all of them are dominated by conditional enrichment neurons in early-middle layers and all of them tend towards depletion in the very last layers.

7.2 Double checking

Our case studies suggest that conditional enrichment or conditional depletion neurons often behave in a way analogous to their unconditional counterparts. One reason is that our threshold for distinguishing conditional and unconditional classes is somewhat arbitrary.

These and other neurons (for example, proportional change neurons like 25.8607, Appendix H) display a phenomenon we called double checking: They use two quite different reading weight vectors to check for a single concept. Double checking is rooted in the following geometric fact: Two vectors w1, w2 (w_gate and w_in in our case) can be orthogonal to each other but still have a high similarity to a third vector u (e.g., a token unembedding). Example: w1 = (1,0), w2 = (0,1), u = (1,1). Here, w1 and w2 are orthogonal, but u has a cosine of √2/2 ≈ 0.7 to both.

[3] The actual sentence ends with as soon as and comes from a now-dead webpage. We also found one occurrence of at when in what seems to be a paraphrase of the same text, on https://www.docdroid.net/RgxdG5s/fantastic-tips-for-bloggers-of-all-amountsoystcpdf-pdf. We suspect that both texts are machine-generated paraphrases of an original text containing at once (when and as soon as can be synonyms of once in other contexts), and that the model has (also) seen a paraphrased version with at when. In fact many of the largest negative activations are on at in contexts calling for at once.
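This geometric fact is easy to verify numerically. A minimal sketch, using exactly the vectors from the example above:

```python
import numpy as np

def cos(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two orthogonal "reading" directions (standing in for w_gate and w_in)...
w1 = np.array([1.0, 0.0])
w2 = np.array([0.0, 1.0])
u = np.array([1.0, 1.0])  # ...and a third direction, e.g. a token unembedding

assert cos(w1, w2) == 0.0                          # w1 and w2 are orthogonal
assert abs(cos(w1, u) - 2 ** 0.5 / 2) < 1e-12      # yet both have cosine sqrt(2)/2
assert abs(cos(w2, u) - 2 ** 0.5 / 2) < 1e-12      # (about 0.707) to u
```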
Double checking is useful because it shrinks the region in model space that activates the neuron positively. If (say) w_in = w_gate = (1,0), the neuron activates whenever the (normalized) residual input x satisfies x · (1,0) > 0; this happens on the whole half-space x1 > 0. If however w_gate = (1,0) and w_in = (0,1), the neuron activates positively only in the first quadrant (x1, x2 > 0). This behavior thus enables more precise concept detection. This may explain why conditional neurons are more frequent than their unconditional counterparts.

7.3 Stages of inference

We saw in Section 5 that different layers are dominated by different IO functionalities. This leads to a follow-up question: Why does the model use these specific IO functionalities in these specific layers? In particular: Why are there so many conditional enrichment neurons in early-middle layers? And what is the role of (conditional) depletion neurons in later layers? We hypothesize that different IO classes might be responsible for different stages of inference (Lad et al., 2024), as described in the following subsections. In future work, we plan to test this hypothesis using ablation experiments.

7.4 Enrichment

We saw in Section 5 that there often is positive similarity between reading and writing weights of neurons,
especially with conditional enrichment neurons in early-middle layers.

These neurons seem a good fit for the feature engineering stage (Lad et al., 2024), corresponding to enrichment as defined by Geva et al. (2023). Indeed, they output a direction similar to the one they detect, which could correspond to related concepts. Geva et al.’s (2023) description of enrichment precisely involves writing related concepts to the residual stream. In later layers, the (conditional) enrichment neurons we investigated in our case studies (Section 6) have an output that is semantically identical to the input. Thus they seem to reinforce existing predictions.

In general, we use the term enrichment because the output weight is never mathematically identical to one of the reading weights. But depending on the analysis of a particular neuron (e.g., by way of a case study), magnification (no change) or enrichment (e.g., change Ireland in the input to Dublin in the output) may be the more intuitive human interpretation.

7.5 Depletion

We saw in Section 5 that depletion neurons appear mostly in the last few layers, and conditional depletion neurons appear in later-middle layers (if at all). These neurons reduce the presence of the directions they detect. Therefore they seem a good fit for the residual sharpening stage – getting rid of attributes that are not directly needed for next token prediction.

We found depletion neurons more difficult to interpret than enrichment neurons. Most notably, neuron 31.9634 was a complex case in that we found contexts in which a weak positive presence of again led to an enrichment-like functionality (see Section 6.2). This mechanism involves a negative value of Swish. Previous authors (Gurnee et al., 2023) often assumed that GELU (or equivalently, Swish) is “essentially the same activation as a ReLU”, and said they “would be particularly excited to see future work exhibiting [...]
case studies” of mechanisms involving negative values of such an activation function. To our knowledge, we show for the first time that negative values of Swish can play a crucial role in how transformers function. Still, all neurons we investigated do deplete input directions from the output even if they do not do so in all contexts. We plan to further elucidate the intuitive role depletion plays in follow-up work.

8 Conclusion

We explored the IO perspective for investigating gated neurons in LLMs. Our method complements prior interpretability approaches and provides new insights into the inner workings of LLMs.

We observed that a large share of neurons exhibit non-trivial IO interactions. The concrete IO functionalities differ from layer to layer, which is probably related to different stages of inference. In particular, early-middle layers are dominated by conditional enrichment neurons, which may be responsible for representation enrichment.

We plan to further develop this new perspective in future work. In particular, we will do ablation experiments to conclusively show if, as we hypothesized, the conditional enrichment neurons in early-middle layers are responsible for representation enrichment and the depletion neurons in the last few layers contribute to residual sharpening. We also plan
to investigate the evolution of IO functionalities during model training. Finally, we would like to go beyond the analysis of single neurons and address the question of how neurons work together within and across IO classes.

Limitations

This paper focuses on a parameter-based interpretation of single neurons. This has the advantage of being simple and efficient, but is also inherently limited in scope. Accordingly, our method is not designed to replace other neuron analysis methods, but to complement them.

The mathematical similarities of weights are insightful, but they should not be taken as one-to-one representations of semantic similarity: We find cases in which close-to-orthogonal vectors represent very similar concepts (Section 7.2), and cases in which mathematically similar vectors represent related but non-identical concepts (Section 7.3).

Our case studies of individual neurons can be accused of cherry-picking: we picked neurons that we expected to be interpretable, all of which occur in the last few layers. Therefore our interpretations may not carry over to less interpretable (e.g. polysemantic) neurons, or to neurons in earlier layers.

Finally, we provide only possible interpretations of the phenomena we observe, and do not claim them to be definitive explanations.

References

01.AI, Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Guoyin Wang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, Wen Xie, Wenhao Huang, Xiaohui Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yanpeng Li, Yuchi Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu, Zhiyuan Liu, and Zonghong Dai. 2025. Yi: Open foundation models by 01.AI. Preprint, arXiv:2403.04652.

Nora Belrose, Zach Furman, Logan Smith, Danny Halawi, Igor Ostrovsky, Lev McKinney, Stella Biderman, and Jacob Steinhardt. 2023. Eliciting latent predictions from transformers with the tuned lens.
Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah. 2022. Toy models of superposition.

Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. 2021. A mathematical framework for transformer circuits.

Amit Elhelo and Mor Geva. 2024. Inferring functionality of attention heads from their parameters. Preprint, arXiv:2412.11965.

Team Gemma. 2024. Gemma.

Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson. 2023. Dissecting recall of factual associations in auto-regressive language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12216–12235, Singapore. Association for Computational Linguistics.

Mor Geva, Avi Caciularu, Kevin Wang, and Yoav Goldberg. 2022. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 30–45, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.

Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5484–5495, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Dirk Groeneveld, Iz Beltagy, Evan Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, William Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah Smith, and Hannaneh Hajishirzi. 2024. OLMo: Accelerating the science of language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15789–15809, Bangkok, Thailand. Association for Computational Linguistics.

Wes Gurnee, Theo Horsley, Zifan Carl Guo, Tara Rezaei Kheirkhah, Qinyi Sun, Will Hathaway, Neel Nanda, and Dimitris Bertsimas. 2024. Universal neurons in GPT2 language models.

Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas. 2023. Finding neurons in a haystack: Case studies with sparse probing.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B.
Preprint, arXiv:2310.06825.

Vedang Lad, Wes Gurnee, and Max Tegmark. 2024. The remarkable robustness of LLMs: Stages of inference? Preprint, arXiv:2406.19384.

Joseph Miller and Clement Neo. 2023. We found an neuron in GPT-2.

Beren Millidge and Sid Black. 2022. The singular value decompositions of transformer weight matrices are highly interpretable.

Ari S. Morcos, David G.T. Barrett, Neil C. Rabinowitz, and Matthew Botvinick. 2018. On the importance of single directions for generalization.

Neel Nanda and Joseph Bloom. 2022. TransformerLens. https://github.com/TransformerLensOrg/TransformerLens.

Jingcheng Niu, Andrew Liu, Zining Zu, and Gerald Penn. 2024. What does the knowledge neuron thesis have to do with knowledge?

nostalgebraist. 2020. Interpreting GPT: The logit lens.

Kiho Park, Yo Joong Choe, and Victor Veitch. 2024. The linear representation hypothesis and the geometry of large language models. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 39643–39666. PMLR.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Cody Rushing and Neel Nanda. 2024. Explorations of self-repair in language models. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 42836–42855. PMLR.

Noam Shazeer. 2020. GLU variants improve transformer.

Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Chandu, Jennifer Dumas, Yanai Elazar, Valentin Hofmann, Ananya Jha, Sachin Kumar, Li Lucy, Xinxi Lyu, Nathan Lambert, Ian Magnusson, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew Peters, Abhilasha Ravichander, Kyle Richardson, Zejiang Shen, Emma Strubell, Nishant Subramani, Oyvind Tafjord, Evan Walsh, Luke Zettlemoyer, Noah Smith, Hannaneh Hajishirzi, Iz Beltagy, Dirk Groeneveld, Jesse Dodge, and Kyle Lo. 2024. Dolma: an open corpus of three trillion tokens for language model pretraining research. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15725–15788, Bangkok, Thailand. Association for Computational Linguistics.

Alessandro Stolfo, Ben Wu, Wes Gurnee, Yonatan Belinkov, Xingyi Song, Mrinmaya Sachan, and Neel Nanda. 2024. Confidence regulation neurons in language models.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models. ArXiv, abs/2302.13971.

Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Neural Information Processing Systems.

Elena Voita, Javier Ferrando, and Christoforos Nalmpantis. 2024. Neurons in large language models: Dead, n-gram, positional. In Findings of the Association for Computational Linguistics: ACL 2024, pages 1288–1301, Bangkok, Thailand. Association for Computational Linguistics.
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zhihao Fan. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. OPT: Open pre-trained transformer language models. Preprint, arXiv:2205.01068.

A Overview of the appendix

Appendix B: Software and data.
Appendix C: Impact statement.
Appendix D: “Responsible NLP” statements.
Appendix E: Visualization of a SwiGLU neuron (Section 3).
Appendix F: IO classes vs. Gurnee et al.’s (2024) functional roles. Used in Section 6.
Appendix G: Details on case studies (Section 6).
Appendix H: More case studies (complementing Section 6).
Appendix I: Results across models (complementing Section 5).

We chose to put the last section at the end because it is very long and would otherwise disrupt reading of the other sections.

B Software and data

We publish the
software at this GitHub URL. See the readme file for detailed documentation. The repository also contains the visualizations of max/min activations for the neuron case studies in Section 6. Everything else can be quickly reproduced, and the plots are included in this paper. The repository is under the Apache 2.0 license.

C Impact statement

This paper presents work whose goal is to advance the field of machine learning interpretability. The underlying assumption of the field is that models have underlying structure (are not just an inscrutable mess) and that discovering this structure will have several benefits. First, ideally, any scientific field should have a deep understanding of the models it uses; results that are obtained using blackbox models are hard to understand, replicate and generalize. Second, once we understand our models better, we will be better able to address failure modes. For example, once we understand how unaligned behavior like bias and hallucinations comes about, it will be easier to address them, e.g., by changing the model architecture. Third, interpretability can support explainability. If we understand how a recommendation or answer came about, we can better assess its validity.

D “Responsible NLP” statements

D.1 Models and data

Gemma. To download the model one needs to explicitly accept the terms of use. NLP research is explicitly listed as an intended usage. Primarily English and code (Gemma, 2024).

Llama. Inference code and weights under an ad hoc license. There is also an “Acceptable Use Policy”. Our work is well within those terms. Languages mostly include English and programming languages, but also Wikipedia dumps from “bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk” (Touvron et al., 2023).

OLMo and Dolma. Training and inference code, weights (OLMo), and data (Dolma) under Apache 2.0 license. “The Science of Language Models” is explicitly mentioned as an intended use case.
Dolma is quality-filtered and designed to contain only English and programming languages (though we came across some French sentences as well, see Table 4) (Groeneveld et al., 2024; Soldaini et al., 2024).

Mistral. Inference code and weights are released under the Apache 2.0 license, but accessing them requires accepting the terms. Languages are not explicitly mentioned in the paper, but clearly include English and code (Jiang et al., 2023).

Qwen. Inference code and weights under Apache 2.0 license. Supports “over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more” (Yang et al., 2024).

Yi. Inference code and weights under Apache 2.0 license. Trained on English and Chinese (01.AI et al., 2025).

D.2 Computational experiments

All our experiments can be run on a single NVIDIA RTX A6000 (48GB). The main analysis, computing the weight cosines, needs less than a minute per model. The most expensive part was the activation-based analysis in Section 6: We needed a single run of ≈25h to store the max/min activating examples for all neurons, and then ≈45s per neuron (≈5min in total) to recompute its activations on the relevant
texts and visualize them. We use TransformerLens (Nanda and Bloom, 2022). A colleague kindly provided us with a version that also supports OLMo.

E More on SwiGLU

Figure 6 visualizes a SwiGLU neuron (described in Section 3).

F IO classes vs. functional roles

We compare our results with those of another classification scheme we mentioned in Section 2: the functional roles defined by Gurnee et al. (2024). See Appendix F.3 for the results.

F.1 Definition of functional roles

The definition of functional roles is based exclusively on the neuron’s output weight w_out. Most of the roles are defined by their output token distribution, i.e., properties of the distribution

cos(w_out, W_U) = ( w_out · W_U[:,1] / (‖w_out‖ ‖W_U[:,1]‖), ..., w_out · W_U[:,d_vocab] / (‖w_out‖ ‖W_U[:,d_vocab]‖) ) ∈ [−1, 1]^d_vocab,

the cosine similarities between the output weight vector and the columns of the unembedding matrix. Functional roles are defined as follows. Prediction and suppression neurons have a cos(w_out, W_U) with high kurtosis (meaning there are many outliers) and a high skew in absolute value (meaning the outliers tend to be only on one side). Positive skew corresponds to predicting a subset of tokens, negative skew to suppressing it. Partition neurons have a distribution cos(w_out, W_U) with high variance. This often corresponds to two sets of output tokens, one that is promoted and one that is suppressed. In entropy neurons (examined in more detail by Stolfo et al. (2024)), w_out lies in a direction that does not correspond to any output tokens. Mathematically, a high proportion of the norm of w_out is in W_U’s effective null space, i.e., it corresponds to singular vectors of W_U whose corresponding singular values are close to zero. Entropy neurons increase or decrease the presence of such directions. This changes the norm of the residual stream, but leaves the token ranking more or less untouched.
Because a final LayerNorm is applied before W_U, this indirectly affects the logits of all tokens: the output token probabilities become more evenly distributed (higher entropy), or less so (lower entropy). Attention (de)activation neurons (de)activate an attention head by having it put less (or more) of its attention on the BOS token. (The effect of a head attending only to BOS is negligible.) Consider an attention head with query matrix W_Q ∈ R^(d_model × d_head) = R^(4096 × 128) and BOS key vector k_BOS ∈ R^(d_head). Attention (de)activation neurons for this head are those with a high positive or negative score w_out · W_Q · k_BOS.

All of these definitions require a threshold and/or some adaptation to gated activation functions. We describe our approach in Appendix F.2.

[Figure 6: Visualization of the SwiGLU activation function for a single neuron. Boxes represent vectors, ellipses represent scalars.]

F.2 Adapting the definitions

The functional role definitions require a threshold and/or some adaptation to gated activation functions. We proceed as follows:

• We set the number of partition neurons to be 1000, which gives a variance of 0.0007 as a threshold.

• Preliminary experiments show that (absolute) skew and kurtosis are highly correlated in practice, so we decide to focus on kurtosis to find prediction/suppression neurons. We then choose a kurtosis threshold for prediction/suppression, such that the
prediction/suppression class is disjoint from partition. This gives a (very high) excess kurtosis of 230.9736.

• Entropy: Following Stolfo et al. (2024), we focus on the last layer, and we define the null space of W_U as the subspace of model space spanned by its last 40 singular vectors. We find that two neurons have a particularly high proportion of their norm in this null space, and define these as entropy neurons.

• Attention (de)activation: To ensure comparability across heads, we normalize w_out and W_Q · k_BOS. Thus the scores can be intuitively understood as cosine similarities between these two vectors. We choose ±√2/2 as a cutoff. We keep only those neurons that we did not already classify as partition or prediction/suppression.

• In our case the neuron can be activated positively or negatively, so we cannot distinguish prediction from suppression a priori. Instead, we automatically distinguish prediction and suppression from each other by the sign of cos(w_in, w_gate) · skew(cos(w_out, W_U)) (as opposed to just the sign of the skew). The quantity cos(w_in, w_gate) indicates the typical sign of the activation a priori. Even though this is not very trustworthy, it gives some interesting results.

• The same problem occurs for the distinction of attention activation and deactivation. As before, we multiply the original quantity w_out · W_Q · k_BOS by cos(w_in, w_gate) and only then look at the sign. Note that here a positive sign means high attention on BOS, hence attention deactivation. It turns out that all relevant neurons are attention deactivation according to this metric.

F.3 Results

The contingency matrix in Table 2 is a systematic comparison of our IO classes with Gurnee et al. (2024)’s functional roles. We first see again that Gurnee et al. (2024) assign a functional role to only a small proportion of all neurons. 349,521 of 352,256 neurons remain unclassified.
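The distribution statistics behind these functional-role cutoffs are straightforward to compute. The following numpy sketch is illustrative only: the matrix sizes and the synthetic weight vectors are toy stand-ins, not the OLMo weights, and the thresholds above (variance 0.0007, excess kurtosis 230.9736) would be applied to the real model.

```python
import numpy as np

def cos_with_unembedding(w_out, W_U):
    # cos(w_out, W_U): cosine of w_out against every column (token) of W_U.
    cols = W_U / np.linalg.norm(W_U, axis=0, keepdims=True)
    return (w_out / np.linalg.norm(w_out)) @ cols

def excess_kurtosis(c):
    z = (c - c.mean()) / c.std()
    return float((z ** 4).mean() - 3.0)   # high -> many outliers (prediction/suppression)

def skewness(c):
    z = (c - c.mean()) / c.std()
    return float((z ** 3).mean())         # sign: predicting vs. suppressing a token subset

def null_space_fraction(w_out, W_U, k=40):
    # Share of ||w_out||^2 lying in the span of W_U's k least significant left
    # singular vectors (the "effective null space" used for entropy neurons).
    U, _, _ = np.linalg.svd(W_U, full_matrices=False)  # singular values descending
    return float(np.linalg.norm(U[:, -k:].T @ w_out) ** 2 / np.linalg.norm(w_out) ** 2)

# Synthetic "prediction-like" w_out: aligned with a few token columns.
rng = np.random.default_rng(0)
W_U = rng.normal(size=(256, 5000))        # toy d_model x d_vocab unembedding
w_out = W_U[:, :3].sum(axis=1)
c = cos_with_unembedding(w_out, W_U)
assert excess_kurtosis(c) > 0 and skewness(c) > 0   # right-tail outliers

# Synthetic "entropy-like" w_out: the least significant singular direction of W_U.
U, _, _ = np.linalg.svd(W_U, full_matrices=False)
assert null_space_fraction(U[:, -1], W_U) > 0.99
```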
In contrast, our IO classes are exhaustive and robustly identify functionalities like conditional depletion and enrichment that are explanatory for how transformers process language.

We find that prediction neurons, suppression neurons and (less consistently) partition neurons mostly occur in the final layers, replicating Gurnee et al. (2024)'s findings. Most of these neurons are orthogonal output or proportional change. This is not unexpected, as these are some of the largest classes. Conversely, however, a majority of the (relatively few) depletion neurons have prediction or partition as functional role. The only two entropy neurons in OLMo-7B-0424 occur in the last layer and are conditional depletion neurons.

G Details on case studies

See Tables 3 and 4 for more details on the case studies of Section 6.

IO class | prediction | suppression | partition | entropy | att. deact. (excl. partition / all) | other | total
depletion | 73 | 0 | 51 | 0 | 2 / 14 | 117 | 243
at. depletion | 114 | 0 | 61 | 0 | 0 / 3 | 429 | 604
c. depletion | 68 | 1 | 24 | 2 | 0 / 0 | 12,344 | 12,439
at. c. depletion | 19 | 0 | 13 | 0 | 0 / 0 | 12 | 44
orthogonal output | 826 | 203 | 516 | 0 | 0 / 0 | 190,832 | 192,377
proportional change | 111 | 206 | 139 | 0 | 0 / 1 | 23,358 | 23,814
at. proportional change | 25 | 0 | 16 | 0 | 2 / 2 | 85 | 128
c. enrichment | 48 | 0 | 179 | 0 | 0 / 1 | 121,446 | 121,673
at. c. enrichment | 14 | 0 | 0 | 0 | 0 / 0 | 660 | 674
enrichment | 6 | 0 | 0 | 0 | 0 / 0 | 18 | 24
at. enrichment | 15 | 0 | 1 | 0 | 0 / 0 | 220 | 236
total | 1,319 | 410 | 1,000 | 2 | 4 / 21 | 349,521 | 352,256

Table 2: Contingency table of IO classes (rows) vs Gurnee et al. (2024)'s functional roles (columns) for OLMo-7B-0424. c = conditional, at = atypical. Cutoffs for prediction/suppression and partition were chosen as described in Appendix F.2. Many neurons with a high attention deactivation score are also partition neurons; the left column under "attention deactivation" counts only those that are not. OLMo-7B-0424 has no attention activation neurons with a high enough score.

Neuron | IO category | cos(w_gate, w_in) | cos(w_gate, w_out) | cos(w_in, w_out)
28.4737 | enrichment | 0.5290 | 0.5048 | 0.7060
28.9766 | conditional enrichment | 0.4764 | 0.4119 | 0.5982
31.9634 | depletion | -0.7164 | 0.7218 | -0.8542
29.10900 | conditional depletion | 0.4988 | -0.4992 | -0.5775
30.10972 | proportional change | -0.4543 | 0.5814 | -0.4182
29.4180 | orthogonal output | -0.0272 | -0.4057 | 0.0669

Table 3: Overview of prediction/suppression neurons chosen for case studies in Section 6.

H More case studies

These are various neurons that popped out to us as possibly interesting, for not very systematic reasons, for example because they strongly activated on a specific named entity. All of them are in OLMo-7B. We present them by IO class. For most of these case studies we did only a quick and dirty weight-based analysis. In some cases we also tried W_E (input embeddings) instead of W_U (unembeddings) for the logit-lens style analysis.

H.1 Conditional enrichment neurons

0.1480: w_gate, −w_in, −w_out all have tokens similar to box (when using W_E). Activates on Xbox.
4.1940: country appears in w_in among many other things. When using W_E, Philippines and Manila appear in w_out. Activates on Philippines.
4.3720: the gate seems country/government related. When using W_E, we find w_out and w_gate contain some country names. Activates on Denmark.
4.4801: Muhammad appears in the gate vector. Activates on Muhammad.
4.5772: predicts ian as in Egyptian.
When using W_E, all three weight vectors contain Egypt. Activates on Egypt.
4.6517 has a very Ireland-related (or Celtic-nations-related) gate vector. The interpretations of the other two weights are less obvious, but Irish and Dublin appear in w_in among many other things, and UK and London appear in −w_out (Ireland is emphatically not in the UK!). When using W_E, Ireland appears among the top tokens of all three weight vectors. Activates on Ireland.
4.6799: When using W_E, Vietnam is among the tokens corresponding to −w_out. Activates on Vietnam.
4.7667: all three weights are related to consoles in different ways. Activates on Xbox.
4.9983: w_out is related to electronic devices, w_in to either electronic devices or sports (surfing may belong to both), and w_gate is also mostly related to electronic devices. When using W_E, we find w_out contains iPhone as a top token. Activates on iPhone.
4.10859: When using W_E, we find w_gate and w_out include Thailand as a top token, w_out additionally Buddha, Buddhist. Activates on Thailand.
4.10882: When using W_E, we find −w_out contains Italy; −w_in and w_gate additionally contain Rome. Activates on Italy.
4.10995: Boston appears in the gate and Massachusetts in −w_in. When using W_E, we find −w_out and w_gate contain Massachusetts and Boston, and −w_in contains Boston. Activates on Massachusetts.
22.2589: w_gate and −w_in recognize tokens like Islam, Muhammad and others related
to the Arabo-Islamic world. The same goes for −w_out (as it is similar to w_in). Activates on Muhammad.

Neuron, IO class | w_gate | w_in | w_out | Top activations
28.4737, enrichment | ≈ w_out | ≈ w_out | pos: review, Review | pos (13.75): "Download EBOOK [...] Description of the book [...] \n" -> Reviews; neg (-2.25): "The answer's at the bottom of this" -> post
28.9766, conditional enrichment | pos: well, well; neg: far, high | ≈ w_out | pos: well, well | pos (18.63): "Could have saved myself some time. Oh" -> ", well"; neg (-3.66): "Seek to understand them more" -> fully
31.9634, depletion | ≈ w_out | ≈ −w_out | neg: again, Again | pos (5.12): "jumping off the roof of his Los Angeles apartment building ." -> Meanwhile; neg (-3.48): "the areas of the doorjamb where the door" -> often
29.10900, conditional depletion | pos: today, nowadays; neg: these, these | ≈ −w_out | pos: these, These | pos (12.79): "social media tools change and come and go at the drop of a hat" -> .; neg (-2.18): "la couleur de sa robe et" -> le
30.10972, proportional change | ≈ w_out | pos: when, when; neg: timing, dates | neg: when, when | pos (2.67): "Take pleasure in the rest of the new year ." -> You; neg (-6.14): "puts you on multiple webpages at" -> as soon as
29.4180, orthogonal output | pos: here, therein; neg: there, we | pos: here, in; neg: ? | pos: there, there | pos (14.41): "here or" -> there; neg (-2.31): "without any consideration being issued or paid there" -> for

Table 4: Description of the weight vectors of the selected neurons, by top tokens or similarity to w_out. The question mark, ?, signals unknown unicode characters. The last column presents the (shortened) text samples on which the respective neuron activates most strongly (positively or negatively).

24.4880: For all three weight vectors the first four tokens (but not more) are Philippine-related (even though the gate vector is actually not very similar to the others). The gate vector also reacts to other geographical
names, which may have in common that they are associated with non-"white" (Black, Asian or Latin) people in the US sense (Singapore, Malaysian, Nigerian, Seoul, Pacific, Kerala, Bangkok, but also (Los) Angeles and Bronx). Activates on Philippines.
24.6771: w_gate, −w_in, −w_out all correspond to capitalized first names. Activates on Muhammad.
25.2723: Some tokens associated with w_in and w_out are possible completions for th (th-ousand, th-ought, th-orn). When using W_E, in all three weights there are a few th tokens, but also ph and similar. Activates on Thailand.
25.10496: −w_in and −w_out correspond to tokens starting with v (upper or lower case, with or without preceding space). w_gate, on the other hand, seems to react to appropriate endings for tokens starting in v: vol-atility, v-antage, v-intage, vel-ocity, V-ancouver. When using W_E, we also find all three weight vectors are very v-heavy. Activates on Vietnam.

H.2 Depletion neurons

30.9996: Downgrades weird tokens if present / promotes frequent English stopwords if absent. Also an attention deactivation neuron for 15 heads in layer 31.

H.3 Proportional change neurons

25.7032: Some tokens associated with w_gate and w_out are possible completions for x or ex (X-avier, x-yz, ex-cel, ex-ercise). When using W_E, both x and box (with variants) appear in all three weight vectors. Activates on Xbox.
25.8607: All
three vectors correspond to tokens related to cities. Moreover, −w_out seems to correspond to non-city places, such as national governments or villages. w_in is actually not that similar to w_gate and w_out (in terms of cosine similarities), but all three correspond to city-related tokens. When using W_E, in all three weights there are a few city-related tokens. Activates on Paris. We may think of the two input directions as two largely independent ways of checking that "it's about a city" (this is a recurring phenomenon that we describe in Section 7.2). When the gate activates but the linear input does not confirm it's about a city, the output promotes closely related but non-city interpretations (for example, Paris actually refers to the French government in some contexts).
29.8118: Partition neuron, highest variance of all proportional change neurons. Also an attention deactivation neuron for 4 heads (0, 2, 11, 15) in layer 30.
31.5490: Activates on Muhammad. w_gate reacts to various Asian names and Asian-sounding subwords, w_in to surnames as opposed to other English words starting with a space and an uppercase letter. w_out corresponds to more Asian material (mostly subwords) as opposed to English surnames.
31.6275: Mostly promotes two-letter tokens (no preceding space, typically uppercase). −w_in: typically lowercase single letters. −w_gate: mostly lowercase two-letter tokens. "If no lowercase two-letter tokens, promote uppercase two-letter tokens proportionally to the absence of lowercase single letters"?
31.8342: This is an -ot- neuron: w_gate and w_out correspond to -o(t)- suffixes, −w_in to various -ot- material. Judging by the weight similarities, we expect that w_out is typically activated negatively: downgrade -o(t)- suffixes if present in the residual stream. Activates on Egypt.

H.4 Orthogonal output neurons

0.1758: When using W_E, all three weight vectors' top tokens are famous web sites, including YouTube. Activates on YouTube.
0.3338: When using W_E, we find especially w_gate and −w_in, but also −w_out, are similar to smartphone-related tokens. Activates on iPhone.
0.3872: When using W_E, we find especially w_gate, but also −w_in and −w_out, correspond to city names. Activates on Paris.
0.7829: When using W_E, we find w_in, w_out and to a lesser extent w_gate correspond in large part to software names. Activates on iTunes.
0.7966: When using W_E, the weight vectors mostly correspond to tokens starting with th. Activates on Thor.
29.2568: w_out: Asian (Thai?) sounding syllables vs. (Asian) geographic names in English and other material; w_in reacts to Thailand and Asian (geography) material as opposed to (mostly) US material; w_gate pretty much the same. Activates on Thailand.
29.3327: w_gate mostly reacts to city names (Paris being the most important one), −w_in to countries and cities, especially in continental Europe (France and Paris on top), as opposed to places related to the former British Empire. Relevant is −w_out, which corresponds to pieces of geographical names and especially rivers in France (Se-ine, Rh-one / Rh-ine, Mar-ne, Mos-elle... Norm-andie, Nancy, commun...). w_gate and −w_in also react to river(s). Activates on Paris.
29.4101: w_gate and w_in react to YouTube (top token!); w_out downgrades it (almost bottom token) and promotes subscrib*, views, channels etc. Activates on YouTube.
29.6417: Downgrades recording and similar. w_gate
and w_in are also similar and involve iTunes. Activates on iTunes.
29.9734: w_gate reacts to the East in a broad sense as opposed to the West (Iran, Kaz-akhstan, Kash-mir, Ukraine...), w_in mostly to male first names without preceding space. w_out seems to produce word pieces that could begin a foreign name. Activates on Muhammad.
30.2667: w_gate reacts to suffixes (for adjectives derived from place names) like en, ian, ians; basically the same for w_in and w_out. Activates on Muhammad.
30.3143: w_gate reacts to words related to entities that are authoritative for various reasons (officials, authorities, according, researchers, spokesman, investigators...). −w_in reacts to uncertainty (reportedly, according... allegedly... accused). −w_out is again police, authorities, officials, court, but with no preceding space. Activates on Philippines. What authorities and uncertainty have to do with the Philippines is unclear.
30.3883: w_gate and −w_in react to Virginia and Afghanistan, among others (in the case of w_gate: as opposed to other geographical names with no preceding space associated with the South and the sea); −w_out is activated and promotes all variants of af (and ghan) but downgrades Virginia etc. Activates on Afghanistan.
30.4577: Seems to be related to rugby: w_gate and, slightly less obviously, w_in react to rugby-related tokens (midfielder, quarterback...); w_out promotes different tokens that upon reflection could be related to rugby as well. Activates on Ireland.
30.5372: Promotes natural and related, downgrades inst tokens. w_in reacts to wildlife etc. as opposed to institute etc.; w_gate reacts to institute as opposed to natural. Activates on Massachusetts (in which situation it promotes Institute, which makes sense because of MIT).
30.8535: −w_out is one in all variants, w_gate too; w_in splits one, ones and the equivalent Chinese characters, on the positive side, from One, 1, ONE on the negative side (and many other things on both sides). Activates on Xbox.
Presumably this happens because One is a possible prediction (Xbox One), and presumably the output reinforces that.
31.2135: orthogonal output, on the conditional enrichment side (weak conditional enrichment, one of the neurons on the vertical axis). w_gate reacts to single letters or symbols as opposed to some English content words without preceding space; w_in and w_out mostly to Chinese or Japanese characters as opposed to some Latin diacritics and other weird stuff. Language choice? "If it's not English and single letters are floating around, make sure to choose the right language / character set."
31.10424: w_gate, −w_in and w_out correspond to score in the top tokens, which is downgraded if present. Activates on Paris. No idea what's happening here.

I Results across models

These final figures show our analyses of IO functionalities by layer (Section 5) for all the models we investigated. We note a few additional patterns that appear only in some of these models:
•In Yi and the OLMo models, the prevalence of conditional enrichment neurons starts even earlier, at the very first layer. A particularly interesting example is Yi: in layer 0 an enormous 68% of all neurons are conditional enrichment, then almost none, then there is a second wave around layers 11-17 (out of 32), which have around 25% of
conditional enrichment neurons each.
•In some models, especially the OLMo ones, there is a non-negligible number of conditional depletion neurons. They tend to appear in middle-to-late layers, shortly after the conditional enrichment wave. The clearest example is OLMo-1B, with a peak of 1418 conditional depletion neurons out of 8192 (17%) in layer 9 out of 16.

[Figure 7: Distribution of neurons by layer and category for a range of models. Per-layer neuron counts for each IO class (enrichment, atypical enrichment, conditional enrichment, atypical conditional enrichment, proportional change, atypical proportional change, orthogonal output, depletion, atypical depletion, conditional depletion, atypical conditional depletion), shown for allenai/OLMo-1B-hf, allenai/OLMo-7B-0424-hf, gemma-2-2b, gemma-2-9b, Llama-2-7b, and meta-llama/Llama-3.1-8B.]

[Figure 8: Continuation of Figure 7, showing meta-llama/Llama-3.2-1B, meta-llama/Llama-3.2-3B, mistral-7b, Qwen/Qwen2.5-0.5B, Qwen/Qwen2.5-7B, and yi-6b. Including a copy of Figure 3 (Llama-3.2-3B) for convenience.]
[Figure 9: Boxplots for the distribution of weight cosine similarities (|cos(w_gate, w_in)|, |cos(w_gate, w_out)|, cos(w_in, w_out)) in each layer, shown for allenai/OLMo-1B-hf, allenai/OLMo-7B-0424-hf, gemma-2-2b, gemma-2-9b, Llama-2-7b, and meta-llama/Llama-3.1-8B. For cos(w_gate, w_in) and cos(w_gate, w_out) we show the absolute value since their sign does not carry any information on its own.]

[Figure 10: Continuation of Figure 9, showing meta-llama/Llama-3.2-1B, meta-llama/Llama-3.2-3B, mistral-7b, Qwen/Qwen2.5-0.5B, Qwen/Qwen2.5-7B, and yi-6b. Including a copy of Figure 4 (Llama-3.2-3B) for convenience.]
[Figures 11-22: Per-layer plots of the pairwise weight cosine similarities cos(w_gate, w_in), cos(w_gate, w_out) and cos(w_in, w_out), one panel per layer and one figure per model: Figure 11 allenai/OLMo-1B-hf, Figure 12 allenai/OLMo-7B-0424-hf, Figure 13 gemma-2-2b, Figure 14 gemma-2-9b, Figure 15 Llama-2-7b, Figure 16 meta-llama/Llama-3.1-8B, Figure 17 meta-llama/Llama-3.2-1B, Figure 18 meta-llama/Llama-3.2-3B, Figure 19 mistral-7b, Figure 20 Qwen/Qwen2.5-0.5B, Figure 21 Qwen/Qwen2.5-7B, Figure 22 yi-6b.]
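The pairwise weight cosine similarities summarized in these per-layer figures can be computed as follows (an illustrative numpy sketch; the layout with one neuron per column of each weight matrix is our assumption):

```python
import numpy as np

def pairwise_weight_cosines(W_gate, W_in, W_out):
    """Per-neuron cosine similarities between the three SwiGLU weight
    vectors of one layer. Each W_* has shape (d_model, n_neurons),
    one neuron per column (layout assumed for illustration)."""
    def unit(M):
        return M / np.linalg.norm(M, axis=0, keepdims=True)
    g, i, o = unit(W_gate), unit(W_in), unit(W_out)
    return {
        # the signs of cos(w_gate, w_in) and cos(w_gate, w_out) carry no
        # information on their own, so the figures show absolute values
        "abs_cos_gate_in": np.abs((g * i).sum(axis=0)),
        "abs_cos_gate_out": np.abs((g * o).sum(axis=0)),
        "cos_in_out": (i * o).sum(axis=0),
    }

# toy layer with two neurons: one with w_in = w_out, one with w_in = -w_out
W_gate = np.array([[1.0, 0.0], [0.0, 1.0]])
W_in = np.array([[1.0, 0.0], [0.0, 1.0]])
W_out = np.array([[1.0, 0.0], [0.0, -1.0]])
stats = pairwise_weight_cosines(W_gate, W_in, W_out)
print(stats["cos_in_out"])  # [ 1. -1.]
```

Collecting these per-neuron values for each layer gives the distributions plotted above.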
arXiv:2505.17950v1 [cs.CL] 23 May 2025
Volume x(x), x-xx. http://dx.doi.org/xxx-xxx-xxx

Handling Symbolic Language in Student Texts: A Comparative Study of NLP Embedding Models

Tom Bleckmann (1), Paul Tschisgale (2)*

Abstract

Recent advancements in Natural Language Processing (NLP) have facilitated the analysis of student-generated language products in learning analytics (LA), particularly through the use of NLP embedding models. Yet when it comes to science-related language, symbolic expressions such as equations and formulas introduce challenges that current embedding models struggle to address. Existing studies and applications often either overlook these challenges or remove symbolic expressions altogether, potentially leading to biased findings and diminished performance of LA applications. This study therefore explores how contemporary embedding models differ in their capability to process and interpret science-related symbolic expressions. To this end, various embedding models are evaluated using physics-specific symbolic expressions drawn from authentic student responses, with performance assessed via two approaches: similarity-based analyses and integration into a machine learning pipeline. Our findings reveal significant differences in model performance, with OpenAI's GPT-text-embedding-3-large outperforming all other examined models, though its advantage over other models was moderate rather than decisive. Beyond performance, additional factors such as cost, regulatory compliance, and model transparency are discussed as key considerations for model selection. Overall, this study underscores the importance for LA researchers and practitioners of carefully selecting NLP embedding models when working with science-related language products that include symbolic expressions.

Notes for Practice

• The use of NLP-based approaches for analyzing science-related language products is challenged by the presence of symbolic expressions, such as formulas and equations.
Since these expressions can represent central aspects of students' ideas, handling them effectively is crucial for learning analytics applications.
• This study shows that commonly used NLP embedding models differ in their capability to handle physics-specific symbolic expressions, with proprietary models appearing to achieve the best performance.
• While differences in capabilities may seem small at first glance, even modest improvements in learning analytics applications due to the choice of a particular embedding model can have a meaningful impact in large-scale applications, potentially enhancing feedback quality for thousands of students in widespread deployments.

Keywords: natural language processing, symbolic expressions, text embeddings, science language

Submitted: xxx. Accepted: xxx. Published: xxx.

(1) IDMP - Physics Education Group, Leibniz University Hannover, Hannover, Germany. Email: bleckmann@idmp.uni-hannover.de
(2) Leibniz Institute for Science and Mathematics Education, Kiel, Germany. Email: tschisgale@leibniz-ipn.de

1. Introduction and Literature Review

With the increasing digitalization of education, vast amounts of textual data are being generated, offering valuable insights into students' learning processes. However, automatically extracting and interpreting this information from textual data remains a significant challenge. Recent advancements in artificial intelligence (AI), particularly in Natural Language Processing (NLP), have substantially improved the processing and analysis of textual data in learning analytics (LA) contexts (Ferreira-Mello, André, Pinheiro, Costa, & Romero, 2019; Liu et al., 2025), with NLP techniques now widely used to examine the large volumes of language products generated by students (Zhai, Yin, Pellegrino, Haudek, & Shi, 2020). A systematic literature review conducted by Ferreira-Mello et al.
(2024) indicates that NLP embedding models have been the predominant approach in | https://arxiv.org/abs/2505.17950v1 |
NLP-based LA research between 2021 and 2023, generally outperforming other text analysis methods. The general principle behind NLP embedding models is to transform words, sentences, or even entire documents into high-dimensional numerical vectors, known as embeddings, that capture their semantic content (Qiu et al., 2021). Embedding models typically rely on neural network architectures that are trained on large collections of text. They generate contextualized embeddings, meaning that they capture how the meaning of a word depends on its surrounding context (Ethayarajh, 2019). As a result, the generated embeddings position semantically similar inputs closer together in the embedding space, allowing for nuanced comparisons between textual elements. This embedding space typically has hundreds or even thousands of dimensions, capturing latent features that would be difficult to identify with traditional text analysis techniques. Embeddings are then typically utilized in further downstream tasks within learning analytics, such as applying machine learning algorithms for classification or pattern recognition. NLP embedding models have been successfully applied to analyze various types of student-generated language products, including argumentations (Martin, Kranz, Wulff, & Graulich, 2023), concept maps (Bleckmann & Friege, 2023), constructed responses (Gombert et al., 2023), and problem-solving approaches (Tschisgale, Wulff, & Kubsch, 2023). While these NLP-based analyses of students' textual responses have yielded promising results, the language used in science-related contexts (e.g., in scientific arguments, explanations, and problem solutions) differs significantly from everyday language (Lemke, 1990), particularly as it often includes symbolic expressions, such as formulas, equations, and standard scientific notations (Brookes & Etkina, 2009; Ferreira, 2022; Meadows, Zhou, & Freitas, 2022).
These symbolic expressions may even serve as the core elements within language products, representing fundamental scientific concepts and relationships while enabling precise and formal communication of disciplinary knowledge (Treagust, Chittleborough, & Mamiala, 2003). For example, in physics, when a student writes "E_total = const. holds", they succinctly encapsulate the conceptual idea of energy conservation; omitting the symbolic expression would strip the sentence of its essential meaning. Although symbolic expressions are often intuitive for professionals in the sciences, their integration into natural language poses challenges for students (Marais & Jordaan, 2000; De Lozano & Cardenas, 2002), and similarly for NLP embedding models. Such models struggle with mixtures of text and symbolic expressions not only because their training data rarely include such combinations but also because contextually encoding symbolic expressions would require explicit training (Ferreira, 2022). Although some studies explicitly report difficulties in handling symbolic expressions within NLP-based analyses, many others may not even be aware of the issue. Examples from supervised machine learning settings, particularly in automated assessment, illustrate such issues with symbolic representations: Botelho, Baral, Erickson, Benachamardi, and Heffernan (2023) identified a strong relationship between the presence of mathematical expressions in student responses and increased classification error rates, while Bleckmann (2024) observed lower predictive accuracies for textual elements containing symbolic expressions. Similarly, in an unsupervised machine learning setting, Tschisgale et al.
(2023) found that textual elements containing symbolic expressions were largely grouped into one particular cluster, even when more conceptually appropriate clusters existed, suggesting that the utilized embedding model was unable to adequately capture the meaning | https://arxiv.org/abs/2505.17950v1 |
of the symbolic expressions. To mitigate such issues, some studies have opted to exclude symbolic expressions from textual data (Shang, Huang, Zeng, Zhang, & Wang, 2022). However, both approaches—omitting symbolic expressions or neglecting the information they convey—risk losing potentially valuable information. Since symbolic expressions, such as formulas and equations, often encapsulate key scientific concepts, disregarding them in textual data analysis may lead to systematic errors. In particular, students who more heavily rely on symbolic expressions in their responses may be disproportionately affected, resulting in biased assessments that systematically underestimate their performance or conceptual understanding. Thus, there is a clear need for approaches that can effectively process symbolic expressions in science-related textual data. Since these approaches primarily rely on NLP embedding models, an essential first step is to better understand how different models handle textual data interwoven with symbolic expressions. Therefore, this study aims to compare the performance of various NLP embedding models in processing such symbolic expressions. We focus on symbolic expressions in a physics context, given the field’s strong reliance on the interplay between natural language and symbolic expressions (Wulff, 2024). Moreover, rather than using curated, well-formatted symbolic expressions, we focus on those found in authentic student responses. This approach accounts for the notational variability in student responses (e.g., “E=m*g*h”, “E(pot)=gmh”, and “Epot=Mg ∆h” all represent the same physics idea but differ in their notation), thereby addressing an additional potential challenge for NLP embedding models. 
Based on all these considerations, this study aims to answer the following research question: To what extent can different NLP embedding models accurately process and interpret physics-specific symbolic expressions as they appear in authentic student responses?

Table 1. Overview of the NLP embedding models that are compared in this study.

  Model name                              Short description                                                      Dimension
  German_Semantic_STS_V2                  German sentence-transformers model                                     1024
  paraphrase-multilingual-mpnet-base-v2   Multilingual sentence-transformers model (Reimers & Gurevych, 2019)    768
  Gbert-large                             German BERT language model (Chan, Schweter, & Möller, 2020)            1024
  G-SciEdBERT                             German BERT language model fine-tuned for science education (Latif, Lee, Neumann, Kastorff, & Zhai, 2024)   768
  GPT-text-embedding-3-large              Multilingual third-generation embedding model from OpenAI              3072
  GPT-text-embedding-ada-002              Multilingual second-generation embedding model from OpenAI             1536

2. Methods

To address the research question, we adopted two approaches: First, we conducted a similarity-based analysis to examine the extent to which embedding models relate physics-specific symbolic expressions with their literal translations and related physics concepts. In the second approach, we evaluated the performance of selected embedding models within a machine learning pipeline. For both approaches, we used textual data collected from German students, which included symbolic expressions and was generated in a physics context. Although symbolic expressions such as formulas and equations are generally similar across languages, they were often embedded within German text rather than appearing in isolation. Therefore, the selection of embedding models investigated in this study, summarized in Table 1, includes both German-specific and multilingual models.
This selection also comprises a mix of open-source and proprietary models, aiming to critically examine the frequently observed performance advantage of proprietary models over open-source alternatives across a range of physics-related tasks (Feng et al., 2025; Wang et al., 2024). Furthermore, one of the selected | https://arxiv.org/abs/2505.17950v1 |
embedding models is specifically optimized for science education.

2.1 Evaluation of Embedding Model Performance via Similarities

2.1.1 Data Sources and Preparation

In the first approach, we examined textual responses from German students to two physics tasks (Bleckmann, 2024; Tschisgale et al., 2023). In one task, students were given an incomplete concept map in which the concepts and the arrows between them were predefined, and students had to label these arrows, which represent the relations between concepts (Bleckmann, 2024). In the other task, students had to describe their approach to solving a physics problem, i.e., outline the ideas they would use and the steps they would take when actually solving the problem (Tschisgale et al., 2023). In both tasks, students' responses were visually checked for symbolic expressions, which were then extracted by hand without their surrounding contextual information (e.g., the sentence in which they appeared)[1]. Consequently, the upcoming analyses focus on the pure symbolic expressions found in students' written responses. In this way, a corpus of N = 100 symbolic expressions was generated, with an equal distribution between the two task types: 50% from the concept map task and 50% from the problem-solving task.

To assess the extent to which different embedding models effectively process students' symbolic expressions SE_i, five categories of expression-text pairs (SE_i, ·) were constructed:

1. Category LT: Pairs (SE_i, LT_i) consisting of literal translations LT_i of symbolic expressions SE_i.
2. Category ILT: Pairs (SE_i, ILT_i) containing incorrect literal translations ILT_i of symbolic expressions SE_i, created by modifying one or, in some cases, a few key words from the corresponding correct literal translation LT_i.
3. Category RC: Pairs (SE_i, RC_i) incorporating the primary related concept RC_i for each symbolic expression SE_i, provided that it could be determined in a straightforward and unambiguous manner.
4. Category IRC: Pairs (SE_i, IRC_i) including incorrect related concepts IRC_i that are somewhat related to the correct related concept RC_i but not directly associated with the symbolic expression SE_i itself.
5. Category OT: Pairs (SE_i, OT_i) where OT_i = "apple pie recipe", serving as an off-topic (OT) reference that was entirely unrelated to physics and hence also unrelated to all symbolic expressions SE_i.

The authors of this study formulated these expression-text pairs for each symbolic expression. Examples are provided in Table 2.

[1] While contextualized embeddings are designed to leverage surrounding context, the present approach deliberately omits contextual information to establish a lower bound for NLP embedding models' capabilities to handle symbolic expressions. In practical applications where contextual information is available (e.g., entire sentences in which symbolic expressions are embedded), model performance would likely be higher. This design choice allows for a clearer assessment of embedding models' inherent capabilities in processing symbolic expressions when no contextual information is available.

Table 2. Three exemplary symbolic expressions (SE) from the data corpus, including their literal translations (LT), incorrect literal translations (ILT), related concepts (RC), incorrect related concepts (IRC), and off-topic statements (OT); all translated from German into English by the authors. For some symbolic expressions (as in Example 3), we refrained from stating (incorrect) related concepts if no clear physics concept was identifiable or the decision was highly ambiguous.

  Example 1:
    SE:  "m·g·h = 1/2·m·v²"
    LT:  "mass times gravitational acceleration times height equals half mass times velocity squared"
    ILT: "mass times gravitational acceleration times height equals half mass times acceleration squared"
    RC:  "law of conservation of energy"
    IRC: "law of conservation of momentum"
    OT:  "apple pie recipe"
  Example 2:
    SE:  "V=AXT"
    LT:  "velocity equals acceleration times time"
    ILT: "velocity is constant"
    RC:  "uniformly accelerated motion"
    IRC: "uniform motion"
    OT:  "apple pie recipe"
  Example 3:
    SE:  "v=sqrt(g*r)"
    LT:  "velocity equals square root of gravitational acceleration times radius"
    ILT: "acceleration equals square root of gravitational acceleration times radius"
    RC:  NA
    IRC: NA
    OT:  "apple pie recipe"

2.1.2 Computing Similarities of Pairs

For each NLP embedding model M (see Table 1), all symbolic expressions SE_i and their corresponding textual counterparts LT_i, ILT_i, RC_i, IRC_i, and OT_i were mapped to their respective d-dimensional embedding representations: e^M_SEi, e^M_LTi, e^M_ILTi, e^M_RCi, e^M_IRCi, and e^M_OTi. The embedding dimension d depends on the chosen embedding model M (see Table 1). To assess the capability of different embedding models to capture the semantic meaning of symbolic expressions SE_i, we computed a similarity measure for each expression-text pair (SE_i, ·) based on their respective model-dependent embeddings. Intuitively, two embeddings (which are simply high-dimensional vectors) from the same embedding model M are considered more similar if the angle between them is smaller. This is because embedding models are trained to place semantically similar terms closer together in embedding space, which naturally results in smaller angles between their corresponding embedding vectors. This idea is illustrated and exemplified in Fig. 1. A widely used metric for quantifying this similarity in high-dimensional embedding spaces is cosine similarity, which is essentially the cosine of the angle between two embeddings (Hirschle, 2022).
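As a concrete illustration of this similarity measure, here is a minimal NumPy sketch. The vectors are made-up toy examples standing in for real model embeddings, which have hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """S_cos = <a, b> / (||a|| * ||b||), in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" (hypothetical values for illustration only).
e_se = np.array([0.9, 0.1, 0.3, 0.2])    # symbolic expression "m·g·h = 1/2·m·v²"
e_rc = np.array([0.8, 0.2, 0.4, 0.1])    # related concept "energy conservation"
e_ot = np.array([-0.2, 0.9, -0.1, 0.4])  # off-topic "apple pie recipe"

print(cosine_similarity(e_se, e_rc))  # ≈ 0.98: near-parallel, semantically related
print(cosine_similarity(e_se, e_ot))  # ≈ -0.04: near-orthogonal, unrelated
```

A well-behaved embedding model should reproduce exactly this ordering for real inputs: correct literal translations and related concepts score high, off-topic statements score near zero.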
Specifically, given two embeddings e^M_A = (e^M_A,1, ..., e^M_A,d) and e^M_B = (e^M_B,1, ..., e^M_B,d), generated by the embedding model M and serving as semantic representations of the textual statements "A" and "B", the cosine similarity S_cos is given by

  S_cos(e^M_A, e^M_B) = cos ∠(e^M_A, e^M_B) = ⟨e^M_A, e^M_B⟩ / (‖e^M_A‖ · ‖e^M_B‖) ∈ [−1, 1],

where ∠(e^M_A, e^M_B) denotes the angle between the embedding vectors, ⟨e^M_A, e^M_B⟩ is the scalar product of the embedding vectors given by

  ⟨e^M_A, e^M_B⟩ = Σ_{k=1}^{d} e^M_A,k · e^M_B,k,

and ‖·‖ = √⟨·,·⟩ is the Euclidean norm.

Figure 1. Illustration of the expected embedding structure if a model M is capable of accurately processing and interpreting symbolic expressions. Using Example 1 from Table 2, the corresponding embedding vectors e^M are shown in a simplified two-dimensional embedding space (for illustration purposes). The plot includes vectors for the specific symbolic expression SE_i, its related concept RC_i, an incorrect related concept IRC_i, and an off-topic statement OT_i. The angles α, β, and γ between embedding vectors reflect semantic similarity. Specifically, if the embedding model M can accurately process and interpret a symbolic expression SE_i, then the angle α between its embedding vector e^M_SEi and that of a related concept e^M_RCi should be smaller than the angle β to the embedding of an incorrect related concept e^M_IRCi, and much smaller than the angle γ to the embedding of an off-topic statement e^M_OTi.

A cosine similarity of 1 indicates that two embeddings point in the same direction within the embedding space, representing maximum similarity (e.g., synonymous terms), while a value of −1 indicates that they point in opposite directions, suggesting strong dissimilarity or antonymy. A value of 0 implies that the embeddings are orthogonal, suggesting that there is no semantic relationship between the corresponding textual statements (which might be the case for the off-topic category).

2.1.3 Statistical Analyses

We aimed to investigate how embedding-model-dependent cosine similarity values can be used to distinguish between correct and incorrect expression-text pairs. In other words, we examined whether specific embedding models assigned higher similarity scores to correct pairs (e.g., "V=AXT" as symbolic expression and "velocity equals acceleration times time" as a correct literal translation) and lower scores to incorrect pairs (e.g., "V=AXT" and "velocity is constant" as an incorrect literal translation), thereby indicating an embedding model's capability to capture meaningful semantic relationships with regard to symbolic expressions. More precisely, two complementary analyses were performed.

The first analysis utilized receiver operating characteristic (ROC) curves, as they allow quantifying the extent to which higher similarity scores are more strongly associated with correct pairs and lower scores with incorrect pairs, while specifically focusing on the separability of these two categories. In particular, ROC curves assess how effectively different similarity score thresholds can distinguish between correct and incorrect expression-text pairs (i.e., LT vs. ILT, and RC vs. IRC) (Brown & Davis, 2006). In our context, a ROC curve depicts the true positive rate against the false positive rate across various thresholds of cosine similarity S_cos.
Values exceeding the threshold are classified as correct (LT or RC), while values below the threshold are classified as incorrect (ILT or IRC). This method provides a visual impression of an embedding model's capability to differentiate between correct and incorrect cases solely based on cosine similarity values. To quantify this capability, the area under the curve (AUC ∈ [0, 1]) is computed for each embedding model, with AUC = 1 representing a perfect classifier, AUC = 0.5 indicating random guessing, and AUC = 0 indicating a classifier that systematically misclassifies all instances.

The second analysis focused on individual symbolic expressions rather than the whole distribution of similarity values. If an embedding model M effectively captures the semantics of symbolic expressions, we expect that the similarity between the embedding of a given symbolic expression SE_i and the embedding of its corresponding literal translation LT_i (or related concept RC_i) should be greater than its similarity to the embedding of an incorrect literal translation ILT_i (or an incorrect related concept IRC_i), i.e.:

  S_cos(e^M_SEi, e^M_LTi) > S_cos(e^M_SEi, e^M_ILTi)   (1a)
  S_cos(e^M_SEi, e^M_RCi) > S_cos(e^M_SEi, e^M_IRCi)   (1b)

or, rewritten in terms of the difference ΔS^M_cos in model-dependent cosine similarity values:

  ΔS^M_cos(LT vs. ILT) := S_cos(e^M_SEi, e^M_LTi) − S_cos(e^M_SEi, e^M_ILTi) > 0   (2a)
  ΔS^M_cos(RC vs. IRC) := S_cos(e^M_SEi, e^M_RCi) − S_cos(e^M_SEi, e^M_IRCi) > 0   (2b)

A nonparametric statistical test that assesses whether similarity values of correct expression-text pairs tend to be greater than those of the corresponding incorrect expression-text pairs is the paired one-sided Wilcoxon signed-rank test (Woolson, 2005). This test
is a paired difference test, meaning that it evaluates the distribution of differences obtained by subtracting the similarity values of two paired categories associated with the same symbolic expression (see Eq. (2a) and Eq. (2b)). The test then assesses whether the distribution of these differences is symmetric about zero (null hypothesis). We adopt a one-sided alternative hypothesis, where a significant result indicates that similarity values associated with correct categories (LT, RC) are systematically higher than those of their corresponding incorrect categories (ILT, IRC). Furthermore, we computed the matched-pairs rank-biserial correlation coefficient as a measure of effect size (Kerby, 2014; King, Rosopa, & Minium, 2011).

2.2 Evaluation of Embedding Models within a Machine Learning Pipeline

The second approach aimed to examine how the choice of a particular embedding model affects the performance of a supervised machine learning classifier trained on the respective model-specific embeddings. For this purpose, we used data from the physics concept mapping task, where students labeled predefined connections between concepts (cf. Section 2.1.1). The concept maps resulted in a dataset of N = 3322 textual elements, or so-called propositions (concept A - linking words - concept B), the majority of which were purely textual (83%) (e.g., velocity - increases with - free fall), while a subset incorporated symbolic expressions to varying degrees (17%) (e.g., force - F = m*a - acceleration). Each proposition was evaluated for its correctness and level of detail using a rating scheme consisting of four categories (wrong answers, superficial answers, simple but more directed answers, detailed answers). After the propositions were preprocessed, different classification models (e.g., random forest, SVM, or boosting approaches) were trained. The aim was to find a model that could automatically classify the propositions into one of the four rating categories.
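The two statistical analyses of Section 2.1.3 can be sketched numerically. The following sketch uses made-up similarity scores (not the study's data) to compute an AUC via the rank-statistic formulation and to run the paired one-sided Wilcoxon test with SciPy, including the matched-pairs rank-biserial effect size:

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

rng = np.random.default_rng(0)

# Hypothetical cosine similarities for 100 symbolic expressions:
# S_cos(SE_i, LT_i) for correct pairs vs. S_cos(SE_i, ILT_i) for incorrect pairs.
s_correct = rng.normal(0.55, 0.10, 100)    # made-up scores
s_incorrect = rng.normal(0.45, 0.10, 100)  # made-up scores

# Analysis 1: AUC equals the probability that a randomly chosen correct pair
# scores higher than a randomly chosen incorrect pair (rank-sum formulation).
ranks = rankdata(np.concatenate([s_correct, s_incorrect]))
n = len(s_correct)
auc = (ranks[:n].sum() - n * (n + 1) / 2) / (n * n)

# Analysis 2: paired one-sided Wilcoxon signed-rank test on the differences
# Delta S_cos = S_cos(SE_i, LT_i) - S_cos(SE_i, ILT_i), cf. Eq. (2a).
d = s_correct - s_incorrect
stat, p = wilcoxon(d, alternative="greater")

# Matched-pairs rank-biserial correlation: the signed share of the rank sum
# contributed by positive vs. negative differences.
r = rankdata(np.abs(d))
effect = (r[d > 0].sum() - r[d < 0].sum()) / r.sum()
```

With clearly separated score distributions like these, the AUC lands well above 0.5, the one-sided p-value is small, and the effect size is positive, which is the pattern the study reports for the stronger embedding models.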
To assess how well the models perform this task, all propositions were evaluated by human raters in the four categories, and this was considered the gold standard. In the training process, the data was split into training, validation, and test sets using a stratified cross-validation strategy, and hyperparameter optimization was performed. The German_Semantic_STS_V2 embedding model was used to transform the propositions, which were available in natural language, into sentence embeddings. Overall, it was found that a support vector machine (SVM) classifier (Bishop, 2006) provided the best Cohen's κ on the dataset. In this context, Cohen's κ quantifies the agreement between the classifier's predictions and the true category labels, while adjusting for chance agreement (Cohen, 1960). In the present study, we investigate the effect of substituting the original embedding model with alternative models (see Table 1) on the classification performance of the trained SVM, measured by Cohen's κ.

3. Results

The distributions of computed similarity values across embedding models are visualized in Fig. 2, where each embedding model is represented by multiple boxplots corresponding to the five categories exemplified in Table 2. Evidently, and on average, the cosine similarity between symbolic expressions and their correct literal translations or related concepts seems to be higher, albeit sometimes only slightly, than the similarity with incorrect literal translations
or related concepts, respectively. This may be regarded as a first indicator that the embedding models are, to some extent, capable of interpreting symbolic expressions meaningfully. Interestingly, only the three models German_Semantic_STS_V2, paraphrase-multilingual-mpnet-base-v2, and GPT-text-embedding-3-large assign noticeably lower similarity scores to the off-topic statement (OT) compared to the more meaningful comparisons. Among these, only the latter two assign values close to zero, suggesting that they successfully detect a lack of semantic similarity between the symbolic expressions and the off-topic statement "apple pie recipe".

Figure 2. Boxplot visualization of cosine similarity values across embedding models. For each embedding model M, there are five boxplots representing the distributions of similarity values S_cos(e^M_SEi, e^M_Ci) for expression-text pairs of the five categories C ∈ {LT, ILT, RC, IRC, OT}, with LT: literal translations, ILT: incorrect literal translations, RC: related concepts, IRC: incorrect related concepts, OT: off-topic.

3.1 Evaluation of Embedding Model Performance via Similarities

We aimed to investigate how embedding-model-dependent cosine similarity values can be used to distinguish between correct and incorrect expression-text pairs. To this end, two complementary analyses were conducted. The first analysis involved generating receiver operating characteristic (ROC) curves (see Fig. 3). Moreover, the area under the curve (AUC) was computed for each embedding model as a measure of the separability of correct and incorrect expression-text pairs.

Figure 3. Receiver operating characteristic (ROC) curves illustrating the capability of different embedding models to distinguish between correct and incorrect categories of expression-text pairs (left: LT vs. ILT; right: RC vs. IRC) based solely on model-dependent cosine similarity values. Curves closer to the top-left corner indicate better classification performance. The corresponding AUC values, ranked from highest to lowest, are displayed in the legend next to each embedding model.

This analysis revealed that, for distinguishing correct from incorrect literal translations (LT vs. ILT; see Fig. 3, left), GPT-text-embedding-3-large is the most effective embedding model (AUC = 0.73), although this AUC value only indicates moderate separability. Both German_Semantic_STS_V2 (AUC = 0.64) and GPT-text-embedding-ada-002 (AUC = 0.61) perform only slightly above random guessing. The remaining models performed close to random guessing (AUC = 0.57/0.53/0.50). For distinguishing correct from incorrect related concepts (RC vs. IRC; see Fig. 3, right), GPT-text-embedding-3-large again outperformed the other models (AUC = 0.74). The remaining models do not demonstrate substantial improvement over random guessing (AUC = 0.49, ..., 0.58).

The second analysis focused on examining similarity values at the level of individual expression-text pairs by evaluating the differences in cosine similarity across embedding models for correct vs. incorrect literal translations (LT vs. ILT; see Eq. (2a)) and correct vs. incorrect related concepts (RC vs. IRC; see Eq. (2b)). These embedding-model-dependent distributions are visualized via boxplots in Fig. 4. These boxplots already reveal that there are overall differences between the embedding models and, in particular, that some embedding models appear to struggle with reliably distinguishing between correct and incorrect literal translations as well as between correct and incorrect related concepts. Differences between embedding models based on those distributions were further quantified: the results presented in Table 3 show the proportion of instances in which the similarity between a symbolic expression
SE_i and the correct literal translation LT_i (or correct related concept RC_i) exceeds the expression's similarity with the corresponding incorrect literal translation ILT_i (or incorrect related concept IRC_i), as formally represented in Equations (1a) and (1b). To assess statistical significance, we conducted a paired one-sided Wilcoxon signed-rank test and computed the matched-pairs rank-biserial correlation coefficient as a measure of effect size.

Figure 4. Boxplot visualization of the distribution of cosine similarity differences ΔS^M_cos across the evaluated embedding models M, as defined in Eq. (2a) and Eq. (2b). Left: correct literal translations (LT) vs. incorrect literal translations (ILT). Right: correct related concepts (RC) vs. incorrect related concepts (IRC). Better embedding model performance in handling symbolic expressions is indicated by a greater portion of the distribution lying above the horizontal zero line.

The findings demonstrate that the GPT-text-embedding-3-large model consistently outperforms all other embedding models in both the LT vs. ILT (prop. = 91%, d = 0.95) and the RC vs. IRC comparison (prop. = 83%, d = 0.82). However, the performance ranking of the remaining models depends on the comparison. In the case of literal translations (LT vs. ILT), the next best performing models are GPT-text-embedding-ada-002 (prop. = 71%, d = 0.56) and German_Semantic_STS_V2 (prop. = 66%, d = 0.60), followed by paraphrase-multilingual-mpnet-base-v2 (prop. = 63%, d = 0.36). The Gbert-large (prop. = 46%, d = 0.04) and G-SciEdBERT (prop. = 41%, d = −0.09) models exhibited the weakest performance, with non-significant results. For the comparison of related concepts based on correctness (RC vs. IRC), G-SciEdBERT emerges as the second-best model (prop. = 68%, d = 0.40), followed by Gbert-large (prop. = 57%, d = 0.29) and GPT-text-embedding-ada-002 (prop. = 58%, d = 0.27). In contrast, the lowest-performing models for this comparison are German_Semantic_STS_V2 (prop. = 55%, d = 0.14) and paraphrase-multilingual-mpnet-base-v2 (prop. = 49%, d = −0.03).

Table 3. Results of the instance-based comparisons of similarity values. Abbreviations: prop. = proportion of instances for which Eq. (1a) or (1b) holds; p = p-value of a paired one-sided Wilcoxon signed-rank test; d = effect size given by the matched-pairs rank-biserial correlation coefficient; n.s.: non-significant; *: p < 0.05; **: p < 0.01; ***: p < 0.001.

                                   LT vs. ILT            RC vs. IRC
  Model                            prop.  p     d        prop.  p     d
  German_Semantic_STS_V2           66%    ***   0.60     55%    n.s.  0.14
  paraph.-multi.-mpnet-base-v2     63%    **    0.36     49%    n.s.  -0.03
  Gbert-large                      46%    n.s.  0.04     57%    **    0.29
  G-SciEdBERT                      41%    n.s.  -0.09    68%    **    0.40
  GPT-text-embedding-3-large       91%    ***   0.95     83%    ***   0.82
  GPT-text-embedding-ada-002       71%    ***   0.56     58%    *     0.27

3.2 Evaluation of Embedding Models within a Machine Learning Pipeline

We compared different embedding models by evaluating their impact on the classification performance (measured by Cohen's κ) of a support vector machine (SVM) trained on the model-specific embeddings in categorizing students' responses into four categories based on correctness and level of detail.

Figure 5. Performance of the SVM classifier, measured by Cohen's κ, for solely textual student responses (left) and responses including symbolic expressions (right), depending on the NLP embedding model used.

Figure 5 illustrates how Cohen's κ depends on the underlying embedding model, and in particular how it differs between solely textual responses and responses including symbolic expressions. As expected, the average classification performance across embedding models is better for solely textual responses (mean κ = 0.72) than for responses including symbolic expressions (mean κ = 0.60). In more detail, the results indicate that paraphrase-multilingual-mpnet-base-v2 and Gbert-large achieved performance comparable
to the originally used model (German_Semantic_STS_V2) for both types of student responses. However, the performance of the G-SciEdBERT model in classifying both types of student responses was significantly lower, consistent with the results presented in Section 3.1. Overall, the GPT-text-embedding-ada-002 model provided a slight improvement in performance across both response types, while the GPT-text-embedding-3-large model yielded the largest improvement, with Δκ = +0.10 for solely textual responses and Δκ = +0.08 for responses including symbolic expressions. These findings highlight that the choice of embedding model does influence classification performance. While the differences may appear modest at first glance, in large-scale LA applications even small improvements can make a meaningful difference. For example, an increase of 0.10 in κ can roughly be thought of as correctly classifying one additional student response out of every ten, potentially affecting feedback quality for thousands of students in large deployments.

4. Discussion

This study revealed substantial differences among NLP embedding models in their ability to process physics-specific symbolic expressions in authentic student responses. Both the similarity-based approach and the more application-oriented machine learning pipeline indicated that some embedding models consistently outperformed others. Notably, OpenAI's subscription-based model GPT-text-embedding-3-large, which at the time of the study was the most recent available, demonstrated superior performance compared to all other models tested, particularly in handling symbolic expressions as well as purely textual input (see Fig. 5, left). This performance advantage is likely attributable to the model's broader training corpus, which presumably includes a greater proportion of symbolic expressions, and to its more advanced embedding architecture compared to the other models.
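The two quantitative analyses summarized above can be sketched as follows. This is a minimal illustration, not the authors' code: the exact SVM hyperparameters and validation scheme are not reported in this excerpt, so scikit-learn defaults with stratified cross-validated predictions are assumed.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

def similarity_comparison(sim_correct, sim_incorrect):
    """Instance-based comparison (Sec. 3.1): paired one-sided Wilcoxon
    signed-rank test plus the matched-pairs rank-biserial correlation
    (Kerby, 2014) on per-item cosine similarities."""
    d = np.asarray(sim_correct) - np.asarray(sim_incorrect)
    prop = float(np.mean(d > 0))               # share of items satisfying Eq. (1a)/(1b)
    _, p = wilcoxon(d, alternative="greater")  # H1: correct target is more similar
    nz = d[d != 0]                             # zero differences carry no rank information
    ranks = rankdata(np.abs(nz))
    effect = (ranks[nz > 0].sum() - ranks[nz < 0].sum()) / ranks.sum()
    return prop, p, effect

def pipeline_kappa(embeddings, labels, folds=5):
    """Pipeline evaluation (Sec. 3.2): an SVM trained on model-specific
    embeddings, scored with Cohen's kappa on held-out predictions."""
    preds = cross_val_predict(SVC(), embeddings, labels, cv=folds)
    return cohen_kappa_score(labels, preds)
```

For instance, five items whose similarity to the correct literal translation always exceeds that to the incorrect one yield prop. = 100% and d = 1.0, mirroring the reporting format of Table 3.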
Interestingly, even the G-SciEdBERT embedding model, fine-tuned for science education tasks, did not appear to handle symbolic expressions effectively, which is likely due to its fine-tuning on more conceptual science tasks rather than tasks emphasizing the usage of symbolic expressions (Latif et al., 2024). These findings have important implications for both researchers and practitioners in the field of LA. In many educational contexts, textual data often contain symbolic expressions that encode valuable diagnostic information. Ignoring such expressions may introduce systematic biases into research results or lead to misguided conclusions and suboptimal decisions in practical LA applications. Although differences in embedding model performance may appear modest, our more practical machine learning pipeline approach revealed that GPT-text-embedding-3-large achieved an increase of 0.08 in Cohen's κ compared to the average performance in handling symbolic expressions. Such improvements can have substantial consequences in large-scale applications. For instance, in a hypothetical deployment scenario of the investigated machine learning workflow involving 10,000 student responses, this difference in performance would correspond to approximately 800 additional responses being accurately classified. While NLP embedding models will continue to evolve rapidly, the core insight from this study remains significant: there are substantial and practically relevant differences in how well these models handle symbolic expressions in students' responses. This highlights the importance for both researchers and practitioners to be mindful of these differences when choosing an embedding model, especially in contexts where symbolic expressions may play a central role. However, the choice of an embedding model should not be based solely on performance, as other considerations must be taken
into account. One such consideration is cost: while a one-time payment for generating embeddings for a fixed text corpus in a research scenario may be feasible, continuous usage in large-scale LA applications could result in substantial financial costs. Another critical consideration is regulatory compliance, including data protection and privacy concerns, which may lead institutions or developers to prefer locally hosted models. Additionally, reliance on non-local models, such as those from OpenAI, creates a dependency on external providers. Changes in model architectures, pricing structures, or API terms could impact future applications, posing challenges for the long-term planning of research projects or educational applications. In light of these aspects, researchers and practitioners in learning analytics working with textual data that include symbolic expressions must carefully weigh their priorities. Those seeking moderately improved performance in handling symbolic expressions may opt for proprietary models like GPT-text-embedding-3-large, accepting trade-offs such as cost, regulatory constraints, and external dependencies. Alternatively, open-source models offer greater transparency and control but currently exhibit reduced capabilities in processing symbolic expressions. This trade-off underscores a pressing need for open-source embedding models that are explicitly trained on scientific texts and capable of robustly handling symbolic expressions. In this regard, further research is also needed on explicit approaches (Ferreira, 2022), such as mathematical language processing, that directly aim to understand symbolic expressions rather than treating this capability as a side effect of embedding models primarily designed for general language understanding. This study has several limitations that also give rise to possible further research directions. First, our analysis focused exclusively on physics, given its strong reliance on symbolic expressions.
While our findings still provide valuable insights, they may not be fully generalizable to other scientific disciplines with different symbolic structures, such as chemistry (e.g., molecular formulas) or pure mathematics (e.g., simple letters as abstract algebraic objects). Second, this study did not investigate the robustness of embedding models to variations in notation. For example, formulas can appear in different notational forms (e.g., alternative representations of fractions, summation symbols, or variable names), and it remains unclear how well different embedding models handle this variability. Future research should systematically examine which notations are processed effectively and which present challenges, to better assess model performance and understand the models' limitations in handling symbolic expressions. Lastly, the analyses conducted in this study focused on symbolic expressions either without contextual information (in the similarity-based analysis) or with only limited contextual information (in the machine learning pipeline analysis). However, the embedding models used are designed to produce contextualized embeddings and tend to perform best when rich contextual information is available. Therefore, the reported findings regarding the embedding models' capability to handle symbolic expressions should be interpreted as a lower-bound estimate of their potential performance. In practical applications where symbolic expressions are embedded within entire texts, thereby providing additional context, it is reasonable to expect improved performance. Nonetheless, this remains an empirical question, and further research is needed to investigate how contextual information may enhance, or possibly even hinder, the processing of symbolic expressions by embedding models.

5. Conclusion

In summary, this study underscores the importance for LA researchers
and practitioners of carefully selecting NLP embedding models for applications involving science-related language products, particularly those containing symbolic expressions such as formulas and equations. Our results reveal meaningful differences in the capabilities of current embedding models to handle physics-specific symbolic expressions, with proprietary models, especially GPT-text-embedding-3-large, demonstrating the best overall performance, though its advantage over other models was moderate rather than decisive. However, even such modest differences can translate into meaningful improvements in large-scale LA applications, potentially enhancing automated feedback and support for thousands of learners. Nonetheless, embedding model selection must also consider practical factors such as cost, data privacy, and long-term accessibility. Looking ahead, the development of NLP embedding models (or other technologies) that can robustly and transparently process both textual and symbolic information concurrently will be essential for advancing automated assessment based on science-related language products in LA. The evaluation framework presented in this study offers a foundation for systematically benchmarking future embedding models and guiding their application in LA contexts that involve scientific language.

Author Contributions

Both authors contributed equally to this study and the corresponding manuscript. Accordingly, they share first authorship.

Acknowledgments

In the process of refining the manuscript, language editing was conducted with the assistance of ChatGPT, a language model developed by OpenAI. ChatGPT was employed to enhance the clarity, coherence, and style of the text while maintaining the integrity of the original content.

Declaration of Conflicting Interest

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding

This research was conducted using data from the WinnerS project (supported by the Leibniz Association, Germany, under grant no. K194/2015) and data from the LernMINT Research Training Group (funded by the Lower Saxony Ministry of Science and Education under project no. 51410078).

References

Bishop, C. M. (2006). Pattern recognition and machine learning. New York: Springer.
Bleckmann, T. (2024). Formatives Assessment auf Basis von maschinellem Lernen: Eine Studie über automatisiertes Feedback zu Concept Maps aus dem Bereich Mechanik. Logos Verlag Berlin. doi: 10.30819/5842
Bleckmann, T., & Friege, G. (2023, September). Concept maps for formative assessment: Creation and implementation of an automatic and intelligent evaluation method. Knowledge Management & E-Learning: An International Journal, 433–447. doi: 10.34105/j.kmel.2023.15.025
Botelho, A., Baral, S., Erickson, J. A., Benachamardi, P., & Heffernan, N. T. (2023, June). Leveraging natural language processing to support automated assessment and feedback for student open responses in mathematics. Journal of Computer Assisted Learning, 39(3), 823–840. doi: 10.1111/jcal.12793
Brookes, D. T., & Etkina, E. (2009, June). "Force," ontology, and language. Physical Review Special Topics - Physics Education Research, 5(1), 010110. doi: 10.1103/PhysRevSTPER.5.010110
Brown, C. D., & Davis, H. T. (2006, January). Receiver operating characteristics curves and related decision measures: A tutorial. Chemometrics and Intelligent Laboratory Systems, 80(1), 24–38. doi: 10.1016/j.chemolab.2005.05.004
Chan, B., Schweter, S., & Möller, T. (2020). German's next language model. In Proceedings of the 28th International Conference on Computational Linguistics (pp. 6788–6796). Barcelona, Spain (Online): International Committee on Computational Linguistics. doi: 10.18653/v1/2020.coling-main.598
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1).
| https://arxiv.org/abs/2505.17950v1 |
De Lozano, S. R., & Cardenas, M. (2002). Some learning problems concerning the use of symbolic language in physics. Science and Education, 11(6), 589–599. doi: 10.1023/A:1019643420896
Ethayarajh, K. (2019). How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. arXiv. doi: 10.48550/ARXIV.1909.00512
Feng, K., Zhao, Y., Liu, Y., Yang, T., Zhao, C., Sous, J., & Cohan, A. (2025). PHYSICS: Benchmarking foundation models on university-level physics problem solving. arXiv. doi: 10.48550/ARXIV.2503.21821
Ferreira, D. (2022). Mathematical language processing: Deep learning representations and inference over mathematical text (Unpublished doctoral dissertation). University of Manchester, Manchester.
Ferreira-Mello, R., Freitas, E., Cabral, L., Dawn, F., Rodrigues, L., Rakovic, M., . . . Gasevic, D. (2024, October). Words of wisdom: A journey through the realm of NLP for learning analytics - a systematic literature review. Journal of Learning Analytics, 1–24. doi: 10.18608/jla.2024.8403
Ferreira-Mello, R., André, M., Pinheiro, A., Costa, E., & Romero, C. (2019, November). Text mining in education. WIREs Data Mining and Knowledge Discovery, 9(6), e1332. doi: 10.1002/widm.1332
Gombert, S., Di Mitri, D., Karademir, O., Kubsch, M., Kolbe, H., Tautz, S., . . . Drachsler, H. (2023, June). Coding energy knowledge in constructed responses with explainable NLP models. Journal of Computer Assisted Learning, 39(3), 767–786. doi: 10.1111/jcal.12767
Hirschle, J. (2022). Deep Natural Language Processing: Einstieg in Word Embedding, Sequence-to-Sequence-Modelle und Transformer mit Python. München: Hanser.
Kerby, D. S. (2014, January). The simple difference formula: An approach to teaching nonparametric correlation. Comprehensive Psychology, 3, 11.IT.3.1. doi: 10.2466/11.IT.3.1
King, B. M., Rosopa, P. J., & Minium, E. W. (2011). Statistical reasoning in the behavioral sciences (6th ed.). Wiley.
Latif, E., Lee, G.-G., Neumann, K., Kastorff, T., & Zhai, X. (2024). G-SciEdBERT: A contextualized LLM for science assessment tasks in German. arXiv. doi: 10.48550/ARXIV.2402.06584
Lemke, J. L. (1990). Talking science: Language, learning, and values (J. Green, Ed.). New Jersey: Ablex Publishing Corporation.
Liu, X., Zambrano, A. F., Baker, R. S., Barany, A., Ocumpaugh, J., Zhang, J., . . . Wei, Z. (2025, March). Qualitative coding with GPT-4: Where it works better. Journal of Learning Analytics, 12(1), 169–185. doi: 10.18608/jla.2025.8575
Marais, P., & Jordaan, F. (2000, October). Are we taking symbolic language for granted? Journal of Chemical Education, 77(10), 1355. doi: 10.1021/ed077p1355
Martin, P. P., Kranz, D., Wulff, P., & Graulich, N. (2023, September). Exploring new depths: Applying machine learning for the analysis of student argumentation in chemistry. Journal of Research in Science Teaching, tea.21903. doi: 10.1002/tea.21903
Meadows, J., Zhou, Z., & Freitas, A. (2022). PhysNLU: A language resource for evaluating natural language understanding and explanation coherence in physics. arXiv. doi: 10.48550/ARXIV.2201.04275
Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., & Huang, X. (2021). Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10), 1872–1897. doi: 10.1007/s11431-020-1647-3
Reimers, N., & Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using siamese BERT-networks. arXiv:1908.10084. doi: 10.48550/ARXIV.1908.10084
Shang, J., Huang, J., Zeng, S., Zhang, J., & Wang, H. (2022, May). Representation and extraction of physics knowledge based on knowledge graph and embedding-combined text classification for cooperative
learning. In 2022 IEEE 25th International Conference on Computer Supported Cooperative Work in Design (CSCWD) (pp. 1053–1058). Hangzhou, China: IEEE. doi: 10.1109/CSCWD54268.2022.9776230
Treagust, D., Chittleborough, G., & Mamiala, T. (2003, November). The role of submicroscopic and symbolic representations in chemical explanations. International Journal of Science Education, 25(11), 1353–1368. doi: 10.1080/0950069032000070306
Tschisgale, P., Wulff, P., & Kubsch, M. (2023, September). Integrating artificial intelligence-based methods into qualitative research in physics education research: A case for computational grounded theory. Physical Review Physics Education Research, 19(2), 020123. doi: 10.1103/PhysRevPhysEducRes.19.020123
Wang, X., Hu, Z., Lu, P., Zhu, Y., Zhang, J., Subramaniam, S., . . . Wang, W. (2024, June). SciBench: Evaluating college-level scientific problem-solving abilities of large language models (No. arXiv:2307.10635). arXiv. doi: 10.48550/arXiv.2307.10635
Woolson, R. (2005). Wilcoxon signed-rank test. In Encyclopedia of Biostatistics (2nd ed.). John Wiley & Sons, Ltd.
Wulff, P. (2024, March). Physics language and language use in physics—What do we know and how AI might enhance language-related research and instruction. European Journal of Physics, 45(2), 023001. doi: 10.1088/1361-6404/ad0f9c
Zhai, X., Yin, Y., Pellegrino, J. W., Haudek, K. C., & Shi, L. (2020, January). Applying machine learning in science assessment: A systematic review. Studies in Science Education, 56(1), 111–151. doi: 10.1080/03057267.2020.1735757
arXiv:2505.17952v1 [cs.CL] 23 May 2025

Beyond Distillation: Pushing the Limits of Medical LLM Reasoning with Minimalist Rule-Based RL

Che Liu1*, Haozhe Wang2*, Jiazhen Pan3*, Zhongwei Wan4, Yong Dai5, Fangzhen Lin2, Wenjia Bai1, Daniel Rueckert1,3, Rossella Arcucci1
1Imperial College London, 2HKUST, 3Technical University of Munich, 4Ohio State University, 5Fudan University
che.liu21@imperial.ac.uk
Project page: https://cheliu-computation.github.io/AlphaMed/

Abstract

Improving performance on complex tasks and enabling interpretable decision making in large language models (LLMs), especially for clinical applications, requires effective reasoning. Yet this remains challenging without supervised fine-tuning (SFT) on costly chain-of-thought (CoT) data distilled from closed-source models (e.g., GPT-4o). In this work, we present AlphaMed, the first medical LLM to show that reasoning capability can emerge purely through reinforcement learning (RL), using minimalist rule-based rewards on public multiple-choice QA datasets, without relying on SFT or distilled CoT data. AlphaMed achieves state-of-the-art results on six medical QA benchmarks, outperforming models trained with conventional SFT+RL pipelines. On challenging benchmarks (e.g., MedXpert), AlphaMed even surpasses larger or closed-source models such as DeepSeek-V3-671B and Claude-3.5-Sonnet. To understand the factors behind this success, we conduct a comprehensive data-centric analysis guided by three questions: (i) Can minimalist rule-based RL incentivize reasoning without distilled CoT supervision? (ii) How do dataset quantity and diversity impact reasoning? (iii) How does question difficulty shape the emergence and generalization of reasoning? Our findings show that dataset informativeness is a key driver of reasoning performance, and that minimalist RL on informative, multiple-choice QA data is effective at inducing reasoning without CoT supervision.
We also observe divergent trends across benchmarks, underscoring limitations in current evaluation and the need for more challenging, reasoning-oriented medical QA benchmarks. The code and pretrained model weights will be publicly released upon acceptance.

* Equal contribution. Preprint. Under review.

1 Introduction

Recently, the reasoning capabilities of large language models (LLMs) have advanced significantly, achieving impressive results in tasks requiring complex reasoning, such as mathematical problem solving, code generation, and general-purpose benchmarks [1–4]. These developments highlight the potential of LLMs to generalize and perform multi-step reasoning across domains. In the medical domain, reasoning is particularly crucial. Clinical natural language processing (NLP) tasks often require interpreting nuanced patient information, integrating knowledge from diverse sources, and making informed decisions [5–7]. More importantly, reasoning provides a valuable lens into the model's decision-making process, allowing researchers and clinicians to examine how conclusions are derived. This improves the interpretability and transparency of AI outputs, which are essential for clinical trust [8, 9]. Currently, most medical LLMs acquire reasoning capabilities through supervised fine-tuning (SFT) on chain-of-thought (CoT) datasets, often followed by reinforcement learning (RL) for further refinement. However, this pipeline heavily relies on an initial SFT stage using costly CoT data, which are either manually crafted or distilled from closed-source commercial models such as GPT-4o [10, 11]. This dependence not only incurs substantial annotation and distillation costs but also introduces scalability and accessibility challenges, as it ties model development to expensive and external resources. These limitations motivate a critical question: Can we achieve medical reasoning through minimalist rule-based RL without relying on distilled CoT data?
To address this question, we propose | https://arxiv.org/abs/2505.17952v1 |
AlphaMed, the first work designed to incentivize reasoning capability solely through minimalist rule-based RL, going beyond conventional approaches that rely on SFT with CoT data. Instead of depending on distilled CoT data supervision, AlphaMed is trained directly via simple rule-based rewards derived from multiple-choice QA datasets. Our key contributions are as follows:

• We show that minimalist rule-based RL can incentivize reasoning ability in medical LLMs without relying on distilled CoT data, achieving superior performance. We further analyze how dataset quantity, diversity, and especially informativeness impact reasoning performance. We empirically find that higher informativeness enhances reasoning performance, while less-informative data limits gains.
• We show that reasoning can be incentivized even with lower-difficulty data and further enhanced by harder examples. While high-difficulty samples benefit challenging benchmarks like MedXpert, a mix of difficulty levels is essential for robust generalization. Non-monotonic trends across benchmarks suggest that current evaluations may be insufficient to assess medical LLM reasoning.
• Building on these insights, we introduce AlphaMed, a medical LLM trained solely via minimalist rule-based RL without any SFT on distilled CoT data, and demonstrate that it achieves state-of-the-art performance across six mainstream medical QA benchmarks, outperforming models that use complex training strategies with CoT data and even surpassing larger or closed-source models such as DeepSeek-V3-671B and GPT-4o.

2 Related Work

Supervised Fine-Tuning for Reasoning in LLMs. Large language models can acquire complex reasoning skills through SFT on CoT data. For example, [12] showed that training models to generate step-by-step reasoning paths significantly improves performance on math and logic problems.
[13] scaled this approach by incorporating a broad range of CoT examples into instruction tuning across diverse tasks. [14] proposed STaR, where a model bootstraps its own reasoning traces to reduce reliance on human-annotated CoT. However, recent work [15] suggests that SFT often encourages memorization of training rationales rather than true reasoning generalization, limiting robustness in out-of-distribution or unfamiliar tasks. Moreover, obtaining high-quality CoT data is costly, requiring either expert annotations or distillation from proprietary models, posing significant challenges to scalability and adaptability [16].

Reinforcement Learning with Preference Data after SFT. InstructGPT [17] introduced reinforcement learning with human preferences (RLHF) to align model behavior with user intent. Subsequent research has shown that RL can enhance generalization [16, 18] and better capture nuanced human preferences beyond rote memorization [15]. Among RL algorithms, Proximal Policy Optimization (PPO) is widely used, but it is highly resource-intensive, requiring learned reward models that are often sensitive to noise, difficult to interpret, and occasionally misaligned with intended objectives [19]. To address these limitations, Direct Preference Optimization (DPO) [19] eliminates the need for an explicit reward model by directly optimizing over preference pairs. However, DPO still relies on high-quality preference annotations, which are particularly challenging to construct in the medical domain due to clinical ambiguity and a lack of universal agreement on what constitutes a "better" response [20]. Recently, DeepSeek-R1-Zero [21] demonstrated that reasoning behavior can be effectively elicited without CoT supervision or preference annotations, instead by leveraging final answers (e.g., multiple-choice accuracy)
as rule-based supervision signals [16, 18, 22, 23].

Open-Source Medical LLMs. Open-source medical LLMs have emerged as promising tools for domain-specific clinical reasoning, yet most remain heavily dependent on supervised data or handcrafted feedback. HuatuoGPT [24] was instruction-tuned on ChatGPT-distilled medical dialogues. BioMistral [25] adapted the Mistral architecture to biomedical question answering through continued pretraining [26] and domain-specific instruction tuning. OpenBioLLM [20] and UltraMedical [27] utilized DPO-based preference optimization, but their preference pairs were directly distilled from closed-source models, making supervision ambiguous and potentially inconsistent with expert clinical reasoning. Since human verification of each distilled example is prohibitively costly and impractical, there is no guarantee that the reasoning process reflected in the supervision is valid. HuatuoGPT-o1 [28] further incorporated PPO using a self-trained 3B reward model and relied on CoT data distilled from OpenAI o1. However, this approach is resource-intensive and tightly coupled to the quality and coverage of proprietary data, limiting its scalability and generalizability. m1 [29] also adopts SFT on distilled chain-of-thought data, where step-by-step reasoning traces are generated by an external large reasoning model, thus still relying on distilled CoT data.

3 Preliminaries

Group Relative Policy Optimization (GRPO). Given a question-answer pair $(q, a)$, the behaviour policy $\pi_{\text{old}}$ generates a set of $G$ candidate completions $\{o_i\}_{i=1}^{G}$ for each question $q$. Each response receives a scalar reward $r_i$, which may be derived from human preference comparisons or automated scoring heuristics; in this work, we use a rule-based reward. The relative quality of each response is assessed within the group through normalization.
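Concretely, the group-wise normalization can be sketched as follows (our illustration; the small epsilon guarding against an all-identical reward group is an added assumption, not from the paper):

```python
import numpy as np

def group_normalized_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: each completion's scalar reward is normalized
    against the mean and std of its group of G candidates. The resulting
    advantage is shared by every token of that completion."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)  # eps avoids division by zero
```

With binary rewards such as [1, 0, 0, 1], the two correct completions receive advantage +1 and the incorrect ones -1, so the update pushes probability mass toward above-average completions.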
The training objective is:

$$
\mathcal{J}_{\text{GRPO}}(\theta) = \mathbb{E}_{(q,a)\sim\mathcal{D},\,\{o_i\}_{i=1}^{G}\sim\pi_{\text{old}}(\cdot\mid q)}\left[\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\min\!\left(r_{i,t}(\theta)\,\hat{A}_{i,t},\ \operatorname{clip}\!\left(r_{i,t}(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_{i,t}\right)\right] \quad (1)
$$

where the group-normalized advantage $\hat{A}_{i,t}$ and the token-level importance weight $r_{i,t}(\theta)$ are defined as:

$$
\hat{A}_{i,t} = \frac{r_i - \operatorname{mean}(\{r_j\}_{j=1}^{G})}{\operatorname{std}(\{r_j\}_{j=1}^{G})}, \qquad r_{i,t}(\theta) = \frac{\pi_{\theta}(o_{i,t}\mid q,\,o_{i,<t})}{\pi_{\text{old}}(o_{i,t}\mid q,\,o_{i,<t})}.
$$

Here, $\epsilon$ is a hyperparameter controlling the tolerance for policy deviation. The clip function prevents large updates by ensuring that the ratio between the current and reference policy stays within a predefined range. Specifically, it clips the importance weight $r_{i,t}(\theta)$ to the interval $[1-\epsilon, 1+\epsilon]$, thereby stabilizing training and mitigating the risk of policy collapse. This objective encourages the model to improve token probabilities for completions with above-average rewards, while stabilizing updates via a clipped importance weight similar to PPO [30].

Rule-based Reward Modelling. To enable minimalist RL without relying on external verifiers or human-provided rewards, we adopt a simple rule-based approach consistent with [21]. This method directly evaluates the correctness of the model's output using binary feedback, eliminating the need for a separate reward model:

$$
r_i = \begin{cases} 1, & \text{if } \texttt{is\_answer\_correct}(\hat{y}_i, y) \\ 0, & \text{otherwise} \end{cases} \quad (2)
$$

Here, $y$ is the ground-truth answer and $\hat{y}_i$ denotes the model-generated prediction from the $i$-th output $o_i$. This straightforward reward mechanism provides a clear supervision signal grounded in task accuracy. By leveraging structured outputs (e.g., multiple-choice answers), we enable effective RL without manually written rationales or preference annotations.

4 AlphaMed

4.1 Training Configuration

We aim to elicit medical reasoning behavior purely through rule-based RL, without relying on SFT with CoT data or RL with rewards from external verifiers. To ensure a fair comparison with HuatuoGPT-o1 [28], we
adopt Llama3.1-8B-Instruct and Llama3.1-70B-Instruct as backbone models. All experiments are conducted under full parameter tuning with a batch size of 512, meaning each batch contains 64 QA pairs and each question generates 8 candidate answers, trained for 300 steps. We use verl [31] (https://github.com/volcengine/verl), a framework designed for rule-based RL. A simple binary reward function, defined in Eq. 2, assigns 1 if the model's response ends with a correctly formatted boxed answer matching the ground truth (e.g., \boxed{C}), and 0 otherwise. The model is optimized using the GRPO objective described in Eq. 1. We train the 8B model on 8 Nvidia A800-80G GPUs and the 70B model on 64 A800-80G GPUs.

4.2 Evaluation Configuration

Datasets. We evaluate our models on six medical QA benchmarks, using accuracy as the evaluation metric across all datasets. These include MedQA-USMLE [32] (MedQA), MedMCQA [33] (MedMCQA), PubMedQA [34] (PubMedQA), MMLU-Pro medical subsets [35] (MMLU-ProM), GPQA medical subsets [36] (GPQA-M), and the most recent and challenging large-scale dataset, MedXpertQA [37] (MedXpert). Details are provided in Sec. A.2. Based on their levels of challenge [38], we categorize MedQA, MedMCQA, and PubMedQA [32–34] as normal, while MMLU-ProM and GPQA-M [39, 36] are classified as hard, as they primarily target advanced expert-level knowledge. Finally, MedXpert [37] is designated as hard+, as the original work explicitly highlights its focus on complex clinical reasoning and expert-level decision making, positioning it as one of the most challenging benchmarks to date.

Baseline Methods. We compare against a broad range of general and medical-specific LLM baselines. General-purpose base instruct models include Qwen2.5-7B/32B/72B and Llama3.1-8B/70B. Medical-specific models cover MedLlama3, OpenBioLLM [40], MMed and MMed-S [41], Med42 [42], and UltraMedical [27], which leverage distilled preference data and RL following SFT.
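The binary reward of Eq. 2, as used in the training configuration above, can be sketched as follows. This is a hypothetical implementation: the paper only states that the response must end with a correctly formatted boxed answer, so the regex-based extraction of the last \boxed{...} is our assumption.

```python
import re

def boxed_answer_reward(response: str, ground_truth: str) -> int:
    """Rule-based reward (Eq. 2): 1 if the completion's final \\boxed{...}
    matches the gold multiple-choice label, 0 otherwise."""
    boxes = re.findall(r"\\boxed\{([^{}]*)\}", response)
    if not boxes:
        return 0  # no properly formatted boxed answer
    return int(boxes[-1].strip() == ground_truth.strip())
```

For example, a completion ending in \boxed{C} scores 1 against the gold label "C" and 0 against "B"; a completion with no boxed answer scores 0 regardless of its content.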
HuatuoGPT-o1 [28] is trained on CoT data distilled from GPT-4o using model-based RL with a large (3B) reward model. m1 [29] is similarly trained via SFT on extensive CoT distilled from DeepSeek-R1 [21].

5 Experiments

5.1 Data Curation

Initial Data Collection. Following [29], we collect the training splits of three large-scale public multiple-choice medical QA datasets: MedQA [43], MedMCQA [44], and PubMedQA [34]. We use the official training splits of all three datasets; for PubMedQA [34], only questions with definitive answer labels (i.e., A/B/C) are retained. MedQA [43] contains expert-level clinical questions from the USMLE. MedMCQA [44] includes factoid and reasoning questions from Indian medical entrance exams (AIIMS, NEET). PubMedQA [34] focuses on biomedical research question answering. Notably, its training split is automatically generated by a machine learning model that heuristically converts biomedical research article abstracts into yes/no questions and assigns answers based on negation cues. The dataset statistics are summarized in Sec. A.1.

Quantifying Data Difficulty. To quantify question difficulty, we perform inference using Llama3.1-8B-Instruct [45]. For each question, we generate five reasoning completions with the following prompt: “Please reason step by step, and put the final answer in \boxed{}". We then calculate the proportion of correct predictions among the five outputs, which serves as a proxy for the question's difficulty. Based on this proportion, we categorize questions into six difficulty levels (L1–L6). Specifically, L1 includes questions where all five completions are correct, L2 where four are correct, and so on, with L6 representing questions where all five completions are incorrect. The difficulty level distribution of each training set is shown in Tab. 2.

(Footnote: verl, https://github.com/volcengine/verl)

Figure 1: Performance comparison on six medical QA benchmarks. Our models are initialized with Llama3.1-8B-Instruct [45] and trained using minimalist rule-based RL on one of three balanced subsets: MedQA-Sub, MedMCQA-Sub, or PubMedQA-Sub (shown as blue, green, and orange bars, respectively). Despite using only 1,200 examples per subset, all variants of our model achieve substantial improvements over the base Llama3.1-8B-Instruct and match or surpass the strong baseline HuatuoGPT-o1-8B across all benchmarks.

Figure 2: Dataset analysis and training dynamics. Left: ratio of effective queries over training steps; each curve corresponds to models trained on a specific subset. Middle: training reward per step for models trained on each subset. Right: distribution of question lengths (number of tokens) in MedQA, MedMCQA, and PubMedQA [43, 44, 34].

5.2 RQ1: Can Minimalist RL Incentivize Medical Reasoning Without Distilled-CoT SFT?

To investigate whether minimalist rule-based RL can incentivize medical reasoning in LLMs without relying on SFT with distilled CoT data, we conduct a pilot study by sampling 200 examples from each difficulty level to construct three balanced subsets (1,200 samples each) from the three public medical QA datasets: MedQA-Sub, MedMCQA-Sub, and PubMedQA-Sub. We use Llama3.1-8B-Instruct as the backbone model and train it separately on each subset using minimalist RL. As shown in Fig. 1, all models trained on these subsets achieve substantial gains over the original backbone across all six benchmarks (e.g., +15.5% on MedQA, +8.8% on MedXpert).
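The L1–L6 bucketing described above is a direct function of the number of correct completions out of five. A minimal sketch (the function name is ours, not the paper's):

```python
def difficulty_level(correct_flags) -> str:
    """Map the outcomes of the five sampled completions to a difficulty
    level: L1 = all five correct (easiest), L2 = four correct, ...,
    L6 = none correct (hardest)."""
    assert len(correct_flags) == 5, "the paper samples exactly five completions"
    n_correct = sum(1 for f in correct_flags if f)
    return f"L{6 - n_correct}"
```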
Remarkably, all variants trained on different subsets perform comparably to, or even surpass, HuatuoGPT-o1-8B [46], a strong baseline trained via SFT on CoT data distilled from GPT-4o [47] and further fine-tuned with RL using a 3B reward model. Notably, on MedXpert [37], the most challenging benchmark, all three variants outperform HuatuoGPT-o1-8B [46]. These results demonstrate that reasoning capability can be effectively incentivized through minimalist RL on small-scale, low-cost multiple-choice QA data, without relying on SFT with distilled CoT data, and can even outperform models trained with more complex strategies.

Surprisingly, multistep reasoning (e.g., Step 1..., Step 2...; see Fig. 11, 12, 13) spontaneously emerges in the model's output, deriving the final answer through sequential analysis, despite the model being supervised only on the final choice, without intermediate reasoning traces such as distilled CoT data [29, 46]. This emergent behavior shows that minimalist rule-based RL not only boosts performance but also encourages structured reasoning, offering valuable interpretability into the model's decision-making.

Performance Variation and Training Dynamics Across Subsets. We observe clear performance differences among training subsets, consistently ranking as MedQA-Sub > MedMCQA-Sub > PubMedQA-Sub. To understand this variation, we explore the training dynamics of models trained on each subset. As depicted in Fig. 2 (left), following [16], the ratio of effective queries is computed as

1 − (#solved_all + #solved_none) / #unique_queries,

where “solved all" and “solved none" denote queries in a batch for which all responses are correct or all are incorrect, respectively. Models trained on PubMedQA-Sub exhibit a rapid decline in the effective query ratio, indicating premature saturation and a reduction in effective samples per batch. The training reward in Fig. 2 (middle) further supports this: the PubMedQA-Sub variant starts with a higher initial reward that increases rapidly, suggesting the data is easy to learn at the start but quickly saturates after about 20 steps. In contrast, the MedQA-Sub and MedMCQA-Sub models improve steadily throughout training.

Figure 3: Effect of data quantity. Average accuracy across six medical QA benchmarks as the number of samples per level increases from 200 to 400, growing the total subset size from 1,200 to 2,400 examples. Scaling MedQA-Sub and MedMCQA-Sub leads to consistent performance gains, highlighting the value of informative data. In contrast, PubMedQA-Sub shows no improvement, reflecting the limitations of low-informative data sources.

Figure 4: Effect of data diversity. Average accuracy across six medical QA benchmarks when models are trained individually on single or combined subsets. Adding MedMCQA-Sub to MedQA-Sub boosts performance, while further adding PubMedQA-Sub reduces it, suggesting that less informative data can negate the benefits of increased diversity.

Dataset Informativeness as a Key Driver. To further investigate these dynamics, we analyze the question length distributions in the source datasets of each subset, as shown in Fig. 2 (right). Notably, MedQA [43] exhibits a significantly longer question length distribution than MedMCQA [44] and PubMedQA [34]; this ordering closely matches the observed performance of model variants trained on the respective subsets.
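The effective-query ratio from [16] used in Fig. 2 (left) can be computed directly from per-query reward groups. A sketch, assuming binary rewards (the function and variable names are ours):

```python
def effective_query_ratio(reward_groups) -> float:
    """reward_groups: one list of binary rewards per unique query in a batch.
    Queries whose responses are all correct ('solved all') or all incorrect
    ('solved none') yield zero group-normalized advantage and hence no
    learning signal, so they are counted as ineffective."""
    solved_all = sum(1 for g in reward_groups if all(r == 1 for r in g))
    solved_none = sum(1 for g in reward_groups if all(r == 0 for r in g))
    return 1.0 - (solved_all + solved_none) / len(reward_groups)
```

A rapidly shrinking ratio, as observed for PubMedQA-Sub, means ever fewer queries per batch still contribute gradient signal.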
These differences are linked to dataset construction mechanisms: PubMedQA [34] is automatically curated from biomedical literature, often resulting in noisier and less informative questions; MedMCQA [44] is based on human-authored medical school entrance exams, providing more reliable and informative samples; and MedQA [43] is sourced from the USMLE, a challenging licensing exam, and thus contains the most informative and well-structured questions. Altogether, our findings suggest that question length serves as a practical proxy for dataset informativeness in medical QA. High-informativeness, exam-certified data provide more stable and effective learning signals for minimalist RL, whereas noisy, automatically curated data offer lower informativeness and thus hinder the acquisition of reasoning ability.

Finding 1.1: Minimalist rule-based RL enables medical reasoning in LLMs without reliance on SFT with distilled CoT data.

Finding 1.2: Dataset informativeness is critical for training success. LLMs trained on low-informativeness or noisy data exhibit degraded performance. Question length serves as a practical proxy for informativeness in medical QA.

5.3 RQ2: Impact of Dataset Quantity and Diversity

Effect of Dataset Quantity. To investigate the effect of training data size, we increase the number of samples per difficulty level from 200 to 400 for each of the three subsets, increasing the total number of samples in each subset from 1,200 to 2,400. As shown in Fig. 3, we report the average accuracy across six benchmarks. Scaling MedQA-Sub improves accuracy from 58.96% to 59.88%, and MedMCQA-Sub improves from 57.41% to 58.76%, demonstrating that increasing high-informativeness data benefits model performance. In contrast, scaling PubMedQA-Sub yields no improvement (55.71% → 55.58%), suggesting that adding more low-informativeness or noisy samples may degrade performance rather than enhance it.

Effect of Dataset Diversity. We further examine the effect of dataset diversity by progressively combining subsets. As shown in Fig. 4, adding MedMCQA-Sub to MedQA-Sub further improves performance, highlighting the benefit of combining diverse and informative datasets. However, incorporating PubMedQA-Sub reverses the upward trend and leads to a decline in performance, indicating that noisy and less informative data not only fail to contribute but may also harm reasoning ability.

Figure 5: Performance on six benchmarks when training on subsets with increasing difficulty levels (L1 to L6). Each blue dot represents a separately trained model on a subset that includes all data up to the indicated difficulty level; new data are incorporated only through separate training runs, not incrementally during training. While performance on MedXpert [37] increases consistently, trends on other benchmarks vary. Final models trained on the full set (L1–L6) generally achieve comparable or superior performance to HuatuoGPT-o1-8B [46].

Figure 6: Performance on six benchmarks when training with distinct difficulty groups: easy (L1+L2), medium (L3+L4), and hard (L5+L6). While harder training data improves MedXpert [37] accuracy, performance on other benchmarks declines, suggesting that relying solely on difficult samples may impair general reasoning ability.

Figure 7: Comparison of AlphaMed(8B) with prior models on MMLU-ProM [35] and MedXpert [37]. Despite its smaller scale and use of minimalist RL, AlphaMed(8B) outperforms the larger model QwQ-32B [48] and other baselines.
Figure 8: AlphaMed(70B) achieves superior performance over Claude-3.5-Sonnet [49], GPT-4o [47], and DeepSeek-V3 (671B) [50] on MMLU-ProM [35] and MedXpert [37], showcasing its strong reasoning ability.

Finding 2: Performance improves with increased data quantity and diversity only when the additional samples are informative; low-quality data harms the learning of reasoning ability.

5.4 RQ3: Impact of Dataset Quality

We analyze how increasing training difficulty affects performance across six benchmarks, as shown in Fig. 5. MedQA, MedMCQA, and PubMedQA [43, 44, 34] exhibit inverse U-shaped trends: performance peaks with moderate difficulty (L1–L4) and declines with harder samples (L5–L6), suggesting diminishing returns from high-difficulty data. In contrast, MMLU-ProM [39] and GPQA-M [36] show oscillating patterns, while MedXpert [37] improves steadily with increasing difficulty, highlighting the value of harder samples for complex tasks. To validate this, we train models on three difficulty groups (easy: L1+L2, medium: L3+L4, hard: L5+L6; Fig. 6). On MedXpert [37], models trained on hard samples perform best, confirming their role in promoting advanced reasoning. For other benchmarks, training on easy and medium levels yields better generalization, while hard-only training underperforms.

Emerging Reasoning Capability from Simple Data, Indicating Benchmark Limits. Interestingly, models trained only on L1+L2 (2,400 samples in total) already match or surpass HuatuoGPT-o1-8B [46] on several benchmarks. As shown in Fig. 5, even on MedXpert, training with L1 data alone exceeds HuatuoGPT-o1-8B [46], with further gains from adding more levels, indicating that reasoning can emerge from simple data. These findings underscore the
importance of balanced training difficulty to support broad generalization. They also reveal a potential pitfall: if high benchmark scores can be achieved without exposure to difficult samples, such scores may not reflect genuine reasoning ability, raising concerns about the adequacy of current benchmark designs.

Finding 3.1: Mixed-difficulty training is crucial for generalizable reasoning.

Finding 3.2: Current benchmarks may be insufficient to capture true reasoning progress.

5.5 Main Results

Building on the above findings, which highlight the importance of dataset quantity, diversity, informativeness, and mixed difficulty for incentivizing reasoning, we construct our final training set accordingly. Specifically, we include all samples from MedQA [43] due to its high informativeness, and sample 1,600 QA pairs from each difficulty level of MedMCQA [44] to match the overall scale of MedQA [43]. PubMedQA [34] is excluded due to its limited informativeness and the performance degradation observed when it is included, as discussed in RQ1 and RQ2. The final training set comprises 19,178 QA pairs. This dataset is used to train our final models: AlphaMed(8B), based on Llama3.1-8B-Instruct, and AlphaMed(70B), based on Llama3.1-70B-Instruct, both optimized using minimalist rule-based RL. Since MedQA [43] and MedMCQA [44] are used for training, we treat PubMedQA [34], MMLU-ProM [39], GPQA-M [51], and MedXpert [37] as out-of-domain (OOD) benchmarks. We present the full results in Tab. 1. Across both model scales, AlphaMed consistently outperforms all compared methods on both in-domain and OOD benchmarks, using only minimalist rule-based RL and multiple-choice QA supervision.
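The curation recipe just described (keep all of MedQA, sample 1,600 QA pairs per MedMCQA difficulty level, exclude PubMedQA) can be sketched as follows. The data structures, function name, and seeding are our assumptions for illustration:

```python
import random

def build_final_training_set(medqa, medmcqa_by_level, per_level=1600, seed=0):
    """medqa: list of QA pairs (kept in full, due to high informativeness);
    medmcqa_by_level: dict mapping 'L1'..'L6' to lists of QA pairs.
    PubMedQA is deliberately excluded, per the paper's RQ1/RQ2 findings."""
    rng = random.Random(seed)
    data = list(medqa)  # keep every MedQA sample
    for level in sorted(medmcqa_by_level):
        pool = medmcqa_by_level[level]
        # sample up to per_level QA pairs from this difficulty bucket
        data.extend(rng.sample(pool, min(per_level, len(pool))))
    return data
```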
Remarkably, this advantage holds even against models trained with more complex strategies [46, 27], including SFT on distilled CoT data [46, 27, 29] and methods enhanced with test-time scaling [29]. Notably, AlphaMed(8B) surpasses the larger reasoning model QwQ-32B [48] on challenging OOD benchmarks, as shown in Fig. 7. At the 70B scale, AlphaMed(70B) outperforms even closed-source models such as GPT-4o [47] and Claude-3.5-Sonnet [49], as well as the open-source DeepSeek-V3 (671B parameters) [50], as shown in Fig. 8. These results show that minimalist rule-based RL, trained with a well-constructed multiple-choice QA dataset, enables effective and scalable medical reasoning in LLMs without relying on distilled CoT supervision.

Table 1: Combined performance of models on six medical QA benchmarks with varying levels of challenge. MedQA and MedMCQA are in-domain; the remaining benchmarks are out-of-domain. Challenge levels: Normal (MedQA, MedMCQA, PubMedQA), Hard (MMLU-ProM, GPQA-M), Hard+ (MedXpert). m1 denotes models that use test-time scaling during inference. +: using CoT prompting during inference; †: trained with distilled CoT data from stronger models (e.g., GPT-4o); ‡: trained with external datasets beyond MedQA and MedMCQA; ♢: trained via RL with verifier reward models or distilled preference data from powerful models (e.g., GPT-4o). AlphaMed (Ours) is trained solely with minimalist rule-based RL on multiple-choice QA, without any SFT on distilled CoT data, preference data, or rewards from verifiers.

| Model | MedQA | MedMCQA | PubMedQA | MMLU-ProM | GPQA-M | MedXpert |
|---|---|---|---|---|---|---|
| < 10B LLMs | | | | | | |
| Llama-3.1-8B-Instruct | 58.72 | 56.21 | 75.21 | 58.74 | 42.73 | 13.02 |
| Qwen2.5-7B-Instruct | 61.51 | 56.56 | 71.30 | 61.17 | 42.56 | 12.15 |
| Qwen2.5-7B-Instruct+ | 64.49 | 56.11 | 72.60 | 62.15 | 52.56 | 13.18 |
| MedLlama3-8B-v1 | 55.07 | 34.74 | 52.70 | 27.43 | 30.77 | 11.04 |
| MedLlama3-8B-v2 | 59.39 | 59.34 | 75.50 | 55.11 | 36.41 | 13.46 |
| MMed-8B†‡ | 54.28 | 52.71 | 63.40 | 48.27 | 34.87 | 13.73 |
| MMedS-8B†‡ | 57.19 | 47.29 | 77.50 | 33.55 | 22.05 | 17.39 |
| MMed-8B-EnIns†‡ | 60.33 | 58.09 | 63.80 | 51.60 | 45.90 | 18.56 |
| Med42-8B‡ | 59.78 | 56.35 | 76.00 | 55.64 | 48.21 | 14.63 |
| OpenBioLLM-8B†‡♢ | 55.30 | 54.63 | 70.10 | 49.32 | 41.03 | 14.29 |
| UltraMedical-8B-3†‡♢ | 71.09 | 59.22 | 71.00 | 61.50 | 50.00 | 15.25 |
| UltraMedical-8B-3.1†‡♢ | 75.73 | 63.78 | 79.20 | 64.30 | 48.72 | 17.39 |
| HuatuoGPT-o1-8B†‡♢ | 72.60 | 60.40 | 79.20 | 63.71 | 55.38 | 16.84 |
| m1-7B†‡ | 75.81 | 62.54 | 75.80 | 65.86 | 53.08 | 19.81 |
| AlphaMed(8B) | 76.19 | 64.47 | 80.40 | 66.67 | 58.44 | 22.14 |
| > 10B LLMs | | | | | | |
| Llama-3.1-70B-Instruct | 78.42 | 72.53 | 78.52 | 74.50 | 55.73 | 21.32 |
| QwQ-32B | 78.62 | 69.71 | 77.85 | 65.23 | 56.92 | 21.05 |
| Qwen2.5-32B-Instruct | 75.26 | 64.83 | 68.00 | 74.72 | 63.85 | 13.87 |
| Qwen2.5-32B-Instruct+ | 74.86 | 64.33 | 68.90 | 74.72 | 64.87 | 14.56 |
| Qwen2.5-72B-Instruct | 74.55 | 66.60 | 70.80 | 66.06 | 62.05 | 14.91 |
| Qwen2.5-72B-Instruct+ | 76.43 | 66.15 | 71.30 | 69.77 | 63.85 | 19.65 |
| Med42-70B‡ | 51.14 | 62.28 | 78.10 | 54.53 | 50.77 | 16.29 |
| OpenBioLLM-70B†‡♢ | 75.10 | 74.23 | 79.30 | 71.92 | 50.77 | 21.33 |
| UltraMedical-70B-3†‡♢ | 83.90 | 72.94 | 80.00 | 73.94 | 58.72 | 21.67 |
| HuatuoGPT-o1-70B†‡♢ | 83.30 | 73.60 | 80.60 | 76.09 | 66.67 | 26.36 |
| m1-32B†‡ | 83.50 | 67.34 | 77.60 | 77.94 | 66.67 | 25.53 |
| AlphaMed(70B) | 87.52 | 75.09 | 80.90 | 79.56 | 77.46 | 32.56 |

6 Conclusion

We present AlphaMed, the first work to demonstrate that reasoning capabilities can emerge solely through minimalist rule-based RL, without relying on SFT with distilled CoT data. By leveraging only multiple-choice QA datasets, AlphaMed achieves state-of-the-art performance across six diverse and challenging medical QA benchmarks, surpassing models trained with conventional SFT+RL pipelines, and even outperforming closed-source models (e.g., GPT-4o [47]). Through comprehensive data-centric analyses, we show that reasoning ability can be effectively incentivized by selecting data based on informativeness. We further find that increasing the number of informative training samples improves performance, and that varying difficulty levels contribute differently across benchmarks, underscoring the importance of mixing difficulty to promote generalizable reasoning.
A well-curated dataset with high informativeness and diverse difficulty levels is key to advancing reasoning, without requiring handcrafted rationales or distilled data from closed models. Our findings also reveal a critical caveat: while challenging benchmarks benefit from harder training samples, others exhibit mixed or plateauing trends, suggesting that existing benchmarks may be insufficient to evaluate progress in reasoning ability. This highlights the need for more challenging, reasoning-oriented benchmarks. Altogether, AlphaMed not only establishes a strong medical LLM, but also offers insights into how models reach final predictions through emergent reasoning, encouraging further exploration of interpretable systems in medical NLP.

References

[1] Y. Qin, X. Li, H. Zou, Y. Liu, S. Xia, Z. Huang, Y. Ye, W. Yuan, H. Liu, Y. Li et al., “O1 replication journey: A strategic progress report – part 1," arXiv preprint arXiv:2410.18982, 2024.

[2] Z. Zeng, Q. Cheng, Z. Yin, B. Wang, S. Li, Y. Zhou, Q. Guo, X. Huang, and X. Qiu, “Scaling of search and learning: A roadmap to reproduce o1 from reinforcement learning perspective," arXiv preprint arXiv:2412.14135, 2024.

[3] J. Wang, M. Fang, Z. Wan, M. Wen, J. Zhu, A. Liu, Z. Gong, Y. Song, L. Chen, L. M. Ni et al., “Openr: An open source framework for advanced reasoning with large language models," arXiv preprint arXiv:2410.09671, 2024.

[4] M. Y. Guan, M. Joglekar, E. Wallace, S. Jain, B. Barak, A. Heylar, R. Dias, A. Vallone, H. Ren, J.
Wei, H. W. Chung, S. Toyer, J. Heidecke, A. Beutel, and A. Glaese, “Deliberative alignment: Reasoning enables safer language models," OpenAI Blog, 2024. [Online]. Available: https://openai.com/index/deliberative-alignment/

[5] K. Saab, T. Tu, W.-H. Weng, R. Tanno, D. Stutz, E. Wulczyn, F. Zhang, T. Strother, C. Park, E. Vedadi et al., “Capabilities of gemini models in medicine," arXiv preprint arXiv:2404.18416, 2024.

[6] J. Chen, C. Gui, A. Gao, K. Ji, X. Wang, X. Wan, and B. Wang, “Cod, towards an interpretable medical agent using chain of diagnosis," arXiv preprint arXiv:2407.13301, 2024.

[7] V. L. Patel, J. F. Arocha, and J. Zhang, “Thinking and reasoning in medicine," The Cambridge Handbook of Thinking and Reasoning, vol. 14, pp. 727–750, 2005.

[8] S. Xu, Y. Zhou, Z. Liu, Z. Wu, T. Zhong, H. Zhao, Y. Li, H. Jiang, Y. Pan, J. Chen et al., “Towards next-generation medical agent: How o1 is reshaping decision-making in medical scenarios," arXiv preprint arXiv:2411.14461, 2024.

[9] M.-H. Temsah, A. Jamal, K. Alhasan, A. A. Temsah, and K. H. Malki, “Openai o1-preview vs. chatgpt in healthcare: A new frontier in medical ai reasoning," Cureus, vol. 16, no. 10, p. e70640, 2024.

[10] Y. Xie, J. Wu, H. Tu, S. Yang, B. Zhao, Y. Zong, Q. Jin, C. Xie, and Y. Zhou, “A preliminary study of o1 in medicine: Are we closer to an ai doctor?" arXiv preprint arXiv:2409.15277, 2024.

[11] J. Chen, X. Wang, K. Ji, A. Gao, F. Jiang, S. Chen, H. Zhang, D. Song, W. Xie, C. Kong et al., “Huatuogpt-ii, one-stage training for medical adaption of llms," arXiv preprint arXiv:2311.09774, 2023.

[12] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, and D. Zhou, “Chain-of-thought prompting elicits reasoning in large language models," Advances in Neural Information Processing Systems, vol. 35, pp. 24824–24837, 2022.

[13] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, Y. Li, X. Wang, J. Wei et al.
, “Scaling instruction-finetuned language models," arXiv preprint arXiv:2210.11416, 2022.

[14] E. Zelikman, Y. Wu, J. Mu, and N. D. Goodman, “Star: Bootstrapping reasoning with reasoning," Advances in Neural Information Processing Systems, vol. 35, pp. 15476–15488, 2022.

[15] T. Chu, Y. Zhai, J. Yang, S. Tong, S. Xie, D. Schuurmans, Q. V. Le, S. Levine, and Y. Ma, “Sft memorizes, rl generalizes: A comparative study of foundation model post-training," arXiv preprint arXiv:2501.17161, 2025.

[16] H. Wang, C. Qu, Z. Huang, W. Chu, F. Lin, and W. Chen, “Vl-rethinker: Incentivizing self-reflection of vision-language models with reinforcement learning," arXiv preprint arXiv:2504.08837, 2025.

[17] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al., “Training language models to follow instructions with human feedback," Advances in Neural Information Processing Systems, vol. 35, pp. 27730–27744, 2022.

[18]
H. Wang, L. Li, C. Qu, F. Zhu, W. Xu, W. Chu, and F. Lin, “Learning autonomous code integration for math language models," arXiv preprint arXiv:2502.00691, 2025.

[19] R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn, “Direct preference optimization: Your language model is secretly a reward model," Advances in Neural Information Processing Systems, vol. 36, 2023.

[20] A. Ura, “Openbiollm-70b: Advancing open-source biomedical llms with direct preference optimization," Hugging Face Blog, 2024, available at https://huggingface.co/blog/aaditya/openbiollm.

[21] D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi et al., “Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning," arXiv preprint arXiv:2501.12948, 2025.

[22] H. Zeng, D. Jiang, H. Wang, P. Nie, X. Chen, and W. Chen, “Acecoder: Acing coder rl via automated test-case synthesis," arXiv preprint arXiv:2502.01718, 2025.

[23] J. Pan, C. Liu, J. Wu, F. Liu, J. Zhu, H. B. Li, C. Chen, C. Ouyang, and D. Rueckert, “Medvlm-r1: Incentivizing medical reasoning capability of vision-language models (vlms) via reinforcement learning," arXiv preprint arXiv:2502.19634, 2025.

[24] H. Zhang, J. Chen, F. Jiang, F. Yu, Z. Chen, J. Li, G. Chen, X. Wu, Z. Zhang, Q. Xiao et al., “Huatuogpt, towards taming language model to be a doctor," arXiv preprint arXiv:2305.15075, 2023.

[25] Y. Labrak, A. Bazoge, E. Morin, P.-A. Gourraud, M. Rouvier, and R. Dufour, “Biomistral: A collection of open-source pretrained large language models for medical domains," arXiv preprint arXiv:2402.10373, 2024.

[26] Z. Ke, Y. Shao, H. Lin, T. Konishi, G. Kim, and B. Liu, “Continual pre-training of language models," arXiv preprint arXiv:2302.03241, 2023.

[27] K. Zhang, S. Zeng, E. Hua, N. Ding, Z.-R. Chen, Z. Ma, H. Li, G. Cui, B. Qi, X. Zhu et al.
, “Ultramedical: Building specialized generalists in biomedicine," Advances in Neural Information Processing Systems, vol. 37, pp. 26045–26081, 2024.

[28] J. Chen, Z. Cai, K. Ji, X. Wang, W. Liu, R. Wang, J. Hou, and B. Wang, “Huatuogpt-o1: Towards medical complex reasoning with llms," arXiv preprint arXiv:2412.18925, 2024.

[29] X. Huang, J. Wu, H. Liu, X. Tang, and Y. Zhou, “m1: Unleash the potential of test-time scaling for medical reasoning with large language models," arXiv preprint arXiv:2504.00869, 2025.

[30] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.

[31] G. Sheng, C. Zhang, Z. Ye, X. Wu, W. Zhang, R. Zhang, Y. Peng, H. Lin, and C. Wu, “Hybridflow: A flexible and efficient rlhf framework," arXiv preprint arXiv:2409.19256, 2024.

[32] D. Jin, E. Pan, N. Oufattole, W.-H. Weng, H. Fang, and P. Szolovits, “What disease does this patient have? a large-scale open domain question answering dataset from medical exams," Applied Sciences, vol. 11, no. 14, p. 6421, 2021.

[33] A. Pal, L. K. Umapathi, and M. Sankarasubbu, “Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering," in Conference on Health, Inference, and Learning
. PMLR, 2022, pp. 248–260.

[34] Q. Jin, B. Dhingra, Z. Liu, W. W. Cohen, and X. Lu, “Pubmedqa: A dataset for biomedical research question answering," arXiv preprint arXiv:1909.06146, 2019.

[35] Y. Wang, X. Ma, G. Zhang, Y. Ni, A. Chandra, S. Guo, W. Ren, A. Arulraj, X. He, Z. Jiang et al., “Mmlu-pro: A more robust and challenging multi-task language understanding benchmark," in The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024.

[36] D. Rein, B. L. Hou, A. C. Stickland, J. Petty, R. Y. Pang, J. Dirani, J. Michael, and S. R. Bowman, “Gpqa: A graduate-level google-proof q&a benchmark," in First Conference on Language Modeling, 2024.

[37] Y. Zuo, S. Qu, Y. Li, Z. Chen, X. Zhu, E. Hua, K. Zhang, N. Ding, and B. Zhou, “Medxpertqa: Benchmarking expert-level medical reasoning and understanding," arXiv preprint arXiv:2501.18362, 2025.

[38] X. Tang, D. Shao, J. Sohn, J. Chen, J. Zhang, J. Xiang, F. Wu, Y. Zhao, C. Wu, W. Shi et al., “Medagents-bench: Benchmarking thinking models and agent frameworks for complex medical reasoning," arXiv preprint arXiv:2503.07459, 2025.

[39] Y. Wang, X. Ma, G. Zhang, Y. Ni, A. Chandra, S. Guo, W. Ren, A. Arulraj, X. He, Z. Jiang et al., “Mmlu-pro: A more robust and challenging multi-task language understanding benchmark," arXiv preprint arXiv:2406.01574, 2024.

[40] M. S. A. Pal and M. Sankarasubbu, “Openbiollms: Advancing open-source large language models for healthcare and life sciences," 2024.

[41] P. Qiu, C. Wu, X. Zhang, W. Lin, H. Wang, Y. Zhang, Y. Wang, and W. Xie, “Towards building multilingual language model for medicine," Nature Communications, vol. 15, no. 1, p. 8384, 2024.

[42] C. Christophe, P. K. Kanithi, P. Munjal, T. Raha, N. Hayat, R. Rajan, A. Al-Mahrooqi, A. Gupta, M. U. Salman, G. Gosal et al., “Med42 – evaluating fine-tuning strategies for medical llms: Full-parameter vs.
parameter-efficient approaches," arXiv preprint arXiv:2404.14779, 2024.

[43] D. Jin, E. Pan, N. Oufattole, W.-H. Weng, H. Fang, and P. Szolovits, “What disease does this patient have? a large-scale open domain question answering dataset from medical exams," Applied Sciences, vol. 11, no. 14, p. 6421, 2021.

[44] A. Pal, L. K. Umapathi, and M. Sankarasubbu, “Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering," in Conference on Health, Inference, and Learning. PMLR, 2022, pp. 248–260.

[45] A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Yang, A. Fan et al., “The llama 3 herd of models," arXiv preprint arXiv:2407.21783, 2024.

[46] J. Chen, Z. Cai, K. Ji, X. Wang, W. Liu, R. Wang, J. Hou, and B. Wang, “Huatuogpt-o1, towards medical complex reasoning with llms," arXiv preprint arXiv:2412.18925, 2024.

[47] A. Hurst, A. Lerer, A. P. Goucher, A. Perelman, A. Ramesh, A. Clark, A. Ostrow, A. Welihinda, A. Hayes, A. Radford et al., “Gpt-4o system card," arXiv preprint arXiv:2410.21276, 2024.

[48] Q. Team,
“Qwq: Reflect deeply on the boundaries of the unknown," November 2024. [Online]. Available: https://qwenlm.github.io/blog/qwq-32b-preview/

[49] Anthropic, “The claude 3 model family: Opus, sonnet, haiku," https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf, 2024.

[50] A. Liu, B. Feng, B. Xue, B. Wang, B. Wu, C. Lu, C. Zhao, C. Deng, C. Zhang, C. Ruan et al., “Deepseek-v3 technical report," arXiv preprint arXiv:2412.19437, 2024.

[51] D. Rein, B. L. Hou, A. C. Stickland, J. Petty, R. Y. Pang, J. Dirani, J. Michael, and S. R. Bowman, “Gpqa: A graduate-level google-proof q&a benchmark," arXiv preprint arXiv:2311.12022, 2023.

[52] Q. Team, “Qwen2.5: A party of foundation models," September 2024. [Online]. Available: https://qwenlm.github.io/blog/qwen2.5/

Limitations and Future Work

Although AlphaMed achieves impressive results on multiple-choice QA tasks, its capabilities remain constrained by the closed-form nature of these benchmarks. Our evaluations are primarily conducted on existing mainstream medical QA datasets, all of which are close-ended and may not fully capture the spectrum of real-world clinical reasoning. Given the current research landscape, it is challenging to systematically assess our model's performance on open-ended QA tasks, which not only lack well-established benchmarks but are also inherently subjective, often requiring human evaluation for meaningful assessment. In future work, we aim to design and release open-ended benchmarks that involve human-in-the-loop evaluation, enabling more comprehensive and nuanced assessments of reasoning and decision-making in medical LLMs.

Broader Impact

This work demonstrates that the reasoning capability of medical LLMs can be effectively incentivized using only multiple-choice QA data with minimalist rule-based RL, removing the need for SFT on costly distilled CoT data.
By eliminating reliance on manual annotation and closed-source supervision, our approach substantially reduces the human effort and resources required for developing high-performing clinical models. However, the emergent reasoning processes in LLMs are inherently difficult to evaluate, as there is often no single “ground truth" reasoning path, especially in medicine, where multiple valid clinical justifications may exist for a single decision. Nonetheless, exposing these intermediate reasoning steps provides an important opportunity to observe and audit model behavior, ultimately encouraging the development of more transparent and trustworthy medical LLMs.

A Appendix

A.1 Difficulty Level Distribution

To explore how the difficulty level of training data affects model performance, we annotate each sample by its response consistency across five inference passes of Llama3.1-8B-Instruct [45]. Specifically, L1 denotes samples where the model answers all attempts correctly (easy), while L6 includes those where all predictions are incorrect (hard). Intermediate levels (L2–L5) indicate varying degrees of partial correctness. Tab. 2 summarizes the distribution across MedQA, MedMCQA, and PubMedQA.

Table 2: Difficulty level distribution. L1 indicates samples where Llama3.1-8B-Instruct [45] predicts correctly in all 5 inference attempts (easiest), while L6 corresponds to samples where all predictions are incorrect (hardest). Intermediate levels (L2–L5) reflect partial correctness across attempts.

| Dataset | Total | L1 | L2 | L3 | L4 | L5 | L6 |
|---|---|---|---|---|---|---|---|
| MedQA | 10,178 | 1,970 | 1,471 | 934 | 697 | 713 | 4,393 |
| MedMCQA | 182,822 | 63,292 | 25,736 | 14,498 | 9,922 | 10,088 | 59,286 |
| PubMedQA | 211,268 | 97,790 | 41,604 | 18,596 | 10,759 | 9,217 | 33,303 |

A.2 Details of Evaluation Datasets

To thoroughly assess performance across varying levels of challenge, we evaluate
on six medical QA benchmarks, grouped by challenge level (dataset sources: MedQA, https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options-hf; MedMCQA, https://huggingface.co/datasets/openlifescienceai/medmcqa; PubMedQA, https://huggingface.co/datasets/qiaojin/PubMedQA).

Normal challenge level

• MedQA [43]: A benchmark derived from US medical licensing exam questions, assessing clinical knowledge across a wide range of topics. Evaluation is based on the standard test split.

• MedMCQA [44]: A medical QA dataset based on entrance exams, designed to test foundational medical knowledge through multiple-choice questions. The official test split is used.

• PubMedQA [34]: A biomedical question answering dataset where models choose from three fixed options (yes, no, or maybe) based on associated research abstracts, emphasizing factual understanding in biomedical literature. The official test split is used.

Hard challenge level

• MMLU-ProM [39]: The medical-category subset of a broad multitask benchmark, focusing on professional-level medicine and related domains. Evaluation is conducted using the standard split established in [46].

• GPQA-M [36]: The biomedical subset of a graduate-level QA benchmark, featuring expert-curated questions intentionally designed to resist superficial retrieval and demand deep analytical reasoning. The evaluation follows the split from [46].

Hard+ challenge level

• MedXpert [37]: A challenging benchmark designed to assess expert-level medical knowledge, clinical understanding, and complex reasoning. It covers diverse specialties and body systems, incorporates board-style exam questions, and is curated through expert review to ensure high difficulty, accuracy, and relevance to real-world medical decision-making.

A.3 Effect of LLM Backbones

To assess the generality of our proposed training pipeline and data design, we further apply the same minimalist rule-based RL approach, originally used for Llama3.1-8B-Instruct, to Qwen2.5-7B-Instruct [52].
After training, the resulting AlphaMed(7B) model achieves consistent improvements across all six benchmarks, as shown in Fig. 9. Notably, the gains are particularly substantial on the more challenging datasets, MMLU-ProM [35], GPQA-M [36], and MedXpert [37], demonstrating the robustness of our training strategy in enhancing medical reasoning. These results demonstrate that minimalist rule-based RL can incentivize reasoning capabilities and boost performance, exhibiting robustness across different backbone models.

Figure 9: Performance comparison across six medical QA benchmarks. AlphaMed(7B) is initialized from Qwen2.5-7B-Instruct [52] and trained using our constructed training set and minimalist rule-based RL pipeline. It achieves consistent improvements over the base model on all benchmarks.

A.4 Success on a Small LLM

To further evaluate the effectiveness of our minimalist RL pipeline, we apply it to a small language model, Qwen2.5-3B-Instruct [52]. As shown in Fig. 10, our approach consistently improves performance across all six medical benchmarks, including substantial gains on MedQA (+11.55%), GPQA-M (+19.19%), and MedXpert (+4.10%). These results demonstrate that our RL framework can effectively incentivize reasoning capabilities even in smaller-scale models, and is not limited to large foundation models.

Figure 10: Performance comparison across six medical QA benchmarks. AlphaMed(3B) is initialized from Qwen2.5-3B-Instruct [52] and trained with our constructed dataset using a minimalist rule-based RL pipeline. It achieves consistent gains over the base model.

A.5 Qualitative Results

We present three examples predicted by our model trained with minimalist RL, demonstrating interpretable step-by-step clinical reasoning across diverse case types. In Fig. 11, the model
correctly identifies inappropriate and potentially harmful options (e.g., use of NOACs in patients with mechanical heart valves) and adheres to guidelines by recommending bridging strategies based on patient risk factors and procedural context. In Fig. 12, it performs multi-step numerical reasoning to derive absolute risk reduction (ARR) and relative risk (RR), showcasing its ability to integrate clinical knowledge with quantitative interpretation. In Fig. 13, the model applies structured reasoning to diagnose croup in a pediatric patient, identifying clinical features, linking them to pathophysiology, and reviewing radiographic findings, despite being supervised only on the final answer choice. This highlights the model's capacity for guideline-aligned reasoning and emergent interpretability, even without supervision on intermediate reasoning traces.

Figure 11: Question and answer pair for Case 1. Cyan text highlights the final predicted choices. Green highlights emphasize reasoning steps and key clinical information.

Figure 12: Question and answer pair for Case 2. Cyan text highlights the final predicted choices. Green highlights emphasize reasoning steps and key clinical information.

Figure 13: Question and answer pair for Case 3. Cyan text highlights the final predicted choices. Green highlights emphasize reasoning steps and key clinical information.
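The difficulty-level annotation of Appendix A.1 maps the number of correct responses among five inference passes monotonically to levels L1–L6 (L1: all correct; L6: all incorrect). A minimal sketch under that natural reading of the description (the function name is ours):

```python
def difficulty_level(predictions, gold):
    """Map 5 sampled predictions to a difficulty level L1-L6.
    L1: all 5 correct (easiest) ... L6: all 5 incorrect (hardest),
    i.e. level = 6 - (#correct), with L2-L5 covering partial correctness."""
    assert len(predictions) == 5
    n_correct = sum(p == gold for p in predictions)
    return f"L{6 - n_correct}"

print(difficulty_level(["C"] * 5, "C"))                  # L1 (all correct)
print(difficulty_level(["A", "B", "C", "C", "D"], "C"))  # L4 (2 of 5 correct)
print(difficulty_level(["A"] * 5, "C"))                  # L6 (all wrong)
```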
arXiv:2505.17964v1 [cs.CL] 23 May 2025

Counting Cycles with Deepseek

Jiashun Jin, Tracy Ke, Bingcheng Sui, and Zhenggang Wang∗

May 26, 2025

Abstract

Despite recent progress, AI still struggles with advanced mathematics. We consider a difficult open problem: how to derive a Computationally Efficient Equivalent Form (CEEF) for the cycle count statistic? The CEEF problem has no known general solution and requires delicate combinatorics and tedious calculations. Such a task is hard for humans to accomplish but is an ideal example where AI can be very helpful. We solve the problem by combining a novel approach we propose with the powerful coding skills of AI. Our results use delicate graph theory and contain new formulas for general cases that have not been discovered before. We find that, while AI is unable to solve the problem all by itself, it is able to solve it if we provide it with a clear strategy, step-by-step guidance, and carefully written prompts. For simplicity, we focus our study on DeepSeek-R1, but we also investigate other AI approaches.

1 Introduction

How to use AI tools to solve mathematical problems is a question of major interest. The problem has received a lot of attention (e.g., [12, 25, 5]) and motivated a new research area: AI4Math [6]. However, except for a few interesting papers (e.g., [21, 1]), most existing works in this area focus on mathematical problems with known solutions [23, 17], where the focus is to evaluate the performance of AI, not to use AI to solve open research problems. For this reason, how to use AI tools to solve difficult research-level problems without known solutions remains a largely open but important problem in AI4Math, and major progress in this direction will not only be a milestone for the reasoning models but will also change mathematical practice.

∗For contact, use jiashun@stat.cmu.edu or zke@fas.harvard.edu
We focus on a long-standing open problem: how to derive a Computationally Efficient Equivalent Form (CEEF) for the cycle count statistics. The CEEF problem involves delicate and extremely tedious combinatorics, and it is hard to solve by humans. However, by a novel humAI approach (meaning a combination of the intelligence of human and AI), we are able to solve the problem.

In detail, for a symmetric A ∈ R^{n×n} and integer m ≥ 3, the order-m cycle count (CC) statistic is

    C_m = Σ_{i_1, i_2, ..., i_m (dist)} A_{i_1 i_2} A_{i_2 i_3} . . . A_{i_m i_1},   where 'dist' stands for 'distinct'.   (1)

As C_m does not depend on the main diagonal of A, we assume A_{ii} = 0 for 1 ≤ i ≤ n for simplicity. In the special case where A is the adjacency matrix of an undirected network (we call this the binary case below), C_m/(2m) is the number of m-cycles in the network [13]. This is why we call C_m the cycle count statistic. Cycle count statistics are useful in network analysis [8, 13, 14, 15], covariance matrix testing [18], and estimating the spectrum of noisy low-rank matrices [16]. High-order cycle count statistics are of particular interest, for they may yield better testing power or estimation accuracy than lower-order ones; see Section 4.

Computing C_m is known to be a challenging problem. A brute-force approach is to compute C_m using m layers of for-loops (see (1)), but the resultant complexity is O(n^m). For a
less expensive approach, we may use the combinatoric approach [10, 20], where we express C_m as a linear combination of finitely many terms, each with a computation cost much smaller than O(n^m). For example, when m = 8, C_8 is a linear combination of 44 terms (see Table 1). Among these terms, one (marked in blue) has a complexity of O(n^4); all other terms have a complexity of O(n^3). So using this formula, we can compute C_m with a complexity of O(n^4), which is much lower than that of the brute-force approach.

Table 1: The formula of C_8 from Python code, obtained by our approach. Each row gives a coefficient and a term; 1_n is the all-ones vector and 1′_n its transpose.

+36  1′_n·((A◦A◦A◦A◦A◦A◦A◦A)·1_n)
-96  1′_n·(((A◦A)·1_n)◦((A◦A◦A◦A◦A◦A)·1_n))
-36  1′_n·(((A◦A◦A◦A)·1_n)◦((A◦A◦A◦A)·1_n))
-112 1′_n·((A◦A◦A◦A◦((A◦A)·(A◦A)))·1_n)
+32  1′_n·((A◦A◦A◦A◦A◦((A·A)·A))·1_n)
+72  1′_n·((((A◦A)·1_n)◦((A◦A)·1_n))◦((A◦A◦A◦A)·1_n))
+16  1′_n·(((A◦A)·((A◦A)·1_n))◦((A◦A◦A◦A)·1_n))
+80  1′_n·((A◦A◦A◦A◦(A·A)◦(A·A))·1_n)
+32  1′_n·(((A◦A)·1_n)◦((A◦A◦A◦A)·((A◦A)·1_n)))
+192 1′_n·((A◦A◦A◦((A◦A◦(A·A))·A))·1_n)
+32  1′_n·((A◦A◦A◦((A·A)·(A◦A◦A)))·1_n)
+4   1′_n·((A◦A◦A◦((A·(A◦A◦A))·A))·1_n)
+64  1′_n·(((A◦A)·1_n)◦((A◦A◦((A◦A)·(A◦A)))·1_n))
+5   1′_n·((A◦A◦((A◦A)·((A◦A)·(A◦A))))·1_n)
+22  Σ_{i_1 i_2 i_3 i_4} (A◦A)_{i_1 i_2} A_{i_1 i_3} A_{i_1 i_4} A_{i_2 i_3} A_{i_2 i_4} (A◦A)_{i_3 i_4}   (the FS term, marked in blue)
-16  1′_n·(((A◦A◦A◦A)·1_n)◦((A◦((A·A)·A))·1_n))
-64  1′_n·(((A◦(A·A))·1_n)◦((A◦A◦A◦(A·A))·1_n))
-8   1′_n·((A◦A◦A◦(A·d((A◦(A·A))·1_n)·A))·1_n)
-64  1′_n·((A◦A◦A◦((A·A)·A))·((A◦A)·1_n))
-16  1′_n·((A◦A◦A◦((A·A)·d((A◦A)·1_n)·A))·1_n)
-12  1′_n·(((((A◦A)·1_n)◦((A◦A)·1_n))◦((A◦A)·1_n))◦((A◦A)·1_n))
-16  1′_n·((((A◦A)·((A◦A)·1_n))◦((A◦A)·1_n))◦((A◦A)·1_n))
-96  1′_n·(((A◦A)·1_n)◦((A◦A◦(A·A)◦(A·A))·1_n))
-4   1′_n·(((A◦A)·((A◦A)·((A◦A)·1_n)))◦((A◦A)·1_n))
-24  1′_n·((A◦A◦(A·d((A◦A)·1_n)·A)◦(A·A))·1_n)
-32  1′_n·((A◦A◦((A◦((A·A)·A))·(A◦A)))·1_n)
-64  1′_n·((A◦A◦((A◦(A·A))·(A◦(A·A))))·1_n)
-16  1′_n·((A◦A◦(((A·A)◦(A·A))·(A◦A)))·1_n)
+8   1′_n·((A◦A◦A◦((((A·A)·A)·A)·A))·1_n)
+16  1′_n·((((A◦A)·1_n)◦((A◦A)·1_n))◦((A◦((A·A)·A))·1_n))
+8   1′_n·(((A◦A)·((A◦A)·1_n))◦((A◦((A·A)·A))·1_n))
+16  1′_n·((((A◦A)·1_n)◦((A◦(A·A))·1_n))◦((A◦(A·A))·1_n))
+24  1′_n·((A◦A◦(((A·A)·A)·A)◦(A·A))·1_n)
+12  1′_n·((A◦A◦((A·A)·A)◦(A·(A·A)))·1_n)
+8   1′_n·(((A◦A)·1_n)◦((A◦((A·A)·A))·((A◦A)·1_n)))
+16  1′_n·(((A◦(A·A))·((A◦A)·1_n))◦((A◦(A·A))·1_n))
+4   1′_n·(((A◦A)·1_n)◦((A◦((A·A)·d((A◦A)·1_n)·A))·1_n))
+4   1′_n·(((A◦A)·((A◦(A·A))·1_n))◦((A◦(A·A))·1_n))
+24  1′_n·((A◦((A·A)·A)◦(A·A)◦(A·A))·1_n)
+2   1′_n·((A◦(((A·A)◦(A·A)◦(A·A))·A))·1_n)
-8   1′_n·(((A◦A)·1_n)◦((A◦((((A·A)·A)·A)·A))·1_n))
-8   1′_n·(((A◦(((A·A)·A)·A))·1_n)◦((A◦(A·A))·1_n))
-4   1′_n·(((A◦((A·A)·A))·1_n)◦((A◦((A·A)·A))·1_n))
+1   1′_n·((A◦((((((A·A)·A)·A)·A)·A)·A))·1_n)

In Table 1, 43 terms have a succinct expressive algebraic (SEA) form, and the one remaining term (marked in blue) is a Full-Sum (FS) term (meaning there is no constraint on the 4 indices over which we take the sum). In this paper, we encounter many terms like these 43 terms; we call them SEA terms. Moreover, we call an FS term an Incompressible FS (IFS) term if we cannot rewrite it as an FS term with fewer layers of sum. The FS term marked in blue in Table 1 is seen to be an IFS term.

To solve the CEEF problem, the goal is to derive a formula that writes C_m as a linear combination of finitely many terms, each a SEA term or an IFS term (ideally, we hope all terms are SEA terms; this is possible when m ≤ 7, but impossible when m ≥ 8).

When m is relatively small, we can derive such a formula explicitly by hand. For a binary matrix A, [10] solved the case of m = 4, 5, [4] solved the case of m = 6, [19] solved the case of m = 7, and (in a remarkable paper) [20] solved the case of 3 ≤ m ≤ 13. For a general matrix A, [13] solved the case of m = 3, 4. Unfortunately, for larger m, deriving the formula by hand is no longer feasible. For instance, when m = 12, the formula for C_m is a linear combination of 1900 terms. To derive the formula in such cases, we need delicate and extremely tedious calculations, and it is easy to make mistakes (e.g., as pointed out by [20], [10] made a mistake for the case of m = 7).
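For intuition, the m = 4 formula for a general symmetric A with zero diagonal (derived in Example 1 below as C_4 = tr(A^4) − 2·1′_n(A◦A)^2 1_n + 1′_n(A◦A◦A◦A)1_n) can be checked numerically against the brute-force definition (1). A small sketch in pure Python (our own test, not the paper's code):

```python
from itertools import permutations, product
import random

random.seed(0)
n = 6
# Random symmetric matrix with zero diagonal, as assumed in the paper.
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = A[j][i] = random.uniform(-1, 1)

# Brute force over distinct (i1, i2, i3, i4): the definition in (1).
C4_brute = sum(A[i][j] * A[j][k] * A[k][l] * A[l][i]
               for i, j, k, l in permutations(range(n), 4))

# SEA formula: tr(A^4) - 2 * 1'(A∘A)^2 1 + 1'(A∘A∘A∘A)1.
tr_A4 = sum(A[i][j] * A[j][k] * A[k][l] * A[l][i]
            for i, j, k, l in product(range(n), repeat=4))
had2 = [[A[i][j] ** 2 for j in range(n)] for i in range(n)]     # A∘A
term2 = sum(had2[i][j] * had2[j][k]                             # 1'(A∘A)^2 1
            for i, j, k in product(range(n), repeat=3))
term3 = sum(A[i][j] ** 4 for i in range(n) for j in range(n))   # 1'(A∘A∘A∘A)1
C4_sea = tr_A4 - 2 * term2 + term3

assert abs(C4_brute - C4_sea) < 1e-9
```

The brute force here is O(n^4) only because m = 4; the point of the SEA form is that its dominant cost is a few matrix products, O(n^3).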
For these reasons, such a task is simply too hard for humans, and the problem
remains a long-lasting unsolved problem.

Recent developments in AI provide a great opportunity. Especially for problems that need delicate and tedious combinatorics, AI has advantages over humans. However, if we directly ask AI to generate the desired formula for a given m, it usually outputs a result far from correct. We may use multi-shot learning (e.g., [3, 24]), where we provide AI with the formulas for smaller m. But since the formula rapidly gets more complicated as m increases, such an approach does not work either. To address the challenge, we propose a novel humAI approach, where we combine the strengths of human and AI.

While there are many AI tools, we focus our study on DeepSeek-R1 (DS) for two reasons. First, our purpose is not to compare different AI tools, but to pick an AI tool that can help us accomplish the task. Second, compared with other AI tools, DS is competitive in solving math problems (e.g., [7]). Note that while we focus on DS, we also investigate several other LLMs (see Section 3).

Our approach has two steps. Fix a set of m distinct indices S = {i_1, i_2, . . . , i_m} and let G be the (undirected simple) graph with edges between i_1 & i_2, i_2 & i_3, . . . , i_m & i_1, but nowhere else. In the first step, we identify all multi-graphs induced by G through a merging process on S. We then divide all multi-graphs into different classes according to isomorphism and pick one from each class. This gives rise to a list of multi-graphs denoted by L. In Theorem 2.4, we show that C_m is a linear combination of a list of FS terms, where (1) each FS term corresponds to a multi-graph in L, and (2) each linear coefficient (i.e., each coefficient in the linear combination) has an explicit form determined by the Möbius function [22]. In this step, DS made a remarkable contribution in (a) generating a script to correctly identify all multi-graphs, (b) presenting each multi-graph visually, (c) checking isomorphism, and (d) hinting that the Möbius function can be useful for finding the linear coefficients.
See Section 2.1 for details.

In the second step, we relate each FS term to a labeled multi-graph, where each layer in the sum corresponds to a node in the graph. We propose a recursive pruning algorithm where 'pruning a node' is equivalent to 'reducing a layer in the sum without changing the value of the sum'. Theorem 2.10 shows that when the pruning process terminates, we have either (1) only one node left, so each FS term is converted to a SEA term, or (2) at least 4 nodes left, so each FS term is converted to an IFS term. In this step, DS also made a remarkable contribution in (a) writing a script to implement our algorithm (which is hard to implement without advanced coding skills), and (b) successfully converting each FS term to a SEA term or an IFS term as expected.

We also investigate our humAI approach with several other representative LLMs by feeding in the same prompts we use for DS, up to some minor changes.
Among these LLMs, we find that GPT-4.1 and Gemini-2.5-Pro are able to accomplish the task as DS does, but the others cannot. See Table 3.

In summary, using our humAI approach, we are able to solve the CEEF problem (a long-standing open problem). See the Supplement for the formulas for m = 3, 4, . . . , 12. We find that AI is unable to accomplish the task independently, but it is able to do so if we provide a clear strategy, step-by-step guidance, and carefully written prompts. Typically, AI is unable to output the results as expected in one shot: we need to refine the prompt a few times before AI is able to output the correct results.

Our contribution is two-fold. First, we show that AI can be a powerful research assistant for humans in tackling difficult open math problems. There are very few existing works on using AI to tackle research-level math problems [1]. In such a difficult setting, whether AI can be really useful remains largely unclear. Our paper shows that, even in such a difficult setting, AI can be useful with appropriate guidance and prompts, and we showcase how to use AI properly. Second, we solve a long-standing open problem by finding a computationally efficient formula for each C_m, m ≥ 3. Compared with existing works (e.g., [20, 11]), we use novel graph theory that has not been used before, and make a timely contribution by finding formulas that are badly needed in areas such as network analysis [8, 13, 14, 16], estimating spiked eigenvalues [16], and matrix testing [18].

Remark 1. The cycle count statistic C_m is closely related to the statistic trace(A^m), but the former has several appealing properties which the latter does not have. For example, in the network setting, we frequently assume that for a low-rank matrix Ω, E[A] = Ω − diag(Ω). Then if the network is sparse and a mild regularity condition holds, E[C_m] ≈ trace(Ω^m) and Var(C_m) ≈ C_m.
These properties make the cycle count statistic very useful in problems such as network testing and network goodness-of-fit, among others [13, 14, 15].

Content and notations. Section 2 presents the main theorems and algorithms. Section 3 presents the details of using AI. Section 4 contains a small numerical study where we demonstrate the importance of high-order cycle count statistics. Section 5 is a short discussion. All proofs can be found in the Supplement. In this paper, ◦ denotes the Hadamard (or entry-wise) product. For n > 1, I_n denotes the n×n identity matrix and 1_n ∈ R^n denotes the vector of all ones. For any vector v ∈ R^n, d(v) denotes the n×n diagonal matrix whose i-th diagonal entry is v_i.

2 Main results

In this section, we present the main theorems and algorithms. The discussion on AI is deferred to Section 3. Our approach has two steps. In the first step, we introduce a merging process to decompose C_m as a linear combination of finitely many FS terms. In the second step, we introduce a pruning process and convert each FS term to either a SEA term or an IFS term.

2.1 Decomposing the cycle count statistics via the
merging process

In graph theory [10, 2], an undirected graph is a multi-graph if it permits multiple edges, and a simple graph otherwise. Fix m ≥ 3 and a set of m distinct indices S = {i_1, i_2, . . . , i_m}. Let G = G(S) denote the graph with simple edges between i_1 & i_2, i_2 & i_3, . . . , i_m & i_1, but nowhere else.

Definition 2.1 (Partition). Fix k ≤ m. We call σ = {S_1, S_2, . . . , S_k} a k-partition of S if S_1, S_2, . . . , S_k are pairwise disjoint non-empty subsets of S satisfying ∪_{j=1}^k S_j = S. We call each S_j a block. For two partitions π and σ, we say π ⪯ σ if each block of π is a subset of a block of σ, and we say π ≺ σ if π ⪯ σ but π ≠ σ. The size of a k-partition σ (denoted by |σ|) is defined as k.

Note that a special partition is ˆ0 = {{i_1}, {i_2}, . . . , {i_m}}, which is the finest partition. Let Π(S) denote the set of all partitions. Note that "⪯" defines a partial order in Π(S), so we have a partially ordered set (poset) (Π(S), ⪯). The Möbius function is a useful tool for analyzing posets [22]. In our case, the Möbius function µ on (Π(S), ⪯) is defined as follows.

Definition 2.2 (The Möbius function). For any π, σ ∈ Π(S) with π ≺ σ, µ(π, π) = 1, and µ(π, σ) is defined recursively by

    µ(π, σ) = − Σ_{π ⪯ τ ≺ σ} µ(π, τ).

The partitions induce a merging process as follows. Fix a partition σ = {S_1, S_2, . . . , S_k} ∈ Π(S) and let G be the graph above. For each 1 ≤ j ≤ k, we merge all nodes of G in S_j into one node while keeping the edges unchanged. Note that the merging process (a) maps S = {i_1, i_2, . . . , i_m} to a subset which we denote by {j_1, j_2, . . . , j_k} for simplicity, (b) maps G to an (induced) multi-graph which we may denote by G_σ (see Table 2), and (c) maps A_{i_1 i_2} A_{i_2 i_3} . . . A_{i_m i_1} to Π_{1≤a<b≤k: w(a,b)>0} A^{w(a,b)}_{j_a j_b}, where w(a, b) is the edge weight between nodes a and b in the multi-graph G_σ (see Table 2). The merging process induces many multi-graphs. We exclude those with self-loops (see Remark 2). Note that as an induced multi-graph with only one node must have self-loops, we exclude all those with only one node.
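To make the merging process concrete, here is a small sketch (our own code, not the paper's) that enumerates all partitions of S for m = 4, merges the cycle graph accordingly, and discards partitions whose induced multi-graph has a self-loop. Exactly 4 of the 15 partitions survive, matching the d-counts in Table 2 (1 + 2 + 1):

```python
from collections import Counter

def partitions(items):
    """Enumerate all set partitions of `items` (as lists of blocks)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # put `first` into an existing block ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or into a new block of its own
        yield part + [[first]]

m = 4
cycle_edges = [(i, (i + 1) % m) for i in range(m)]   # the simple cycle G

survivors = []
for part in partitions(list(range(m))):
    block_of = {v: b for b, blk in enumerate(part) for v in blk}
    merged = Counter()
    self_loop = False
    for u, v in cycle_edges:
        a, b = block_of[u], block_of[v]
        if a == b:
            self_loop = True            # merged endpoints: exclude (Remark 2)
            break
        merged[frozenset((a, b))] += 1  # edge multiplicity w(a, b)
    if not self_loop:
        survivors.append((len(part), sorted(merged.values(), reverse=True)))

print(sorted(survivors))   # sizes k and edge-multiplicity profiles
assert len(survivors) == 4
```

The surviving profiles are the simple 4-cycle (k = 4), two copies of the double-edge path (k = 3), and the quadruple edge (k = 2), i.e., exactly the three isomorphism classes of Table 2.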
Now, first, we divide all these multi-graphs into different groups by size. Second, for those with the same size, we divide them into different classes such that two graphs belong to the same class if and only if they are isomorphic to each other. Third, for each 1 ≤ k ≤ m, letting b_{m,k} be the number of classes for size-k multi-graphs, we order all the classes lexicographically. Finally, in each class, we pick the first candidate (lexicographically). Denote these multi-graphs by {G_{m,k,t} : 1 ≤ t ≤ b_{m,k}}, where among all classes of size-k multi-graphs, G_{m,k,t} is the candidate in the t-th class. Denote the partition that induces G_{m,k,t} by σ_{m,k,t}, and suppose σ_{m,k,t} = {S_1, S_2, . . . , S_k}. Introduce

    d_{m,k,t} = size of class t of size-k multi-graphs,    h_{m,k,t} = Π_{i=1}^k (|S_i| − 1)!    (2)

Note that neither quantity depends on S. Suppose j_1, j_2, . . . , j_k are the nodes of G_{m,k,t} and let d_i be the node degree of j_i. By the definition of the merging process, g_i = |S_i|,
and so equivalently,

    h_{m,k,t} = Π_{i=1}^k (g_i − 1)!    (3)

Moreover, let (for short, w(a, b) is the edge weight between nodes a and b in G_{m,k,t})

    f_{m,k,t}(A, j_1, j_2, . . . , j_k) = Π_{1≤a<b≤k: w(a,b)>0} A^{w(a,b)}_{j_a j_b}.    (4)

Lemma 2.3. For any m ≥ 3,

    C_m = tr(A^m) − Σ_{k=2}^{m−1} Σ_{t=1}^{b_{m,k}} d_{m,k,t} Σ_{j_1, j_2, ..., j_k (dist)} f_{m,k,t}(A, j_1, j_2, . . . , j_k).

Lemma 2.3 is proved in the Supplement. Lemma 2.3 is intuitive and thus relatively easy to understand. However, for each term on the RHS, since we require j_1, j_2, . . . , j_k to be distinct, it is hard to convert it to a SEA or an IFS term as desired. To overcome the challenge, we have the following theorem, which is proved in the Supplement. Recall that µ is the Möbius function and ˆ0 is the finest partition.

Theorem 2.4. Fix m ≥ 3. Let d_{m,k,t} and h_{m,k,t} be as in (2), and suppose G_{m,k,t} is the induced multi-graph from the partition σ_{m,k,t}. We have µ(ˆ0, σ_{m,k,t}) = (−1)^{m−k} · h_{m,k,t} and

    C_m = Σ_{k=2}^{m} Σ_{t=1}^{b_{m,k}} a_{m,k,t} Σ_{j_1, j_2, ..., j_k} f_{m,k,t}(A, j_1, j_2, . . . , j_k),

where a_{m,k,t} = d_{m,k,t} · µ(ˆ0, σ_{m,k,t}) = (−1)^{m−k} · d_{m,k,t} · h_{m,k,t}.

The main change is that each term Σ_{j_1, j_2, ..., j_k (dist)} f_{m,k,t}(A, j_1, j_2, . . . , j_k) in Lemma 2.3 is now replaced by its FS term counterpart Σ_{j_1, j_2, ..., j_k} f_{m,k,t}(A, j_1, j_2, . . . , j_k). This is important, as we can frequently convert the latter to a succinct form, but it is hard to do so for the former. For example, for any n×n matrices A, B, C, we can convert Σ_{j_1, j_2, j_3} A_{j_1 j_2} B_{j_2 j_3} C_{j_3 j_1} succinctly to tr(ABC), but it is unclear how to convert Σ_{j_1, j_2, j_3 (dist)} A_{j_1 j_2} B_{j_2 j_3} C_{j_3 j_1} to a succinct form.

Our results are new and the proofs are non-trivial and use several techniques. First, we need to establish the precise one-to-one correspondence between the partitions, the induced multi-graphs, and f(A, j_1, j_2, . . . , j_k). Second, even after Lemma 2.3 is proved, it is still non-trivial to prove Theorem 2.4: replacing Σ_{j_1, j_2, ..., j_k (dist)} f_{m,k,t}(A, j_1, j_2, . . . , j_k) on the RHS of Lemma 2.3 by Σ_{j_1, j_2, ..., j_k} f_{m,k,t}(A, j_1, j_2, . . . , j_k) gives rise to non-trivial changes which we must analyze carefully. Figuring out the linear coefficients in Theorem 2.4 also requires non-trivial effort: at first, we did not know the answer, but fortunately, AI hinted that the Möbius inversion technique may be relevant. Following the hint, we then solved the problem by combining careful analysis with existing results on the Möbius function. This demonstrates how valuable AI can be for research.

Example 1. When m = 4, b_{m,k} = 1 for k = 2, 3, 4. Table 2 presents the induced multi-graphs as well as the quantities d_{m,k,t}, h_{m,k,t}, a_{m,k,t} and f_{m,k,t}, 1 ≤ t ≤ b_{m,k}, 2 ≤ k ≤ m.

Table 2: The key quantities for m = 4, with labeled simple graphs (LSG) included for illustration. Combining the table with Theorem 2.4, we get C_4 = tr(A^4) − 2·1′_n(A◦A)^2 1_n + 1′_n(A◦A◦A◦A)1_n. (Multi-graph and LSG drawings are omitted here.)

(k, t)  Partition                                            (d_{mkt}, h_{mkt}, a_{mkt})  f_{mkt}(A, j_1, ..., j_k)                          SEA form
(4,1)   {{i_1},{i_2},{i_3},{i_4}}, j_k = i_k, 1 ≤ k ≤ 4      (1, 1, 1)                    A_{j_1 j_2} A_{j_2 j_3} A_{j_3 j_4} A_{j_4 j_1}    tr(A^4)
(3,1)   {{i_1},{i_2, i_4},{i_3}}, j_1=i_1, j_2=i_2, j_3=i_3  (2, 1, −2)                   (A_{j_1 j_2})^2 (A_{j_2 j_3})^2                    1′_n(A◦A)^2 1_n
(2,1)   {{i_1, i_3},{i_2, i_4}}, j_1=i_1, j_2=i_2            (1, 1, 1)                    (A_{j_1 j_2})^4                                    1′_n(A◦A◦A◦A)1_n

To implement Theorem 2.4, we
can (a) identify the set of multi-graphs {G_{m,k,t} : 1 ≤ t ≤ b_{m,k}}, (b) following (2)-(4) and Theorem 2.4, find the quantities b_{m,k}, (d_{m,k,t}, h_{m,k,t}, a_{m,k,t}), and f_{m,k,t}(A, j_1, j_2, . . . , j_k), and (c) express C_m as the linear combination of many FS terms. None of these are easy tasks for humans, but they can be nicely done with AI. See Section 3 for details.

Remark 2. Recall that we exclude multi-graphs with self-loops from our study. This is because for any induced multi-graph, the merging process maps the product A_{i_1 i_2} A_{i_2 i_3} . . . A_{i_m i_1} to a quantity, and when the multi-graph has self-loops, the corresponding quantity is 0, as all diagonal entries of A are 0.

Remark 3. Our idea is readily extendable to broader settings, such as computing the number of paths between two given nodes, or computing cycles or other patterns in asymmetric matrices.

2.2 Converting each FS term to a SEA or an IFS term via a pruning process

In Theorem 2.4, associated with each multi-graph G_{m,k,t}, 1 ≤ t ≤ b_{m,k}, 2 ≤ k ≤ m, we have a k-layer full-sum (FS) term Σ_{j_1, j_2, ..., j_k} f(A, j_1, j_2, . . . , j_k). Think of each layer as a node of the multi-graph. We propose a recursive pruning algorithm, where 'pruning a node' is equivalent to 'reducing a layer in the sum while keeping the value of the sum unchanged'. To keep track of both the multi-graphs and the corresponding sums in the pruning process, we introduce the notion of a labeled multi-graph, which adds labels to all the nodes and edges of the multi-graph.

Definition 2.5 (Labeled Multi-Graph (LMG)). An LMG G is a multi-graph in which each node a is labeled by a vector v^{(a)} ∈ R^n and each edge between a and b is labeled by a symmetric matrix M^{(a,b,s)} ∈ R^{n×n}, for 1 ≤ s ≤ w(a, b), where w(a, b) is the total number of edges between a and b.

Definition 2.6 (Full sum for an LMG). Given an LMG G, the Full-Sum (FS) associated with G is defined as

    FS(G) := Σ_{1≤j_1,...,j_k≤n} Π_{1≤a≤k} v^{(a)}_{j_a} Π_{1≤a<b≤k: w(a,b)>0} Π_{s=1}^{w(a,b)} M^{(a,b,s)}_{j_a j_b}.
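Definition 2.6 can be evaluated directly by brute force for small n, which is handy for testing a pruning implementation. A sketch with our own (assumed) data layout: nodes are 0, ..., k−1, node labels live in a list `v`, and edge labels are keyed by (a, b, s):

```python
from itertools import product

def full_sum(k, n, v, M):
    """Brute-force Full-Sum of an LMG (Definition 2.6).
    v[a][j]      : label vector of node a
    M[(a, b, s)] : s-th edge label matrix between nodes a < b."""
    total = 0.0
    for j in product(range(n), repeat=k):
        term = 1.0
        for a in range(k):
            term *= v[a][j[a]]
        for (a, b, s), mat in M.items():
            term *= mat[j[a]][j[b]]
        total += term
    return total

# Tiny check: a 2-node LMG with one edge labeled A and all-ones node labels
# has Full-Sum 1'_n A 1_n, i.e. the sum of all entries of A.
n = 3
A = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
ones = [1.0] * n
fs = full_sum(2, n, [ones, ones], {(0, 1, 1): A})
assert fs == sum(sum(row) for row in A)  # = 12
```

The O(n^k) cost of this reference evaluator is exactly what the pruning algorithm of Lemma 2.8 and Theorem 2.10 is designed to avoid.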
Given an LMG, we wish to design a pruning algorithm such that each time we prune an appropriate node, we can update the LMG in a way such that the value of the Full-Sum remains the same before and after the pruning. It turns out that the Type I and Type II pendant nodes below are the appropriate nodes for our pruning.

Figure 1: An LMG and its associated full sum (blue vectors: node labels, red matrices: edge labels). For the depicted 4-node LMG with node labels x, y, z, u and edge labels A, B, C, D, E,

    FS(G) = Σ_{1≤j_1,...,j_4≤n} x_{j_1} y_{j_2} z_{j_3} u_{j_4} · A_{j_1 j_2} B_{j_2 j_3} C_{j_1 j_3} D_{j_3 j_4} E_{j_3 j_4}.

For a node a in an LMG, we say node b is an immediate neighbor of a if there are edges between a and b.

Definition 2.7. A node is called a Type I pendant node if it has exactly 1 immediate neighbor and a Type II pendant node if it has exactly 2 immediate neighbors. Also, we call each immediate neighbor of a pendant node a hinge node.

We propose the following updating rules for Type I and Type II pruning, respectively.

Updating rule for Type I pendant pruning. Fix a pendant and its hinge. Let v and u be their node labels, respectively, and let M^{(1)}, . . . , M^{(s)} be the
labels of the edges between them. This step deletes the pendant node and updates the label of the hinge node to

    u_new = u ◦ ((M^{(1)} ◦ . . . ◦ M^{(s)}) · v).    (5)

Updating rule for Type II pendant pruning. Fix a pendant and its two hinges, and let y, x, z be their node labels, respectively. Let Q^{(1)}, . . . , Q^{(s)} be the labels of the edges between the pendant and the first hinge, and R^{(1)}, . . . , R^{(t)} the labels of the edges between the pendant and the second hinge. This step deletes the pendant node and adds a new edge between the two hinge nodes with edge label

    E = (Q^{(1)} ◦ . . . ◦ Q^{(s)}) · d(y) · (R^{(1)} ◦ . . . ◦ R^{(t)}).    (6)

In either updating rule, let G and G_new denote the LMG before and after the update. We propose the following lemma, which is proved in the Supplement.

Lemma 2.8. Suppose the LMG has at least 1 pendant. Fix a pendant; we apply Type I pruning if the pendant is Type I, and Type II pruning if the pendant is Type II. The pruning reduces the number of nodes by 1, but the Full-Sum before and after are the same: FS(G) = FS(G_new).

Figure 2: The pruning process and corresponding updating rules. Left: Type I, where u_new = u ◦ ((A ◦ B) · v). Right: Type II, where the new edge label is E = (A ◦ B) · d(y) · (C ◦ D).

We now revisit the FS terms in Theorem 2.4. We have the following definition.

Definition 2.9 (Default LMG). Given a multi-graph, the default LMG satisfies v^{(a)} = 1_n and M^{(a,b,s)} = A for all 1 ≤ a < b ≤ k and 1 ≤ s ≤ w(a, b).

Now, for each G_{m,k,t} in Theorem 2.4, the Full-Sum associated with its default LMG satisfies

    FS(G_{m,k,t}) = Σ_{j_1, j_2, ..., j_k} f(A, j_1, j_2, . . . , j_k)    (note the RHS is an FS term).

Therefore, each FS term in Section 2.1 equals the Full-Sum of the corresponding default LMG. Combining this with the updating rules above, we propose the following LMG Pruning algorithm. Fix a multi-graph.

(a) Initialization. Set the initial LMG as the default LMG (see Definition 2.9).

(b) Update.
Pick either a Type I pendant node or a Type II pendant node, prune it, and update the LMG using the two updating rules above.

(c) Termination. Stop if only one node is left or no pendant node of either type is left.

Fixing a multi-graph G_{m,k,t} in Theorem 2.4, recall that the corresponding FS term is Σ_{j_1, j_2, ..., j_k} f(A, j_1, j_2, . . . , j_k). Suppose we apply the pruning algorithm to G_{m,k,t}.

Theorem 2.10. When the algorithm stops, there are only two possible cases for the final LMG: (a) only one node is left, or (b) no pendant node of either type is left.

• In the first case, let v be the label of the single remaining node. Then Σ_{j_1, j_2, ..., j_k} f(A, j_1, j_2, . . . , j_k) = 1′_n v, which is a SEA term.

• In the second case, at least 4 nodes remain, each of them has at least 3 immediate neighbors, and Σ_{j_1, j_2, ..., j_k} f(A, j_1, j_2, . . . , j_k) reduces to an FS term with ℓ layers of sum, ℓ ≤ ⌊m/2⌋.

In the second
case, we conjecture that ℓ is as small as possible, so the FS term reduces to an IFS term. Compared with the existing literature [10, 20], our algorithm and result are new. Theorem 2.10 is proved in the Supplement. The pruning process involves complicated graphs and is not easy to implement without exceptional coding skills. This is where AI can help; see Section 3 for details.

Example 2. We use the 3-node multi-graph in Table 2 to illustrate how the pruning process works. The associated FS term is Σ_{j_1, j_2, j_3} (A◦A)_{j_1 j_2} (A◦A)_{j_2 j_3}. In the figure below, we first prune j_3 and then prune j_1. The result is a one-node graph with node label v = (A◦A)^2 1_n. By Theorem 2.10, the pruning process reduces Σ_{j_1, j_2, j_3} (A◦A)_{j_1 j_2} (A◦A)_{j_2 j_3} to the SEA term 1′_n v = 1′_n (A◦A)^2 1_n.

Figure 3: An illustration of the pruning process: pruning j_3 gives a 2-node LMG with node labels 1_n and (A◦A)·1_n; pruning j_1 then gives a single node with label (A◦A)^2 1_n.

Example 3. When m = 8, C_m is a linear combination of 44 FS terms. For 43 of them, our algorithm converts each equivalently to a SEA term. The remaining FS term has the form Σ_{j_1, j_2, j_3, j_4} A^2_{j_1 j_2} A_{j_1 j_3} A_{j_1 j_4} A_{j_2 j_3} A_{j_2 j_4} A^2_{j_3 j_4}, where the corresponding LMG has no pendant of either type, so our algorithm stops at the initial step. In this case, the FS term is an IFS term (i.e., we cannot rewrite it as an FS term with fewer layers of sum).

Remark 4. In a remarkable paper, [20] provided code and formulas for computing C_m with m as large as 13, but (a) their approach is for the case where A is binary, and it is unclear how to extend it to the more general case we have, and (b) it is unclear how to extend their approach to general m (even for m ≤ 13, they did not provide a systematic way of converting each FS term to a SEA or an IFS term). We solve the problem for general (A, m), and our approach (especially the use of the Möbius function and the pruning algorithm) is very different from those in [20, 10, 4, 19, 13].
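Example 2 can be replayed numerically: two Type I prunings reduce the 3-layer full sum Σ_{j_1, j_2, j_3} (A◦A)_{j_1 j_2} (A◦A)_{j_2 j_3} to the SEA term 1′_n (A◦A)^2 1_n. A sketch of the Type I updating rule (5) specialized to this example (our own code; the single-matrix `M` argument plays the role of the combined edge label M^{(1)} ◦ . . . ◦ M^{(s)}):

```python
from itertools import product
import random

random.seed(1)
n = 5
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = A[j][i] = random.uniform(-1, 1)

B = [[A[i][j] ** 2 for j in range(n)] for i in range(n)]   # A∘A, the edge label

# 3-layer FS term: sum_{j1,j2,j3} B_{j1 j2} B_{j2 j3}.
fs = sum(B[i][j] * B[j][k] for i, j, k in product(range(n), repeat=3))

def type1_prune(u, v, M):
    """Type I rule (5) with combined edge label M:
    the pendant (label v) is deleted and the hinge label becomes u ∘ (M · v)."""
    Mv = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    return [u[i] * Mv[i] for i in range(n)]

ones = [1.0] * n
lbl = type1_prune(ones, ones, B)   # prune j3; hinge j2 gets label B·1_n
lbl = type1_prune(ones, lbl, B)    # prune j1's neighbor step: label B·(B·1_n)
sea = sum(lbl)                     # 1'_n (A∘A)^2 1_n

assert abs(fs - sea) < 1e-9
```

Each pruning removes one layer of summation while preserving the value, which is exactly the content of Lemma 2.8.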
3 Implementation with AI: algorithm, results, validation, and comparison

We now have a clear strategy for deriving the desired formula for each C_m, but the execution of our strategy is quite labor-intensive. For example, the formula for C_12 has 1900 terms, and deriving the explicit formula in such a case is challenging for humans.

Figure 4: The pipeline of our humAI approach for counting cycles. Human and AI jointly derive the formula of C_m via (1) the merging process ((a) generate all partitions, (b) check graph isomorphism, (c) find linear coefficients) and (2) the pruning process ((a) identify pendants, (b) convert FS terms to SEA/IFS terms), yielding human-executable Python code. Conventional tasks: 1(a). Unconventional tasks: 1(b), 1(c), 2(a), 2(b). The required intelligence level ranges over Researcher, Assistant, and Coder.

To overcome the challenge, we use AI, but (not surprisingly) a straightforward use of AI won't work. In fact, we have conducted experiments where we ask LLMs to (a) reason in natural language, and (b) write code to derive a formula for C_m as desired (see the full prompt in the Supplement). Such an approach only works for very small m (e.g., m = 3, 4, 5), where correct answers exist in the literature. When m ≥ 6, the desired formula either is not well-known or
does not exist in the literature, and LLMs fail to output a satisfactory result: they often guess a few terms with incorrect coefficients and lack a correct reasoning approach. This suggests that AI is unable to solve the CEEF problem independently.

We propose a humAI approach which combines the strengths of human and AI. In this approach, first, we divide the CEEF problem into many steps and outline a clear strategy for each step. This is mostly done by humans (see Section 2). Second, for each step, we either design an algorithm or come up with a clear idea for an algorithm and ask AI to write code for its implementation. Finally, we ask AI for executable Python code (the code takes m as input and outputs the SEA form of C_m as LaTeX code). See Figure 4 (top row).

The three steps above require different levels of intelligence, which we call level-R (Researcher), level-A (Assistant), and level-C (Coder), respectively. See Figure 4 (bottom row). It is of interest to investigate the level of intelligence that AI has (see below for results). For simplicity, we focus on DeepSeek-R1 (DS-R1), a recent reasoning model with competitive performance on benchmark tasks [9], but we also investigate other LLMs. Below are the details.

First, we prescribe the format for input and output. For input, we teach DS-R1 to express graphs as strings. For output, we ask DS-R1 to format the output as LaTeX code that prints each term as either a SEA term or an IFS term. The following prompt describes the string format with an example:

Graph Representation: A naive loop with m vertices is represented as: 1 [1,2]; 1 [1,m]; 1 [2,3]; 1 [3,4]; ... 1 [m-1,m]. Each c [a,b] means an edge between vertices a and b with multiplicity c. If the final graph has at least 4 vertices, denote the vertices count as v, output the graph and the corresponding string in the following form: First, output \sum_{i_1 i_2 ... i_v}.
Then, start from the smallest index to the biggest index, for each remaining vertex with assigned_string_vertex not being 1n, output <assigned_string_vertex of this vertex>_{i_{vertex index}}. Then, use dictionary order, start from the smallest, for each two vertices with t ≥ 1 remaining edges between them, output <assigned_string_edge of edge 1 ◦ ... ◦ assigned_string_edge of edge t>_{i_{vertex index 1} i_{vertex index 2}}. Join the above three parts to get the final expression, which should be in the form of a big sum.

Next, we divide our project into 5 major tasks: 3 for the merging process (1(a), 1(b), 1(c)) and 2 for the pruning process (2(a), 2(b)); see Figure 4. Among these tasks, according to the difficulty level, 1(a) is a conventional task for AI, and the others are unconventional tasks for AI. Task 1(a) is well-studied in the literature, and AI is able to accomplish it without much struggle. For this task, we only provide a verbal description in the prompt (see the full prompt in the Supplement), and DS-R1 is able to find an appropriate algorithm on its own. This demonstrates that DS-R1 has level-A (Assistant) intelligence. The other
tasks are unconventional for AI, and for each of them, AI needs detailed guidance and carefully written prompts. For reasons of space, we only briefly discuss these tasks.

For Task 1(b), DS-R1 does not know what isomorphism means in the current setting, so we need to inform it of the precise definition first. After that, DS-R1 knows how to check isomorphism in a brute-force fashion, but the algorithm is very slow. To address the problem, we inform DS-R1 that for checking isomorphism, we only need to consider degree-preserving permutations. This significantly speeds up the computation. This kind of communication between DS-R1 and us was very common during our study: many times, DS-R1 struggles to understand some concepts, but after several rounds of revision of the prompts, it is able to write code and produce the right answer.

For Task 1(c), although the Möbius function (Definition 2.2) is well-known, we must teach DS-R1 how to relate it to our problem. As an example, we inform DS-R1 by prompts (see Supplement) that we only need to consider the values $\mu(\pi, \sigma)$ with $\pi = \hat{0}$ (the finest partition). With step-by-step guidance, DS-R1 is able to produce the right results.

For Tasks 2(a)-2(b), since SEA and IFS terms are new concepts, we need to inform DS-R1 of the precise definitions first. Our prompt starts with the prescribed input format (see above), followed by detailed instructions on the two types of pruning. Below is a small part of the prompt for Type I pendant pruning (see the full prompt in the Supplement):

STEP 2: For each vertex starting from the last to the first, check if it has degree 1. If so, we call it a child_vertex, and the vertex connected to it is called the parent_vertex. Let there be n edges connecting them. We delete the child_vertex and collapse relevant strings onto its parent_vertex in the following manner:

Step 2.1. Calculate the temp_string_step1.
First, let temp_string_step1 be: <assigned_string_edge of edge 1 ◦ ... ◦ assigned_string_edge of edge n> · <assigned_string_vertex of child_vertex>. Then, if the original assigned_string_vertex of the parent_vertex is not 1n, update temp_string_step1 to be: <assigned_string_vertex of parent_vertex> ◦ <temp_string_step1>.

Step 2.2. Update the assigned_string_vertex of the parent_vertex to be temp_string_step1.

Step 2.3. Delete the child_vertex and all edges connecting to it. Immediately restart the whole while loop.

Results and validation. The final result of our method is Python code for each Cm, where the input is an integer m ≥ 3 and the output is the desired formula as in Theorems 2.4 and 2.10, where each FS term is converted to either a SEA term or an IFS term. See the Supplement for details. For validation, we use two approaches. In the first approach, we manually compare our derived formulas with those derived by hand for m = 3, 4, ..., 8 and those by [20] for m = 3, 4, ..., 13 (where A is binary, so we first need to simplify our formula for this special case). In the second approach, for a relatively small n, we generate many n×n symmetric matrices A and compute Cm for each in a brute-force
fashion. We then use the results to validate the formula. Based on long and tedious validation, we conclude that the formulas output by AI are correct.

In summary, first, DS-R1 is able to accomplish conventional tasks with correct answers. This demonstrates that DS-R1 has level-A (Assistant) intelligence. Second, DS-R1 is able to accomplish unconventional tasks, provided with step-by-step guidance and carefully written prompts. This demonstrates that DS-R1 has level-C (Coder) intelligence for these unconventional tasks: it is able to accomplish the task, but needs detailed step-by-step guidance. Somewhat surprisingly, DS-R1 also has preliminary level-R (Researcher) intelligence. One example is that, when we tried to prove Theorem 2.4 for the first time, we did not realize it was connected to the Möbius function. DS-R1 discovered this connection independently, and we solved the problem following its hint. Another example is that DS-R1 is able to detect inconsistencies in our prompts. In one of our prompts, we provided DS-R1 with both a formula for the linear coefficients and an example of the linear coefficients, but unfortunately, the coefficients in the example were not consistent with those in the definition. DS-R1 detected this inconsistency. Using its hint, we revised the prompt, and this time, DS-R1 fully understood our point and provided the correct answer to our question.

Investigation with other LLMs. While our focus is on DS-R1, it is of interest to investigate how other LLMs perform if we feed them the same prompts we used for DS-R1. For comparison, we evaluate 4 other LLMs on the 5 tasks in Figure 4. The results are in Table 3, where we note that in addition to the 5 tasks, we add two columns, Syntax (1) and Syntax (2), corresponding to Task 1(a) and Task 2(a), respectively, where a check mark means that the corresponding LLM is able to write code without non-trivial bugs for the task.
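To make the brute-force validation approach concrete, the following is a minimal sketch of computing Cm directly from its definition as a sum over cycles of entry products. The function name and the division by 2m (so that each cycle, which appears once per starting vertex and direction, is counted once) are our assumptions about the counting convention, not code from the paper.

```python
import itertools

def brute_force_cm(A, m):
    """Brute-force cycle count statistic for a symmetric matrix A
    (given as a list of lists): sum A[i1][i2]*A[i2][i3]*...*A[im][i1]
    over all m-tuples of distinct indices, divided by 2m so that each
    cycle is counted once. Exponential in m; useful only for small n."""
    n = len(A)
    total = 0.0
    for idx in itertools.permutations(range(n), m):
        prod = 1.0
        for k in range(m):
            prod *= A[idx[k]][idx[(k + 1) % m]]
        total += prod
    return total / (2 * m)
```

For a sanity check, the triangle graph has exactly one 3-cycle and the 4-cycle graph exactly one 4-cycle, which this sketch recovers; matching such brute-force values against a derived formula on random symmetric matrices is the validation strategy described above.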
Table 3: Comparison of 5 LLMs on the 7 tasks above (last column: total # of tasks accomplished)

Model | Reasoning | Syntax(1) | 1(a) | 1(b) | 1(c) | Syntax(2) | 2(a) | 2(b) | Total
Deepseek-R1 | Yes | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 7
Gemini-2.5-Pro-Exp | Yes | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 7
GPT-4.1 | No | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 7
Claude-3.7-Sonnet-Reasoning | Yes | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✗ | 5
Llama-4-Maverick | No | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | 2

4 Applications to low-rank matrix detection

Using our derived formulas, we can revisit many problems in network testing and low-rank matrix detection (where cycle count statistics are very useful [13,14,18]) with better results. Consider $A = a \cdot \Omega + z$, where $z$ is a symmetric matrix with $z_{ii} = 0$ and $z_{ij} \overset{iid}{\sim} N(0, 1/n)$, $1 \le i < j \le n$, $\Omega = \tilde{\Omega} - \mathrm{diag}(\tilde{\Omega})$, and $\tilde{\Omega} = \lambda_1 \xi_1 \xi_1' + \lambda_2 \xi_2 \xi_2'$, where $\xi_1$ and $\xi_2$ are generated as follows: we first generate all their entries iid from $N(0,1)$ and then normalize each vector to have unit $\ell^2$-norm. We wish to test $H_0: a = 0$ vs. $H_1: a = 1$ (i.e., to test whether there is a hidden low-rank matrix). For a relatively small m, Cm was shown to be a powerful test [13,14,18]. We now show that Cm with a larger m has an even better performance. For each Cm, we measure the performance by the
sum of the Type I and Type II errors (with the ideal threshold, i.e., the threshold that minimizes the sum), or SE for short. The results (for 100 replications) are as follows. For (λ1, λ2) = (1.5, 1), SE = .37, .21, .21, .13, .08 for m = 3, 4, 5, 6, 7. Similarly, for (λ1, λ2) = (1.5, −1), (1, −1.5), (−1, −1.5), the results are SE = .57, .26, .30, .1, .1, SE = .69, .32, .37, .17, .13, and SE = .38, .33, .21, .09, .06, respectively. These demonstrate that the performance of Cm improves as m increases.

5 Discussion

We solved a long-standing open problem: how to find a Computationally Efficient Equivalent Form (CEEF) for cycle count statistics. This is a combined effort of human and AI, so we call it a humAI approach. For a difficult math problem like CEEF, one should not expect AI to solve the problem independently. However, we find that AI is able to solve the problem if we provide it with a clear strategy, step-by-step guidance, and carefully written prompts. Our work shows that AI can be a powerful assistant for solving research-level math problems, and AI is especially useful for problems that involve complicated combinatorics or tedious calculations (which are hard to carry out by hand). Our idea is readily extendable to other problems, such as counting the paths between two nodes in a network, counting cycles in directed networks, and so on.

References

[1] Alberto Alfarano, François Charton, and Amaury Hayat. Global Lyapunov functions: A long-standing open problem in mathematics with symbolic transformers. In Advances in Neural Information Processing Systems 38, 2024. Poster.

[2] A.J. Bondy and U.S.R. Murty. Graph Theory with Applications. Wiley, 1991.
[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020.

[4] Y.C. Chang and H.L. Fu. The number of 6-cycles in a graph. Bulletin of the Institute of Combinatorics and its Applications, 39, 01 2003.

[5] Alex Davies, Petar Veličković, Lars Buesing, Sam Blackwell, Daniel Zheng, Nenad Tomašev, Richard Tanburn, Peter Battaglia, Charles Blundell, András Juhász, Marc Lackenby, Geordie Williamson, Demis Hassabis, and Pushmeet Kohli. Advancing mathematics by guiding human intuition with AI. Nature, 600(7887):70–74, 2021.

[6] Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda Chen, Sunny Tran, Newman Cheng, Roman Wang, Nikhil Singh, Taylor L. Patti, Jayson Lynch, Avi Shporer, Nakul Verma, Eugene Wu, and Gilbert Strang. A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. Proceedings of the National Academy of Sciences, 119(32):e2123433119, 2022.

[7] Evgenii Evstafev. Token-hungry, yet precise: Deepseek
r1 highlights the need for multi-step reasoning over speed in math. arXiv, 2024.

[8] Chao Gao and John Lafferty. Testing for global network structure using small subgraph statistics. arXiv preprint arXiv:1710.00862, 2017.

[9] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[10] F. Harary. Graph Theory. Addison-Wesley Series in Mathematics. Addison-Wesley Longman, Incorporated, 1969.

[11] Frank Harary and Bennet Manvel. On the number of cycles in a graph. Matematicky casopis, 21(1):55–63, 1971.

[12] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. In Advances in Neural Information Processing Systems 34 (Datasets & Benchmarks Track), 2021.

[13] Jiashun Jin, Zheng Tracy Ke, and Shengming Luo. Optimal adaptivity of signed-polygon statistics for network testing. The Annals of Statistics, 49(6):3408–3433, 2021.

[14] Jiashun Jin, Zheng Tracy Ke, Shengming Luo, and Minzhe Wang. Optimal estimation of the number of network communities. Journal of the American Statistical Association, 118(543):2101–2116, 2023.

[15] Jiashun Jin, Zheng Tracy Ke, Jiajun Tang, and Jingming Wang. Network goodness-of-fit for the block-model family. Journal of the American Statistical Association, pages 1–27, 2025.

[16] Weihao Kong and Gregory Valiant. Spectrum estimation from samples. The Annals of Statistics, 45(6):2218–2247, 2017.

[17] Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet, Amaury Hayat, Gabriel Ebner, Aurélien Rodriguez, and Timothée Lacroix. Hypertree proof search for neural theorem proving. In Advances in Neural Information Processing Systems 35, 2022.

[18] Jun Li and Song Xi Chen.
Two sample tests for high-dimensional covariance matrices. The Annals of Statistics, 40(2):908–940, 2012.

[19] Nazanin Movarraei and Samina Boxwala. On the number of cycles in a graph. Open Journal of Discrete Mathematics, 6:41–49, 03 2016.

[20] Sergey Perepechko and Anton Voropaev. The number of fixed length cycles in an undirected graph. Explicit formulae in case of small lengths. Mathematical Modeling and Computational Physics (MMCP2009), 148, 2009.

[21] Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming Wang, Omar Fawzi, Pushmeet Kohli, and Alhussein Fawzi. Mathematical discoveries from program search with large language models. Nature, 625:468–475, 2024.

[22] Richard P. Stanley. Enumerative Combinatorics. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2 edition, 2011.

[23] Trieu H. Trinh et al. Solving olympiad geometry without human demonstrations. Nature, 625(7995):476–482, 2024.

[24] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc., 2022.

[25] Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. miniF2F: A cross-system benchmark for formal olympiad-level mathematics. In International Conference on Learning Representations, 2022.
Are Large Language Models Reliable AI Scientists? Assessing Reverse-Engineering of Black-Box Systems

Jiayi Geng∗ (Department of Computer Science, Princeton University, jiayig@princeton.edu)
Howard Chen∗ (Department of Computer Science, Princeton University, howardchen@cs.princeton.edu)
Dilip Arumugam (Department of Computer Science, Princeton University)
Thomas L. Griffiths (Department of Computer Science and Department of Psychology, Princeton University)

Abstract

Using AI to create autonomous researchers has the potential to accelerate scientific discovery. A prerequisite for this vision is understanding how well an AI model can identify the underlying structure of a black-box system from its behavior. In this paper, we explore how well a large language model (LLM) learns to identify a black-box function from passively observed versus actively collected data. We investigate the reverse-engineering capabilities of LLMs across three distinct types of black-box systems, each chosen to represent different problem domains where future autonomous AI researchers may have considerable impact: programs, formal languages, and math equations. Through extensive experiments, we show that LLMs fail to extract information from observations, reaching a performance plateau that falls short of the ideal of Bayesian inference. However, we demonstrate that prompting LLMs to not only observe but also intervene—actively querying the black-box with specific inputs to observe the resulting output—improves performance by allowing LLMs to test edge cases and refine their beliefs. By providing the intervention data from one LLM to another, we show that this improvement is partly a result of engaging in the process of generating effective interventions, paralleling results in the literature on human learning.
Further analysis reveals that engaging in intervention can help LLMs escape from two common failure modes: overcomplication, where the LLM falsely assumes prior knowledge about the black box, and overlooking, where the LLM fails to incorporate observations. These insights provide practical guidance for helping LLMs more effectively reverse-engineer black-box systems, supporting their use in making new discoveries. Code is available at https://github.com/JiayiGeng/reverse-engineering.

∗Equal contribution. Preprint. Under review. arXiv:2505.17968v1 [cs.LG] 23 May 2025.

1 Introduction

Developing intelligent systems to accelerate scientific discovery has been a long-standing goal of artificial intelligence research [24,75]. Despite rapid progress in creating large language models (LLMs) for understanding text and solving problems such as math and coding, automating scientific processes poses a different kind of challenge. A core aspect of scientific discovery is reverse-engineering the
underlying mechanism behind a black-box system, which requires capabilities beyond responding to a one-off query. In particular, reverse-engineering often involves 1) understanding a collection of observed data in order to develop hypotheses, 2) designing experiments to actively acquire informative data from the black-box to test those hypotheses, and 3) describing and communicating the results.

Figure 1: Reverse-engineering. Left: Defining the problem. The AI scientist will obtain either passive observations from the black box or collect data through active intervention to construct a hypothesis. Right (top): with only passive observations, the LLM cannot make effective use of the data and lags behind Bayesian inference by a large margin; allowing the LLM to intervene improves performance. Right (bottom): effective intervention can mitigate two common failure modes: overcomplication and overlooking.

Existing work using LLMs for automating scientific processes either focuses on static observational data [57,62] or emulates scientific workflows using "LLM scientists" with many moving parts [23,60]. In contrast, research in related fields has used carefully controlled tasks to evaluate whether machine learning systems can perform key aspects of reverse-engineering, including inductive reasoning [58], learning causal features from passive data [35], and optimal experimental design [12,21]. This work is often informed by research in cognitive science, which has studied how humans engage in active learning using methods in which the source (i.e. passive observation or active experimentation) and content of data can be differentiated [46,47]. However, such controlled methodologies have not yet been applied to evaluating state-of-the-art LLMs, leaving fundamental questions unanswered: "How well can LLMs make inferences from passive observations?" and "Can they actively collect data to refine their hypotheses?"
To answer these questions, we systematically study LLMs on three reverse-engineering tasks inspired by the cognitive-science literature and selected to mimic challenges that arise in scientific settings: reconstructing list-mapping programs [58], formal languages [48], and math equations [21]. Through extensive experiments, we show that LLMs are limited in their ability to make inferences from observations, leading to performance plateaus when compared to Bayesian models. However, allowing LLMs to perform interventions—generating test cases or queries to collect new, informative data—can significantly improve their performance. Through further experiments in which the results of the interventions conducted by one LLM become observational data for another, we show that the benefits of intervention seem to come from the LLM testing and refining its own beliefs rather than simply collecting higher-quality data. This is similar to a phenomenon observed in human learning, where people show limited benefit from interventions generated by others [46,47]. Further investigation reveals that generating interventions seems to help LLMs overcome two failure modes: 1) overcomplication, where the LLM tends to construct overly complex hypotheses, and 2) overlooking, where the LLM neglects observations or draws overly generic conclusions without careful checking. Our contributions are as follows:

• Drawing inspiration from controlled studies of human cognition, we formalize reverse-engineering as a core problem for assessing the scientific discovery capabilities of LLMs and design three black-box tasks that can be used in such assessment.

• We demonstrate empirically that frontier LLMs still struggle, relative to Bayesian inference, at reverse-engineering these black boxes when provided with only passive observations.
• We show that LLMs can perform interventions to obtain more informative data, and that effective intervention mitigates the failure modes of overcomplication and overlooking.

• We show that performance degrades when repurposing the LLM's intervention data as observations, pinpointing the mechanism behind the improvements it produces and highlighting a potential pitfall for exchanging knowledge among LLMs.

2 Related Work

Inductive Inference. Some of the earliest work on reverse-engineering appears under the label of inductive inference for "hypothesizing a general rule from examples" [3]. Classic instances of this problem include work on identifying the underlying structure of a finite-state automaton through observations of its input-output behavior [55,56]. While this problem typically considers passive observations, seminal work on active learning focuses on analyzing the benefits of actively querying inputs to solicit the most-informative outputs from the unknown function of interest [40,2,61]. The key distinction between these seminal works and ours is the attention towards LLMs and assessing their capacity for successfully identifying different types of black boxes from input-output examples.

Bayesian Optimal Experiment Design. An adjacent line of work considers the sequential design of experiments which maximally yield information gain about an unknown parameter of interest [39,17,12,21]; one may interpret these methods as studying a non-LLM-focused, Bayesian analogue of the reverse-engineering problem we formulate in the subsequent section, where a learner begins with a prior distribution over the black box in question and must maximally reduce epistemic uncertainty [19] with a given budget of experiments. To the extent that LLMs may implicitly engage with an underlying approximate posterior inference scheme [78,28,82,20,49], the reverse-engineering capabilities studied in this work can be tied to this Bayesian optimal experiment design problem.
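For a finite hypothesis class and a deterministic black box, the Bayesian analogue just described has a particularly simple form, which the following sketch illustrates (the function and variable names are ours, not from the paper): the posterior is the prior restricted to hypotheses consistent with the observations, and a greedy experiment-design step picks the query whose predicted output is most uncertain, which maximizes expected information gain.

```python
import math

def bayesian_posterior(hypotheses, prior, observations):
    # For a deterministic black box, the likelihood of (x, y) under
    # hypothesis h is 1 if h(x) == y and 0 otherwise, so the posterior
    # is the prior renormalized over the consistent hypotheses.
    weights = [p if all(h(x) == y for x, y in observations) else 0.0
               for h, p in zip(hypotheses, prior)]
    z = sum(weights)
    return [w / z for w in weights] if z > 0 else weights

def info_gain_query(hypotheses, posterior, candidate_xs):
    # Greedy Bayesian experiment design: for deterministic hypotheses,
    # the expected information gain of querying x equals the entropy of
    # the predictive distribution over outputs, so pick the candidate
    # query that maximizes it.
    def predictive_entropy(x):
        dist = {}
        for h, p in zip(hypotheses, posterior):
            if p > 0:
                dist[h(x)] = dist.get(h(x), 0.0) + p
        return -sum(p * math.log(p) for p in dist.values())
    return max(candidate_xs, key=predictive_entropy)
```

For instance, with the toy class {x+1, 2x, x²} under a uniform prior, the observations (2, 4) and (3, 6) leave only 2x consistent, and a query where the surviving hypotheses disagree most is the most informative next intervention.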
Reinforcement Learning. The fundamentals of the reverse-engineering problem also connect with various ideas studied in the context of reinforcement learning (RL) [71]. Any model-based RL agent [69,70,9,66] naturally engages with a particular instance of the reverse-engineering problem where the black-box function in question is the transition function and/or reward function of a Markov Decision Process (MDP) [6,53]. The distinction explored in this work between an LLM that passively observes versus actively intervenes on the black box in question has a direct connection to the exploration challenge in RL, which has a profound impact on an agent's ability to recover an accurate model of the world [73,18,67,52]; while recent work [5] has studied how to improve exploration with LLMs, this paper focuses on assessing the innate capabilities of LLMs to actively query informative data. The KWIK learning framework of Li et al. [38] provides a theoretical analysis for reverse-engineering an MDP transition function when a learner must either confidently estimate the environment dynamics or say "I don't know" [74,37,59,72,1]. Finally, there is a connection between intervention for effective reverse-engineering and meta-RL [41], with recent work showing that passive learning can be effective with LLMs once there is an effective exploration strategy capable of yielding high-quality observations [35]; naturally, the latter problem is precisely what we demonstrate interventions allow LLMs to solve for
themselves in reverse-engineering tasks.

LLMs for Automating the Scientific Process. With the rapid advances in LLMs, recent work has explored using them to automate different parts of the scientific process, such as ideation [63], assistance [26], writing research papers [44,65], or emulating AI scientists in simulated environments [60]. Additionally, multi-modal and multi-agent AI models have driven significant progress in applications such as protein science [51], while frameworks like MatPolit [50] integrate human cognitive insights to accelerate discoveries in materials science. These works utilize the abundant knowledge stored in LLMs to directly tackle real-world complexity in science [54]. However, the complexity of these settings and the resulting agents makes it hard to disentangle the consequences of all the engineering choices that go into these systems. Our work instead focuses on using simple and controllable black boxes to study the core capabilities of the LLMs themselves.

Understanding Failure Modes in LLMs. Recently, many works have examined the failure modes of formal reasoning in LLMs. It has been observed that LLMs can exhibit failure modes of both "overthinking" [13] and "underthinking" [76] when tackling mathematical problems and code generation [30,64,16,68,11]. To understand LLM abilities beyond formal reasoning tasks, recent work has leveraged insights and datasets from cognitive science [22,7,15,80]. In particular, researchers have started to use cognitive science to explore failed behaviors in LLMs [33]. Using these methods, researchers have found that LLMs sometimes overestimate human rationality [42], exhibit inconsistencies in probability judgments [83], and perform worse as a result of engaging in reasoning [43]. In a similar vein, our work draws upon research from cognitive science to design the black boxes used in our reverse-engineering experiments.
3 Reverse Engineering 3.1 Problem Formulation We define a black box f* : X → Y as a deterministic function that maps a query x ∈ X to a response y ∈ Y through its internal dynamics. The reverse-engineering problem is for a model to infer the internals of a black box f* (e.g., list-mapping programs, production rules of formal languages, and math equations) from a sequence of query-response pairs O = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} ⊂ X × Y (Figure 1). We consider two cases of the reverse-engineering problem: observation-only and observation-intervention. In the observation-only scenario, the queries are sampled from a uniform distribution over X and the corresponding responses y_i = f*(x_i) are generated by the black box to construct the observation set. A large language model M must generate a hypothesis f = M(O) without further interaction with the black box. This setting assesses the model's ability to perform inductive reasoning [3]. In the observation-intervention scenario, the LLM is first given a set of observations O obtained as in the observation-only scenario and is then instructed to interact with the black box in a multi-round fashion. In each round, the LLM chooses one of the following actions: 1) construct a new query x_{N+1} to query the black box and obtain the response y_{N+1}, 2) construct a new query-response pair (x_{N+1}, y'_{N+1}) and check its validity using the black box
(1[y'_{N+1} = f*(x_{N+1})]), or 3) stop and conclude with a hypothesis f about the black box. Before constructing the new query, the LLM can analyze the current observations with strategies such as verbalizing its current belief or describing the current hypothesis (§5.2). Until the LLM chooses to stop or reaches the maximal number of rounds, the query-response pairs obtained during intervention are appended to O for the next round. 3.2 Black-Box Types Drawing on the literature on inductive inference in cognitive science, we select tasks commonly used to study the learning of complex relationships to design our black-box systems and scale them up for evaluation with LLMs. These three distinct black-box function classes – Program, Formal Language, and Mathematical Equation – simulate problems encountered in scientific reverse-engineering scenarios. Due to space constraints, detailed black-box designs are relegated to Appendix A. Program. We use list-mapping programs [58] for the Program black box. Each program implements a lambda expression (e.g., (lambda (singleton (third $0)))) in Python, where the query is a list of integers and the response is an integer. Formal Language. The Formal Language black box is defined by a simple program that generates sequences of symbols. For example, the language A^n B^n generates sequences consisting of some number of As followed by the same number of Bs. The black box allows the LLM to intervene by validating whether a string is allowable under the rule. We define 46 distinct black boxes, each based on a language from Yang & Piantadosi [79] or McCoy & Griffiths [48]. Math Equation. We use the Constant Elasticity of Substitution (CES) formulation from economics [21] as the Math Equation black box. The utility U = (Σ_i a_i x_i^r)^(1/r) is given by the weights a_i, the ratio r, and the quantities of each kind of goods x_i. The LLM can query the black box with two lists of item types with quantities and obtain a response indicating which list has higher utility.
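The three black-box types above can be sketched as minimal Python functions. These are illustrative reconstructions based solely on the descriptions in this section (not the paper's actual implementations): a list-mapping program, an A^n B^n membership test, and a CES comparison that, as described, only reveals which of two bundles has higher utility.

```python
# Illustrative sketches of the three black-box types (not the paper's code).

# Program: a list-mapping lambda such as (lambda (singleton (third $0))),
# i.e., wrap the third element of the input list in a singleton list.
def program_box(xs):
    return [xs[2]]

# Formal Language: membership test for A^n B^n
# (a run of As followed by an equally long run of Bs).
def anbn_box(s):
    n = len(s) // 2
    return len(s) >= 2 and s == "A" * n + "B" * n

# Math Equation: CES utility U = (sum_i a_i * x_i^r)^(1/r); the black box
# reports only which of two bundles has higher utility, not U itself.
def ces_box(a, r, bundle1, bundle2):
    u = lambda xs: sum(ai * xi**r for ai, xi in zip(a, xs)) ** (1.0 / r)
    return 1 if u(bundle1) > u(bundle2) else 2
```

In each case the reverse-engineer only sees query-response pairs; the function bodies above are exactly the "internals" a hypothesis must recover.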
3.3 Evaluation Protocol A black box can be represented in multiple ways, rendering evaluation challenging. For example, two black boxes can be compared through their descriptions in natural language (descriptive evaluation) or through whether they respond similarly to the same queries (functional evaluation; see §H). In this paper we focus on descriptive evaluation, where the black box f*_NL is expressed in natural language, due to its communicative nature and real-world use [14, 23]. The LLM-generated hypothesis f_NL is scored by an LLM judge against the black box on a 0–10 scale based on the criteria of each black-box type (score = LM-Judge(f_NL, f*_NL)). We use descriptive evaluation for Program and Formal Language. As the Math Equation does not require verbalization beyond the weights and ratio, we report the flipped root mean square error (1 − RMSE) between the inferred parameters and the ground truth. 4 Experiments Experimental setup. We use different versions of GPT-4o [31] for reverse-engineering (gpt-4o-2024-08-06, dubbed the reverse-engineer LLM) and as the judge (gpt-4o-2024-05-13, dubbed the judge LLM). We use greedy decoding for both the reverse-engineer and the judge LLMs and report performance over 3 seeds. For the observation-only experiments, we report performance
for N = {2, 5, 10, 20, 50, 100} observations. For the observation-intervention setting, the reverse-engineer LLM performs M = {5, 10, 20, 50} rounds of interventions conditioned on the initial set of 10 observations (|O| = 10). In addition to GPT-4o, we report full results for Claude-3.5-Sonnet-20241022 [4], DeepSeek-R1 [29], and Llama-3.3-70B-Instruct [27] in Appendix E. We provide the prompts for both intervention and hypothesis generation in Appendix D. We also study different evaluation approaches in Appendix H. 4.1 LLM Struggles to Utilize Observations Optimally Figure 2: Observation-only results across three black-box types. We compare the GPT-4o performance (blue) to Bayesian inference (green). The horizontal axis represents the number of provided (x, y) pairs. We report 1 − RMSE for Math Equation and descriptive score for Program and Formal Language. We first establish the reference performance achievable by the Bayesian model in each setting. These three settings were selected in part because they are all cases where previous work has defined inference algorithms that make it possible to approximate the posterior distribution over hypotheses as more observations become available [58, 79, 21]. As shown in Figure 2, the Bayesian models (green) consistently improve with an increased number of observations across all three tasks. On the other hand, while the LLM reverse-engineer (blue) starts off with higher performance for Program and Formal Language, potentially leveraging its prior knowledge, it peaks at 10 observations and struggles to use the extra observations, causing performance to plateau.
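For a deterministic black box, the Bayesian reference computation can be sketched exactly over a finite hypothesis set: each observation simply eliminates every inconsistent hypothesis, and the prior is renormalized over the survivors. This is a minimal enumeration sketch; the inference algorithms the paper actually uses for the three domains [58, 79, 21] are approximate and domain-specific.

```python
def posterior(hypotheses, prior, observations):
    """Exact Bayesian update over a finite hypothesis set.

    For a deterministic black box, the likelihood of hypothesis h is 1 if
    h reproduces every observed (x, y) pair and 0 otherwise, so the
    posterior just renormalizes the prior over consistent hypotheses.
    """
    weights = [p if all(h(x) == y for x, y in observations) else 0.0
               for h, p in zip(hypotheses, prior)]
    z = sum(weights)
    return [w / z for w in weights] if z > 0 else weights
```

For instance, with three candidate list-mapping hypotheses (first element, last element, list length) and the single observation ([1, 2, 3], 3), the first hypothesis is eliminated and the remaining mass splits evenly between the other two, which both explain the data.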
We also calculated repeated-measures ANOVAs [25] for each black-box type and found significant Model × number-of-datapoints interactions for Program (F(5, 10) = 51.9, p < 0.001), Formal Language (F(5, 10) = 11.8, p = 0.001), and Math Equation (F(5, 10) = 8.7, p = 0.002), showing that the Bayesian inference algorithms increasingly outperformed LLMs with additional observations. Details for the ANOVA calculations are provided in Appendix C.1. 4.2 Intervention Is Crucial for the LLM to Refine Hypotheses Figure 3: Observation-intervention results across three black-box types. Red: observations and interventions by GPT-4o. Yellow: taking the observation-intervention data collected from GPT-4o as observations for the Bayesian inference algorithms. Dashed lines: observation-only reference for GPT-4o (blue) and Bayesian inference (green). In Figure 3, we compare the performance of models with access to only the observations (dashed lines) against using the data that is actively collected through intervention (solid lines). We observe that enabling the LLM to actively intervene significantly improves performance (red) over observation-only (dashed blue). Through intervention, the LLM consistently improves as more data becomes available across all three black-box types. To assess the quality of the interventions, we provide the LLM-collected intervention data to the Bayesian model as observations, akin to the passive-yoked data studied in Markant & Gureckis [46, 47]. Our results indicate that while the interventions are beneficial to the LLM itself, they are
not universally more informative, paralleling the findings in human active learning [46, 47]. This gap was statistically significant, as shown by an ANOVA for each black-box type: Program (F(5, 10) = 23.9, p < 0.001), Formal Language (F(5, 10) = 7.9, p = 0.003), and Math Equation (F(5, 10) = 14.9, p < 0.001). 4.3 Identifying the Value of Generating the Intervention Data Figure 4: Comparing intervention-yoked results with observation-only and observation-intervention across three black-box types. The improvement in performance produced by the interventions could have two sources: it could be that the resulting data are more informative, or that the process of generating interventions itself helps the model. To tease these apart, we adopt the passive-yoked design that Markant & Gureckis [46, 47] used to study human learning, where the data generated via active learning by one group of participants is presented to another group of participants as passive observations. In Figure 4, we compare GPT-4o across three conditions: observation-only (blue), observation-intervention (red), and intervention-yoked (purple), where GPT-4o only passively observes the interventional data without the verbalization and analysis that are used to construct such data. Results consistently show that the intervention-yoked setting leads to lower performance compared to the observation-intervention setting across all three black-box types. This shows that active learning is more beneficial than passive-yoked learning in part because it allows the LLM to dynamically refine its hypothesis in response to its own interventions.
5 Analysis 5.1 Escaping the Failure Modes: Overcomplication & Overlooking Figure 5: Descriptive scores for five different complexity levels, averaged across three seeds for each of the three black-box types. To understand how intervention improves LLM performance, we analyze common failures by sampling 20 failed examples (scoring below 2 out of 10 points) from the observation-only experiment, which were inspected by human experts. We provide more details in Appendix F.1. We identify two major failure modes: 1) overcomplication, where the LLM excessively interprets the data, resulting in unnecessarily complex hypotheses, and 2) overlooking, where the LLM inadequately leverages the available information, leading to poorly reasoned hypotheses. We classified 20 randomly sampled examples for each black box into the two failure modes or "Not Applicable" by human annotation. Results show that for Program the failures are predominantly from overcomplication (17 cases out of 20), whereas Math Equation contains more overlooking failures (16 cases out of 20). The failures are more evenly distributed for Formal Language, with 8 examples classified as overcomplication, 11 examples as overlooking, and 1 example as "Not Applicable". We provide examples of these failure modes in Appendix F.2. Notably, we find that the impact of interventions on alleviating these two failure modes is contingent upon the complexity of the reverse-engineering task itself. For each of the three specific domains we study, we include a brief characterization of complexity in Appendix K. Within each domain, we observe that
the complexity of the reverse-engineering problem instance characterized by f* governs the extent to which interventions rectify failures of overcomplication and overlooking. In Figure 5, we show that performance improvements from intervention on Program diminish as task complexity increases for black-box systems dominated by the overcomplication failure mode. In contrast, actively collected data proves more beneficial when addressing challenging black-box instances dominated by the overlooking failure mode, such as Math Equation. For Formal Language, where both failure modes frequently occur, we observe consistent improvements across all complexity levels. Case study. Using a Formal Language intervention example, Figure 6 demonstrates how an LLM progressively updates its hypothesis through active interventions to ultimately reverse-engineer the underlying mechanism of a black-box system, with GPT-4o strategically designing subsequent queries to validate its current belief about the system. In contrast, under the observation-only scenario, the model remains trapped in identifying spurious patterns from passively observed data and lacks a meaningful way to assess its own uncertainty. Through active interventions, the LLM iteratively tests and revises its hypotheses after encountering failures, gradually reducing uncertainty and converging toward an accurate understanding of the black-box mechanism. 5.2 Intervention Strategies Similar to how LLMs use chain-of-thought reasoning [77] to solve complex tasks, we allow the LLM to verbalize its hypotheses and analyze the observations before constructing the query.
We investigate how different reasoning strategies impact the effectiveness of intervention.

Figure 6: Case study example. GPT-4o updates its hypothesis using intervention on a Formal Language black box (rule: A^n B^n C^(2n)). Yellow: GPT-4o verbalizes the hypothesis based on the passive observations in round N and updates the hypothesis in round N + 1. Red: constructing test cases. Teal: black-box response.

We compare four strategies: 1) Intervention: no reasoning before constructing the query, 2) Descriptive Intervention: describing the current hypothesis about the black box, 3) Functional Intervention: verbalizing the black-box implementation as a Python program [36, 45], and 4) Analyze-then-Query: allowing the LLM to analyze data and verbalize a hypothesis freely. Throughout our experiments, we allow the LLM to reason once every five queries.

Table 1: Comparison of the four intervention strategies.

Black Box        | Intervention | Descriptive Intervention | Functional Intervention | Analyze-then-Query
Program          | 43.4         | 47.6                     | 19.2                    | 50.8
Formal Language  | 24.1         | 28.6                     | 22.8                    | 34.7
Math Equation    | 34.8         | 38.8                     | 39.9                    | 38.0

As shown in Table 1, allowing the LLM to reason generally improves the effectiveness of intervention regardless of the strategy. However, the results also suggest that the most effective intervention typically requires the LLM to carefully analyze past observations and strategically plan subsequent steps to acquire more informative data from the black box. Interestingly, while structured reasoning in functional intervention [36, 45] is known to improve performance in formal reasoning tasks, it does not produce additional improvement in the context of reverse-engineering. This suggests that the LLM's reverse-engineering abilities may differ from its formal reasoning capabilities.
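The rule in the Figure 6 case study (A^n B^n C^(2n)) can be sketched as a simple membership test. This is an illustrative reconstruction consistent with the black-box responses shown in the figure, not the paper's implementation:

```python
import re

def anbnc2n_box(s: str) -> bool:
    # Membership test for A^n B^n C^(2n): a run of As, an equally long
    # run of Bs, then exactly twice as many Cs (illustrative sketch).
    m = re.fullmatch(r"(A+)(B+)(C+)", s)
    if not m:
        return False
    a, b, c = (len(g) for g in m.groups())
    return a == b and c == 2 * a
```

This reproduces the responses in Figure 6: "AABBCCCC" and "ABCC" can be generated, while "AABBBCCCC" and "ABCCCC" cannot.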
5.3 Transferring to Another LLM Figure 7: Intervention data transfer results. Red: Llama-3.3-70B-Instruct performing intervention. Blue: Llama-3.3-70B-Instruct using observations only. Purple: using interventional data from GPT-4o as observations for Llama-3.3-70B-Instruct. We also examine whether interventional data actively collected by one LLM (GPT-4o) can effectively transfer to and benefit another LLM (Llama-3.3-70B-Instruct). This is relevant to whether AI scientists can transfer their experiments and findings successfully to another AI scientist. Adopting a similar passive-yoked design, we compare three scenarios for Llama-3.3-70B-Instruct [27]: observation-only, observation-intervention, and intervention-transfer, where the interventional data is collected by GPT-4o. As shown in Figure 7, the intervention-transfer scenario achieves performance comparable to or slightly better than the observation-only baseline but consistently underperforms Llama's own intervention (observation-intervention). This suggests that while the intervention data from GPT-4o is informative, its effectiveness diminishes when transferred to a different LLM, showing that the benefit from intervention is model-specific. 6 Limitations and Future Directions We have discussed in this paper the LLM's inabilities and failure modes in reverse-engineering black boxes. However, the three black-box types we studied represent only a narrow slice of possible tasks, even within controlled settings. A more comprehensive assessment will require expanding and scaling up the evaluation suite to probe LLMs' reverse-engineering abilities across a broader spectrum of scenarios.
In addition, we have assumed idealized, noise-free black boxes and fully trustworthy data, a condition that is rarely met in real scientific practice. An important next step is to relax this assumption and rigorously
test LLM robustness in the presence of noise and uncertainty. As our paper discusses the failure modes of LLMs extensively, we leave open the question "How can we train LLMs to become effective reverse engineers?", which includes enhancing the LLM's ability to perform correct inference from passive observations and to conduct optimal experiments. In particular, what kinds of data and algorithms are needed to train such a model (e.g., reinforcement learning using black-box environments), and can improvements in one domain generalize to broader scientific automation tasks? Finally, we have demonstrated that data actively acquired by one LLM may not be useful to another LLM, pointing to the issue of transferability of experiences. This is important for automating scientific discovery, as many major scientific advances have relied on effective collaborations. Understanding and quantifying the impact of this limited transferability of knowledge may be crucial as multi-agent systems become prevalent, and it will be essential to design such systems with effective communication baked in. 7 Conclusion In this paper, we identified and formalized the reverse-engineering problem as a core ability and prerequisite for building reliable AI scientists. We showed that current LLMs still struggle to effectively leverage passive observations even on seemingly simple and controlled black boxes. Allowing LLMs to actively collect intervention data improves performance, but still falls short of closing the gap with Bayesian inference, casting doubt on the promise of truly reliable AI scientists. Through extensive analysis, we identified issues such as overcomplication and overlooking and illustrated how intervention can mitigate such failures. Despite the effectiveness of intervention, our analysis revealed that the intervention data collected by LLMs were primarily beneficial to the models themselves, rather than being objectively informative or transferable to other models.
Altogether, our paper directly assesses the ability of LLMs to infer underlying causal structures and mechanisms through controlled reverse-engineering experiments. This capacity mirrors the fundamental scientific discovery process, which relies heavily on identifying hidden relationships and principles behind observed phenomena. Consequently, if an LLM cannot reliably reverse-engineer even simple or controlled systems, this raises concerns regarding its dependability in addressing more complex and ambiguous scientific challenges. Evaluating an LLM's reverse-engineering ability provides a concrete and principled way to assess its capacity for scientific reasoning, helping us understand whether such models possess the foundational skills required to function as dependable AI scientists. Acknowledgments Our experiments were supported by Azure credits from a Microsoft AFMR grant. This work was supported by ONR MURI N00014-24-1-2748. Special thanks to Danqi Chen, Ryan Liu, Alexander Wettig, and Lucy He for providing invaluable feedback for this project. References [1] Jacob Abernethy, Kareem Amin, Michael Kearns, and Moez Draief. Large-Scale Bandit Problems and KWIK Learning. In International Conference on Machine Learning, pp. 588–596, 2013. [2] Dana Angluin. Queries and Concept Learning. Machine Learning, 2:319–342, 1988. [3] Dana Angluin and Carl H Smith. Inductive Inference: Theory and Methods. ACM Computing Surveys (CSUR), 15(3):237–269, 1983. [4] Anthropic. Claude 3.5 Sonnet. https://www.anthropic.com/news/claude-3-5-sonnet, 2024. [5] Dilip Arumugam and Thomas L Griffiths. Toward Efficient Exploration by Large Language Model Agents. arXiv
preprint arXiv:2504.20997, 2025. [6] Richard Bellman. A Markovian Decision Process. Journal of Mathematics and Mechanics, pp. 679–684, 1957. [7] Marcel Binz and Eric Schulz. Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6):e2218523120, 2023. [8] David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017. [9] Ronen I Brafman and Moshe Tennenholtz. R-MAX – A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning. Journal of Machine Learning Research, 3:213–231, 2002. [10] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. pp. 1877–1901, 2020. [11] Mert Cemri, Melissa Z Pan, Shuyi Yang, Lakshya A Agrawal, Bhavya Chopra, Rishabh Tiwari, Kurt Keutzer, Aditya Parameswaran, Dan Klein, Kannan Ramchandran, et al. Why do multi-agent LLM systems fail? arXiv preprint arXiv:2503.13657, 2025. [12] Kathryn Chaloner and Isabella Verdinelli. Bayesian Experimental Design: A Review. Statistical Science, pp. 273–304, 1995. [13] Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, et al. Do Not Think that Much for 2+3=? On the Overthinking of o1-like LLMs. arXiv preprint arXiv:2412.21187, 2024. [14] Sahil Chopra, Michael Henry Tessler, and Noah D Goodman. The first crank of the cultural ratchet: Learning and transmitting concepts through language.
In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 41, 2019. [15] Julian Coda-Forno, Marcel Binz, Jane X Wang, and Eric Schulz. CogBench: A Large Language Model Walks into A Psychology Lab. arXiv preprint arXiv:2402.18225, 2024. [16] Alejandro Cuadron, Dacheng Li, Wenjie Ma, Xingyao Wang, Yichuan Wang, Siyuan Zhuang, Shu Liu, Luis Gaspar Schroeder, Tian Xia, Huanzhi Mao, et al. The danger of overthinking: Examining the reasoning-action dilemma in agentic tasks. arXiv preprint arXiv:2502.08235, 2025. [17] MH DeGroot. Uncertainty, Information, and Sequential Experiments. The Annals of Mathematical Statistics, 33(2):404–419, 1962. [18] Marc Deisenroth and Carl E Rasmussen. PILCO: A Model-Based and Data-Efficient Approach to Policy Search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465–472, 2011. [19] Armen Der Kiureghian and Ove Ditlevsen. Aleatory or Epistemic? Does it Matter? Structural Safety, 31(2):105–112, 2009. [20] Fabian Falck, Ziyu Wang, and Chris Holmes. Is in-context learning in large language models Bayesian? A martingale perspective. arXiv preprint arXiv:2406.00793, 2024. [21] Adam Foster, Martin Jankowiak, Elias Bingham, Paul Horsfall, Yee Whye Teh, Thomas Rainforth, and Noah Goodman. Variational Bayesian optimal experimental design. Advances in Neural Information Processing Systems, 32, 2019. [22] Michael C Frank. Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology, 2(8):451–452, 2023. [23] Kanishk Gandhi, Michael Y Li, Lyle Goodyear,
Louise Li, Aditi Bhaskar, Mohammed Zaman, and Noah D Goodman. BoxingGym: Benchmarking progress in automated experimental design and model discovery. arXiv preprint arXiv:2501.01540 , 2025. [24] Yolanda Gil, Mark Greaves, James Hendler, and Haym Hirsh. Amplify scientific discovery with artificial intelligence. Science , 346(6206):171–172, 2014. [25] Ellen R Girden. ANOVA: Repeated measures . Number 84. Sage, 1992. [26] Juraj Gottweis, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky, Felix Weissenberger, Keran Rong, Ryutaro Tanno, Khaled Saab, Dan Popovici, Jacob Blum, Fan Zhang, Katherine Chou, Avinatan Hassidim, Burak Gokturk, Amin Vahdat, Pushmeet Kohli, Yossi Matias, Andrew Carroll, Kavita Kulkarni, Nenad Tomasev, Yuan Guan, Vikram Dhillon, Eeshit Dhaval Vaishnav, Byron Lee, Tiago R D Costa, José R Penadés, Gary Peltz, Yunhan Xu, Annalisa Pawlosky, Alan Karthikesalingam, and Vivek Natarajan. Towards an AI co-scientist. arXiv preprint arXiv:2502.18864 , 2025. 
[27] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Govind Thattai, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan Misra, Ivan Evtimov, Jack Zhang, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Karthik Prasad, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Kushal Lakhotia, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, Maria Tsimpoukelli, Mathew
Oldham, Mathieu Rita, Maya Pavlova, Melanie Kambadur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Nikolay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Ning Zhang, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohan Maheswari, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sahana Chennabasappa, Sanjay Singh,
Sean Bell, Seohyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sharan Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Vandenhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Sydney Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Virginie Do, Vish Vogeti, Vítor Albiero, Vladan Petrovic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whitney Meers, Xavier Martinet, Xiaodong Wang, Xiaofang Wang, Xiaoqing Ellen Tan, Xide Xia, Xinfeng Xie, Xuchao Jia, Xuewei Wang, Yaelle Goldschlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, and Zoe Papakipos. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783. Accessed: 2025-05-14. [28] Thomas L Griffiths, Jian-Qiao Zhu, Erin Grant, and R Thomas McCoy. Bayes in the Age of Intelligent Machines. Current Directions in Psychological Science, 33(5):283–291, 2024. [29] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. [30] Yancheng He, Shilong Li, Jiaheng Liu, Weixun Wang, Xingyuan Bu, Ge Zhang, Zhongyuan Peng, Zhaoxiang Zhang, Wenbo Su, and Bo Zheng. Can large language models detect errors in long chain-of-thought reasoning? arXiv preprint arXiv:2502.19361, 2025. [31] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024. [32] Katie Kang, Amrith Setlur, Dibya Ghosh, Jacob Steinhardt, Claire Tomlin, Sergey Levine, and Aviral Kumar.
What do learning dynamics reveal about generalization in llm reasoning? arXiv preprint arXiv:2411.07681 , 2024. [33] Alexander Ku, Declan Campbell, Xuechunzi Bai, Jiayi Geng, Ryan Liu, Raja Marjieh, R Thomas McCoy, Andrew Nam, Ilia Sucholutsky, Veniamin Veselovsky, et al. Using the tools of cognitive science to understand large language models at different levels of analysis. arXiv preprint arXiv:2503.13401 , 2025. [34] Salvatore La Torre, Parthasarathy Madhusudan, and Gennaro Parlato. A robust class of context- sensitive languages. In 22nd Annual IEEE Symposium on Logic in Computer Science (LICS 2007) , pp. 161–170, 2007. [35] Andrew Kyle Lampinen, Stephanie C Y Chan, Ishita Dasgupta, Andrew J Nam, and Jane X Wang. Passive learning of active causal strategies in agents and language models. Advances in Neural Information Processing Systems , 2023. [36] Jia Li, Ge Li, Yongmin Li, and Zhi Jin. Structured chain-of-thought prompting for code generation. ACM Transactions on Software Engineering and Methodology , 34(2):1–23, 2025. [37] Lihong Li and Michael L Littman. Reducing Reinforcement Learning to KWIK Online Regression. Annals of Mathematics and Artificial Intelligence , 58:217–237, 2010. [38] Lihong Li, Michael L Littman, and Thomas J Walsh. Knows What It Knows: A Framework for Self-Aware Learning. In Proceedings of the 25th International Conference on Machine Learning , pp. 568–575, 2008. [39] Dennis V Lindley. On a measure of the information provided by an experiment. The Annals of Mathematical | https://arxiv.org/abs/2505.17968v1 |
Statistics , 27(4):986–1005, 1956. [40] Nick Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning , 2:285–318, 1988. [41] Evan Z Liu, Aditi Raghunathan, Percy Liang, and Chelsea Finn. Decoupling Exploration and Exploitation for Meta-Reinforcement Learning Without Sacrifices. In International Conference on Machine Learning , pp. 6925–6935, 2021. [42] Ryan Liu, Jiayi Geng, Joshua C Peterson, Ilia Sucholutsky, and Thomas L Griffiths. Large language models assume people are more rational than we really are. arXiv preprint arXiv:2406.17055 , 2024. [43] Ryan Liu, Jiayi Geng, Addison J Wu, Ilia Sucholutsky, Tania Lombrozo, and Thomas L Griffiths. Mind your step (by step): Chain-of-thought can reduce performance on tasks where thinking makes humans worse. arXiv preprint arXiv:2410.21333 , 2024. [44] Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, and David Ha. The ai scientist: Towards fully automated open-ended scientific discovery. arXiv preprint arXiv:2408.06292 , 2024. [45] Yijia Luo, Yulin Song, Xingyao Zhang, Jiaheng Liu, Weixun Wang, GengRu Chen, Wenbo Su, and Bo Zheng. Deconstructing long chain-of-thought: A structured reasoning optimization framework for long cot distillation. arXiv preprint arXiv:2503.16385 , 2025. [46] Doug Markant and Todd Gureckis. Category learning through active sampling. In Proceedings of the Annual Meeting of the Cognitive Science Society , volume 32, 2010. [47] Douglas B Markant and Todd M Gureckis. Is it better to select or to receive? learning via active and passive hypothesis testing. Journal of Experimental Psychology: General , 143(1):94, 2014. [48] R Thomas McCoy and Thomas L Griffiths. Modeling rapid language learning by distilling bayesian priors into artificial neural networks. arXiv preprint arXiv:2305.14701 , 2023. [49] R Thomas McCoy, Shunyu Yao, Dan Friedman, Mathew D Hardy, and Thomas L Griffiths.
Embers of autoregression show how large language models are shaped by the problem they are trained to solve. Proceedings of the National Academy of Sciences , 121(41):e2322420121, 2024. [50] Ziqi Ni, Yahao Li, Kaijia Hu, Kunyuan Han, Ming Xu, Xingyu Chen, Fengqi Liu, Yicong Ye, and Shuxin Bai. Matpilot: an llm-enabled ai materials scientist under the framework of human-machine collaboration. arXiv preprint arXiv:2411.08063 , 2024. [51] Charles O’Neill, Tirthankar Ghosal, Roberta Răileanu, Mike Walmsley, Thang Bui, Kevin Schawinski, and Ioana Ciucă. Sparks of science: Hypothesis generation using structured paper data. arXiv preprint arXiv:2504.12976 , 2025. [52] Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) Efficient Reinforcement Learning via Posterior Sampling. Advances in Neural Information Processing Systems , 26:3003–3011, 2013. [53] Martin L. Puterman. Markov Decision Processes—Discrete Stochastic Dynamic Programming . John Wiley & Sons, 1994. [54] Chandan K Reddy and Parshin Shojaee. Towards scientific discovery with generative ai: Progress, opportunities, and challenges. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 39, pp. 28601–28609, 2025. [55] Ronald L Rivest and Robert E Schapire. Diversity-Based Inference of Finite Automata. In 28th Annual Symposium on Foundations of Computer Science (SFCS 1987) , pp. 78–87, 1987. [56] Ronald L Rivest and Robert E Schapire. Inference of Finite Automata Using Homing Sequences. In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing , pp. 411–420, 1989.
[57] Milena Rmus, Akshay K. Jagadish, Marvin Mathony, Tobias Ludwig, and Eric Schulz. Towards automation of cognitive modeling using large language models. arXiv preprint arXiv:2502.00879 , 2025. [58] Joshua S Rule, Steven T Piantadosi, Andrew Cropper, Kevin Ellis, Maxwell Nye, and Joshua B Tenenbaum. Symbolic metaprogram search improves learning efficiency and explains rule learning in humans. Nature Communications , 15(1):6847, 2024. [59] Amin Sayedi, Morteza Zadimoghaddam, and Avrim Blum. Trading off Mistakes and Don’t-Know Predictions. Advances in Neural Information Processing Systems , 23, 2010. [60] Samuel Schmidgall, Yusheng Su, Ze Wang, Ximeng Sun, Jialian Wu, Xiaodong Yu, Jiang Liu, Zicheng Liu, and Emad Barsoum. Agent laboratory: Using llm agents as research assistants. arXiv preprint arXiv:2501.04227 , 2025. [61] Burr Settles. Active Learning Literature Survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison, 2009. [62] Parshin Shojaee, Kazem Meidani, Shashank Gupta, Amir Barati Farimani, and Chandan K Reddy. Llm-sr: Scientific equation discovery via programming with large language models. International Conference on Learning Representations , 2025. [63] Chenglei Si, Diyi Yang, and Tatsunori Hashimoto. Can LLMs generate novel research ideas? A large-scale human study with 100+ NLP researchers. arXiv preprint arXiv:2409.04109 , 2024. [64] Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg Durrett. To CoT or not to CoT? chain-of-thought helps mainly on math and symbolic reasoning. arXiv preprint arXiv:2409.12183 , 2024. [65] Giulio Starace, Oliver Jaffe, Dane Sherburn, James Aung, Jun Shern Chan, Leon Maksin, Rachel Dias, Evan Mays, Benjamin Kinsella, Wyatt Thompson, et al. Paperbench: Evaluating ai’s ability to replicate ai research. arXiv preprint arXiv:2504.01848 , 2025. [66] Alexander L Strehl and Michael L Littman.
An Analysis of Model-based interval estimation for Markov Decision Processes. Journal of Computer and System Sciences , 74(8):1309–1331, 2008. [67] Malcolm JA Strens. A Bayesian framework for reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning , pp. 943–950, 2000. [68] Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Hanjie Chen, Xia Hu, et al. Stop overthinking: A survey on efficient reasoning for large language models. arXiv preprint arXiv:2503.16419 , 2025. [69] Richard S Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning , pp. 216–224, 1990. [70] Richard S Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM Sigart Bulletin , 2(4):160–163, 1991. [71] Richard S Sutton and Andrew G Barto. Introduction to Reinforcement Learning . MIT Press, 1998. [72] István Szita and Csaba Szepesvári. Agnostic KWIK learning and Efficient Approximate Reinforcement Learning. In Proceedings of the 24th Annual Conference on Learning Theory , pp. 739–772, 2011. [73] Sebastian B Thrun and Knut Möller. Active exploration in dynamic environments. Advances in Neural Information Processing Systems , 4, 1991. [74] Thomas J Walsh, István Szita, Carlos Diuk, and Michael L Littman. Exploring Compact Reinforcement-Learning Representations with Linear Regression. In Proceedings of the Twenty- Fifth Conference on Uncertainty in Artificial Intelligence , pp. 591–598, 2009. | https://arxiv.org/abs/2505.17968v1 |
[75] Hanchen Wang, Tianfan Fu, Yuanqi Du, Wenhao Gao, Kexin Huang, Ziming Liu, Payal Chandak, Shengchao Liu, Peter Van Katwyk, Andreea Deac, et al. Scientific Discovery in the Age of Artificial Intelligence. Nature , 620(7972):47–60, 2023. [76] Yue Wang, Qiuzhi Liu, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Linfeng Song, Dian Yu, Juntao Li, Zhuosheng Zhang, et al. Thoughts are all over the place: On the underthinking of o1-like llms. arXiv preprint arXiv:2501.18585 , 2025. [77] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems , 35:24824–24837, 2022. [78] Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An explanation of in-context learning as implicit Bayesian inference. In International Conference on Learning Representations , 2021. [79] Yuan Yang and Steven T. Piantadosi. One model for the learning of language. Proceedings of the National Academy of Sciences , 119(5):e2021865119, 2022. [80] Lance Ying, Katherine M Collins, Lionel Wong, Ilia Sucholutsky, Ryan Liu, Adrian Weller, Tianmin Shu, Thomas L Griffiths, and Joshua B Tenenbaum. On benchmarking human-like intelligence in machines. arXiv preprint arXiv:2502.20502 , 2025. [81] Cedegao E Zhang, Katherine M Collins, Lionel Wong, Mauricio Barba, Adrian Weller, and Joshua B Tenenbaum. People use fast, goal-directed simulation to reason about novel games. arXiv preprint arXiv:2407.14095 , 2024. [82] Jian-Qiao Zhu and Thomas L Griffiths. Eliciting the priors of large language models using iterated in-context learning. arXiv preprint arXiv:2406.01860 , 2024. [83] Jian-Qiao Zhu and Thomas L Griffiths. Incoherent probability judgments in large language models. arXiv preprint arXiv:2401.16646 , 2024.

A Black Box Designs

Program. We used 100 list-mapping program instances from [58] to design the Program black-box API.
Each black-box instance is represented as a symbolic program defined in a domain-specific language (DSL). We implemented an interpreter pipeline that parses DSL expressions into abstract syntax trees and compiles them into executable Python code. Each black-box supports two modes: observation (observation-only) and intervention (observation-intervention). In the observation mode, the black-box takes a random input list and returns the output produced by the underlying symbolic program, generating paired observational data:

input list → program execution → output list

In the intervention mode, the LLM queries an input or explicitly specifies an input-output pair. The black-box generates the output list, or evaluates whether the given output matches the internally computed output and provides clear feedback:

$$\text{Feedback} = \begin{cases} \text{``output $\Rightarrow$ Correct''} & \text{if the provided output matches the program output,} \\ \text{``output $\Rightarrow$ Incorrect''} & \text{otherwise.} \end{cases}$$

Formal Language. We followed [79, 48] to implement a collection of 46 formal language instances to construct our formal language black-box, each instance being capable of generating strings according to specific symbolic rules (e.g., $A^nB^n$). Each black-box instance behaves as an API from a generative model, operating in two modes: observation and intervention. In the observation mode (observation-only), the black-box randomly produces valid strings from its underlying rule, explicitly labeling each as generated output, for example: "AAAABBBB" is generated by the black-box. In the intervention mode (observation-intervention), the LLM submits
a specific string query to the black-box, which evaluates whether the string complies with its rule. The black-box responds clearly, indicating either acceptance or rejection:

$$\text{Response} = \begin{cases} \text{``[string] is generated by the black-box''} & \text{if the string complies with the rule,} \\ \text{``[string] cannot be generated by the black-box''} & \text{otherwise.} \end{cases}$$

To avoid generating infinite strings, we imposed a maximum length of 64 characters on all strings generated by the black-box.

Math Equation. For the math equation, we implemented the CES utility model as the black-box, designing it as a generative model capable of generating observational data or responding to queries from an LLM. The CES utility function is mathematically defined as:

$$U = \left( \sum_i a_i x_i^r \right)^{1/r},$$

where the weights $a_i$ satisfy the constraint $\sum_i a_i = 1$, the parameter $r$ controls the substitution elasticity, and $x_i$ represents the quantity of good $i$ in a basket. The CES black-box also provides two operational modes: observation (observation-only) and intervention (observation-intervention). In the observation mode, the black-box randomly samples two baskets (each a list of good quantities) and computes their utilities using the CES formulation. It then returns the preference outcome indicating which basket is preferred based on higher utility:

$$\text{Preference} = \begin{cases} \text{Basket1} & \text{if } U(\text{Basket1}) > U(\text{Basket2}), \\ \text{Basket2} & \text{if } U(\text{Basket1}) < U(\text{Basket2}), \\ \text{equal utility} & \text{if } U(\text{Basket1}) = U(\text{Basket2}). \end{cases}$$

In the intervention mode, an external model explicitly queries the black-box by specifying two baskets. In addition, the external model can also provide an estimated preference. The CES black-box internally evaluates the utilities of the specified baskets and returns the actual preference outcome, or feedback indicating whether the provided estimate was "correct" or "incorrect".

B Bayesian Models as the 'Optimal' Reference

We employ Bayesian models as an oracle for optimal reverse-engineering against which we may assess the capabilities of LLMs.
Unlike LLMs, Bayesian models explicitly perform probabilistic inference within a clearly defined hypothesis space, systematically updating posterior beliefs via Bayes' rule to identify the underlying mechanism that best explains observed data. Under the critical assumption that the true underlying rule resides within this hypothesis space (that is, the standard assumption of a well-specified prior), Bayesian models serve as an optimal reference standard in our experimental setting. We hypothesize that LLMs, when provided only with passive observational data, are unable to effectively utilize the available information due to their inherent reliance on prior knowledge, resulting in significantly lower performance than the Bayesian optimal standard. However, allowing LLMs to actively intervene and collect data can substantially reduce this performance gap. For each of the three black-box systems evaluated, we replicated the Bayesian models from their original studies, adapting them to closely match our experimental conditions. Specifically, we provide the Bayesian models with observed data generated by our black-box systems as an ideal reference. We also provide the Bayesian models with the data actively collected through LLM interventions to assess the informativeness of the data gathered by the LLMs. To ensure rigorous comparability, we applied identical evaluation methodologies to both the Bayesian models and the LLMs.

Program. We used the Bayesian inference approach from [58] to establish an optimal
reference for the list-mapping program black-box. Specifically, we utilized their MetaProgram Learner (MPL), which performs Bayesian inference over symbolic metaprograms that generate target programs from observed data. Given observational data $D$, consisting of input-output pairs generated by symbolic programs, the MPL computes the posterior distribution over candidate hypotheses (metaprograms) $H$ according to Bayes' rule:

$$P(H \mid D) \propto P(D \mid H) \cdot P(H).$$

The prior distribution $P(H)$ integrates two complementary sources of simplicity bias: the metaprogram prior $P_M(H)$ and the induced program prior $P_P(\widetilde{H})$. This combined prior is defined as:

$$P(H) \propto \exp\!\left( \frac{\ln P_M(H) + \ln P_P(\widetilde{H})}{2} \right),$$

where $\widetilde{H}$ denotes the program compiled from the metaprogram $H$. The likelihood $P(D \mid H)$ measures the consistency of a metaprogram $H$ with the provided observational data, incorporating a noise model to accommodate minor discrepancies between model predictions and observations.

Formal Language. We adopted the Bayesian inference approach from [79] as an optimal reference model to determine the theoretical upper bound on the learnability of formal language rules from the observations generated by our black-boxes or from the interventions queried by the LLM. Specifically, we provided strings generated by our formal language black-boxes as observational data to the Bayesian model, which then inferred the underlying symbolic grammar rules. Just as before, the Bayesian inference framework defines the posterior distribution over candidate hypotheses conditioned on observed data using Bayes' rule:

$$P(H \mid D) \propto P(D \mid H)\, P(H),$$

where $H$ represents a candidate hypothesis (grammar or probabilistic program), $D$ represents the observed string data generated by the black-box, $P(H)$ represents the prior probability reflecting initial beliefs about the simplicity and plausibility of hypotheses, and $P(D \mid H)$ denotes the likelihood of observing data $D$ given hypothesis $H$.
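The posterior update $P(H \mid D) \propto P(D \mid H)\,P(H)$ can be illustrated with a toy two-hypothesis version of the formal-language setting. This is a minimal sketch: the hypothesis space and the per-string likelihoods below are illustrative assumptions, far smaller and simpler than the grammar spaces searched by the actual Bayesian models.

```python
import math

def posterior(prior, likelihood, data):
    """P(H | D) ∝ P(D | H) P(H): multiply per-datum likelihoods, then normalize."""
    unnorm = {h: p * math.prod(likelihood[h](d) for d in data)
              for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: w / z for h, w in unnorm.items()}

# Toy hypothesis space: does the black-box generate A^n B^n strings, or
# arbitrary A/B strings?  Each likelihood is a per-string probability.
ANBN = {"AB", "AABB", "AAABBB", "AAAABBBB"}
likelihood = {
    "AnBn":  lambda s: 1 / len(ANBN) if s in ANBN else 0.0,
    "AnyAB": lambda s: 1 / 2 ** len(s),  # uniform over A/B strings of that length
}
post = posterior({"AnBn": 0.5, "AnyAB": 0.5}, likelihood, ["AABB", "AAABBB"])
```

Observing just two balanced strings already concentrates most of the posterior mass on the $A^nB^n$ hypothesis, since the broader hypothesis spreads its probability over many more strings.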
The Bayesian model uses a structured prior $P(H)$, assigning higher probabilities to simpler, more concise grammars or symbolic programs. As observational data increase, Bayesian updating systematically refines prior beliefs into posterior distributions, enhancing the probability assigned to grammars that best explain the data. Formally, each newly observed string updates the posterior, shifting probability mass toward hypotheses consistent with the cumulative dataset. By leveraging this Bayesian inference mechanism, we quantify the upper bound on the learnability of the observations, thus providing a rigorous baseline for evaluating the LLM's effectiveness in utilizing the same observational data.

Math Equation. To infer the parameters of the CES utility model from the provided observations, we followed [21] by employing a Bayesian inference approach explicitly conditioned on these observations. Bayesian inference integrates observed data with prior beliefs, updating these beliefs into posterior distributions to progressively improve parameter estimates. Initially, we specified prior distributions for the model parameters:

$$\rho \sim \mathrm{Beta}(\rho_0, \rho_1), \quad \alpha \sim \mathrm{Dirichlet}(\alpha_{\mathrm{conc}}), \quad \mathrm{slope} \sim \mathrm{LogNormal}(\mathrm{slope}_\mu, \mathrm{slope}_\sigma).$$

Given pairs of consumption bundles $(d_1, d_2)$ and the corresponding observed user preferences $y$, the Bayesian framework models these preferences probabilistically through a censored sigmoid-normal likelihood:

$$y \sim \mathrm{CensoredSigmoidNormal}\big(\mathrm{slope} \cdot (U(d_1) - U(d_2)),\; \mathrm{slope} \cdot \mathrm{obs\_sd} \cdot (1 + \lVert d_1 - d_2 \rVert_2)\big),$$

where $U(d_1) - U(d_2)$ denotes the utility difference between the two bundles. Here, "censored" refers to applying a sigmoid function to latent utility values and then truncating the results to the observed preference interval (e.g., $[0, 1]$), ensuring that responses remain within these limits.
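One plausible reading of this likelihood can be sketched as a sampler: the latent utility difference is perturbed with Gaussian noise at the stated scale, squashed through a sigmoid, and clipped to $[0, 1]$. The structure follows the formula above, but the function names are hypothetical and the exact censoring used by [21] may differ in detail.

```python
import math
import random

def ces_u(x, alpha, rho):
    """CES utility with weights alpha and elasticity parameter rho."""
    return sum(a * xi**rho for a, xi in zip(alpha, x)) ** (1.0 / rho)

def sample_preference(d1, d2, alpha, rho, slope, obs_sd, rng=random):
    """Draw one noisy preference y in [0, 1] for bundles d1 and d2."""
    loc = slope * (ces_u(d1, alpha, rho) - ces_u(d2, alpha, rho))
    scale = slope * obs_sd * (1.0 + math.dist(d1, d2))  # noise grows with bundle distance
    latent = rng.gauss(loc, scale)
    y = 1.0 / (1.0 + math.exp(-latent))  # sigmoid squashing
    return min(1.0, max(0.0, y))         # censor to the preference interval
```

With `obs_sd` set to zero the sample collapses to the deterministic sigmoid of the scaled utility difference, so a clearly better bundle yields a preference close to 1.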
The posterior distributions are updated via Bayes' theorem by explicitly integrating the observational data:

$$p(\rho, \alpha, \mathrm{slope} \mid y, d) \propto p(y \mid \rho, \alpha, \mathrm{slope}, d)\, p(\rho, \alpha, \mathrm{slope}),$$

where $p(\rho, \alpha, \mathrm{slope})$ represents the prior distributions and $p(y \mid \rho, \alpha, \mathrm{slope}, d)$ represents the likelihood function given the observations. Parameter estimation was performed via variational inference [8], iteratively optimizing the evidence lower bound (ELBO), defined as:

$$\mathrm{ELBO}(\phi) = \mathbb{E}_{q_\phi}[\log p(y \mid \rho, \alpha, \mathrm{slope}, d)] - D_{\mathrm{KL}}\big(q_\phi(\rho, \alpha, \mathrm{slope}) \,\Vert\, p(\rho, \alpha, \mathrm{slope})\big),$$

where $q_\phi$ denotes the variational posterior distribution used to approximate the true posterior distribution. Thus, as additional observational data are incorporated, Bayesian inference continually updates prior beliefs into posterior distributions, systematically refining parameter estimates toward their true underlying values.

C Statistical Significance Tests

C.1 Repeated-measures ANOVA

To statistically evaluate the interaction between models (Bayesian vs. LLM) and steps, we conducted repeated-measures ANOVAs. Each black-box instance involved multiple repeated measurements corresponding to different steps. Letting $Y_{ijk}$ represent the performance score for subject $i$, model $j$ (Bayesian or LLM), and step $k$, the repeated-measures ANOVA model can be expressed as:

$$Y_{ijk} = \mu + S_i + M_j + T_k + (M \times T)_{jk} + \epsilon_{ijk},$$

where $\mu$ is the overall mean across all measurements, $S_i$ represents the random effect of subject $i$ (individual seeds), $M_j$ denotes the main effect of the model, $T_k$ is the main effect of steps, $(M \times T)_{jk}$ is the interaction between model and step, and $\epsilon_{ijk}$ represents residual error. The ANOVA decomposes the total variance into these distinct sources. Specifically, the significance of the Method × Step interaction was determined by calculating the corresponding F-statistic:

$$F = \frac{MS_{(M \times T)}}{MS_{\mathrm{error}}},$$

where $MS_{(M \times T)}$ is the mean square for the Method × Step interaction, and $MS_{\mathrm{error}}$ is the residual mean square.
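As a concrete sketch, the interaction F-statistic follows directly from the sums of squares and degrees of freedom. The helper below is illustrative (the function name and the numbers in the example are hypothetical) and uses the degrees of freedom as stated in this appendix.

```python
def interaction_f(ss_interaction, ss_error, n_methods, n_steps, n_subjects):
    """F-statistic for the Method × Step interaction in a repeated-measures ANOVA."""
    df_num = (n_methods - 1) * (n_steps - 1)   # interaction degrees of freedom
    df_den = (n_subjects - 1) * (n_steps - 1)  # error degrees of freedom
    ms_interaction = ss_interaction / df_num   # MS_(M×T)
    ms_error = ss_error / df_den               # MS_error
    return ms_interaction / ms_error, df_num, df_den
```

For instance, with 2 methods, 6 steps, and 5 subjects, the interaction has 5 numerator and 20 denominator degrees of freedom; the resulting F is compared against the corresponding F-distribution.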
Significance was assessed using an F-distribution with numerator degrees of freedom equal to $(J-1)(K-1)$, where $J$ is the number of method levels and $K$ is the number of steps, and denominator degrees of freedom equal to $(I-1)(K-1)$, where $I$ is the number of subjects.

D Prompts

D.1 Intervention prompt

In this task, you are given a ``black box'' and need to determine its inner workings.

{black box information}

You will have a series of turns to interact with the black box. On each turn, you can either gather more information or test your hypothesis. To gather more information, {query instruction}, and obtain a result. To test your hypothesis, {test instruction}. All the information gathered across all the turns is used to reverse engineer the black box. Throughout the process, you can decide whether the gathered information is sufficient to correctly identify the workings of the black box, in which case you can stop. Otherwise, you need to continue the interaction. Concretely, you can perform one of the following actions at each turn: 1) query, 2) test, or 3) stop. Provide a *thorough reasoning* before performing the action. Leverage the past observations to design your next query and make your hypothesis as accurate as possible. Below is the format for each action.

Query:
```query
List[int]
```

Test:
```test
List[int]
List[int]
```

Stop:
```stop
```

Note that you should only perform one of the actions above with one input example in your response. Below are your past observations of the black box:

{observations}

Response:

D.2 Evaluation
Prompts

Program (judge):

In this task you will be given a ground truth program and pseudocode that you need to evaluate. You will output a score for the quality of the pseudocode based on a set of assessment criteria. Below is the ground truth program:

{ground_truth}

Evaluate the quality of the following pseudocode:

{response}

Score the above pseudocode against the ground truth program based on the following criteria (total 10 points):
1. Does the provided pseudocode correctly specify the implementation of the ground truth program and manipulate the variables in the same way? Ignore the programming language difference. [5 point]
2. Does the provided pseudocode specify the implementation in the most simple and straightforward way without extra unused parts (Occam's Razor principle) [5 point]

Explain your judgement and return the final score with the type float and following the format below:

```judgement
YOUR JUDGEMENT HERE
```

```score
YOUR SCORE HERE
```

Response:

Formal Language (judge):

In this task, you will be given a ground truth formal language and a proposed rule describing that formal language, which you need to evaluate for quality. You will then output a score based on a set of assessment criteria. Below is the ground truth formal language:

{ground_truth}

Evaluate the quality of the following formal language rule:

{response}

Score the above formal language rule against the ground truth formal language based on the following criteria (total: 10 points):
1. Does the provided rule correctly generate the examples given in the ground truth? Your score is determined by how many examples are correctly generated out of the total number of examples. [3 points]
2. Does the provided rule correctly reverse engineer the ground truth formal language? [5 point]
3. Is the provided rule in the most simple and straightforward way without extra unused parts (Occam's Razor principle)?
Note: If the provided rule is completely incorrect, you should give 0 point for this criterion. [2 point]

Explain your judgement and return the final score with the type float and following the format below:

```judgement
YOUR JUDGEMENT HERE
```

```score
YOUR SCORE HERE
```

Response:

Math Equation (judge):

In this task, you are provided with a ground truth CES utility function and a CES utility function predicted by a model. Your task is to evaluate the quality of the predicted utility function based on a set of assessment criteria and output a score. The ground truth utility takes this form:

U(\\mathbf{{x}}) = \\left(\\sum_{{i=1}}^n a_i \\cdot x_i^{{\\text{{rho}}}}\\right)^{{1/\\text{{rho}}}}

The utility depends on the following parameters:
1. a_i: float rounded to the first decimal point and should sum up to 1. (Note that there will be multiple a_i's.)
2. rho: float rounded to the first decimal point.

Below is the information about the ground truth utility function:

{ground_truth}

Evaluate the quality of the following predicted the parameters of the utility function:

{response}

Score the predicted utility function against the ground truth using the following criteria (total 10 points):
1. Is the predicted utility function has a correct rho? [2 points]
2. Compare the predicted utility function to the ground
truth, how many a_i's are correct (order matters)? This will give us an accuracy percentage. The score for this bullet should be the accuracy percentage times the total allocated 6 points [6 points]
3. In the predicted utility function, do the unknown parameters a_i sum up to 1 and do the number of a_i's match the number of goods? [1 point]
4. Does the predicted utility function express the function in a simple and straightforward way without any unnecessary elements (adhering to the Occam's Razor principle)? [1 point]

Explain your judgement and return the final score with the type float and following the format below:

```judgement
YOUR JUDGEMENT HERE
```

```score
YOUR SCORE HERE
```

Response:

Descriptive Evaluation:

In this task, you are given a ``black box'' and need to determine its inner workings.

{black box information}

Below are some past observations from the black box:

{observations}

Your task is to reverse engineer the rule underlying {more detailed instructions} in the following format:

```Rule
YOUR RULE HERE
```

Response:

Function Implicit Evaluation:

In this task, you are given a ``black box'' and need to determine its inner workings.

{black box information}

Below are some past observations from the black box:

{observations}

{More detailed instructions} Output your generated string in the following format:

```output
YOUR RESPONSE HERE
```

Response:

E Reverse Engineering Abilities Across Different Categories of LLMs

In Figure 8, we report the results of observation-only and observation + intervention across different LLMs: Llama-3.3-70B-Instruct, Claude 3.5 Sonnet, and DeepSeek R1. Across nearly all black-box types and models, actively intervening and iteratively refining hypotheses consistently enhances models' understanding of the underlying black-box dynamics. In particular, we show that DeepSeek R1, utilizing Long CoT reasoning, has the potential to continuously extract informative knowledge even from passive learning scenarios.
This detailed and long reasoning allows the model to explore various potential hypotheses. However, despite these advantages, DeepSeek R1 does not significantly outperform models without explicit reasoning (e.g., GPT-4o, Llama-3.3-70B-Instruct, Claude 3.5 Sonnet) in reverse-engineering tasks. This finding highlights the inherent limitations of the reasoning steps of existing LLMs.

F Common Failure Modes

F.1 Human Annotation

To systematically analyze LLMs' failure modes, we defined an LLM reverse-engineering attempt as a failure if its descriptive score was below 2 out of 10, according to our descriptive evaluation rubric. For each black-box type, we randomly selected 20 representative failure cases from the observation-only setting. Two human experts independently reviewed these examples, categorizing each case based on the nature of the error. Any disagreements were resolved through discussion. Finally, the human annotators identified two common failure modes: overcomplication and overlooking.

Figure 8: Results of reverse engineering abilities across different categories of LLMs. We report Llama-3.3-70B-Instruct, Claude 3.5 Sonnet and DeepSeek R1 on Program, Formal Language and Math Equation.

F.2 Overcomplication & Overlooking Examples

Across the three black-box types, we find that overcomplication is a common failure mode, particularly in Program, while overlooking most often occurs in Math Equation. For Formal Language, both overcomplication and overlooking are observed when LLMs fail at reverse engineering. In Tables 2, 3, 4, and 5, we show the failure examples for Program (overcomplication), Formal Language (overcomplication & overlooking), and Math Equation (overlooking).

G Complexity Categorization

We rank the complexity level from 1 to 5. Each black-box type includes multiple instances of varying task complexity.

Program. The complexity level is determined based on the number of operations, which ranges from 1 to 12. Instances with fewer than 2 operations are classified as complexity level 1 (complexity-1), those with fewer than 4 operations as complexity-2, fewer than 6 operations as complexity-3, and fewer than 8 operations as complexity-4. Due to the limited number of remaining examples, all others are grouped into the highest complexity level (complexity-5).

Formal Language.
Instead of using five complexity levels, we divided the Formal Language instances into three levels, drawing on insights from [34]. Specifically, we categorized regular language instances as complexity-1 black-boxes, context-free languages as complexity-3, and context-sensitive languages as complexity-5.

Math Equation. We categorize complexity levels according to the number of goods involved, ranging from 2 to 6. Specifically, instances with 2 goods are labeled as complexity-1, 3 goods as complexity-2, and so on, with instances involving 6 goods classified as the highest complexity level, complexity-5.

H Evaluation of the Reverse-Engineering Ability

Figure 9: Comparison of descriptive evaluation (yellow) and functional evaluation (purple) across black-box complexity levels.

Unlike typical tasks used to benchmark LLMs, such as solving math problems or question answering, which are commonly evaluated using accuracy, the reverse-engineering ability is less straightforward to assess. One can assess how well the black-box $f^*$ is recovered by an LLM through: 1) descriptive evaluation, where the LLM verbalizes its hypothesis for comparison with the ground truth, and 2) functional evaluation, which captures how well the LLM emulates the black-box's input-output dynamics and generalizes to unseen examples [32]. In functional evaluation, the LLM directly predicts the response conditioned on the test query and the past observations, and we compute accuracy

$$\mathrm{Acc} = \frac{1}{M}\sum_{i=1}^{M} \mathbb{1}\left[y_i^{\mathrm{test}} = M(x_i^{\mathrm{test}}, O)\right],$$

without generating the black-box implementation, akin to in-context learning [10]. As shown in Figure 9, the descriptive and functional evaluation trends align for Program across complexity levels. However, we also observe discrepancies of trends between the two evaluations for Formal Language
(complexity level 3 to 5) and Math Equation (complexity level 1 to 3), demonstrating that the evaluation protocol and the format of the model output may capture different strengths and weaknesses of the model.

For Program, we used the original samples from the black-box of the list mapping program as test cases [58] and ensured that none of these input-output pairs were included in the observations. For Formal Language and Math Equation, we use our deterministic black-boxes to randomly sample 20 test cases per black-box instance.

I Another Black-Box Type: Board Game

I.1 Black-Box Design

We design a connect-n board game (2×2 to 9×9) variant following [81]. The black-box is defined by the rules that dictate the winning condition of the game (e.g., "Win by connecting 3 stones in a column."). The LLM can query with a board state and an action, and the black-box responds with the new board state and a game status (win/lose/draw/ongoing). In our black-box design, every game instance exposes two modes, observation (observation-only) and intervention (observation-intervention), and uses the symbols X and O to mark the moves of the two players.

Game definition. For a given instance, let the board be an r×c grid and let ⟨r_win, c_win, d_win⟩ denote the required number of consecutive marks needed to win horizontally, vertically, and diagonally, respectively. During play the black-box tracks the current board state B, the active player p ∈ {X, O}, and whether the game has ended.

In observation mode, an external LLM supplies an initial board (or leaves it empty). The black-box generates the following as outputs:
• the round number,
• the updated board,
• whose move it was last,
• the current game status (ongoing, draw, PlayerX_wins, etc.).
If the move ends the game, the record also names the winner.
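The winning condition sketched above can be checked with a simple scan over the board. The following is our own illustration (function names `has_won` and `run_length` are ours, not the authors' implementation), assuming a board of `'X'`, `'O'`, or `None` cells:

```python
# Sketch (not the authors' code) of the connect-n winning condition:
# a player wins with r_win consecutive marks horizontally, c_win vertically,
# or d_win along either diagonal.

def has_won(board, mark, r_win, c_win, d_win):
    """board: list of row lists containing 'X', 'O' or None."""
    rows, cols = len(board), len(board[0])

    def run_length(r, c, dr, dc):
        # Count consecutive `mark` cells starting at (r, c) in direction (dr, dc).
        n = 0
        while 0 <= r < rows and 0 <= c < cols and board[r][c] == mark:
            n += 1
            r, c = r + dr, c + dc
        return n

    for r in range(rows):
        for c in range(cols):
            if board[r][c] != mark:
                continue
            if run_length(r, c, 0, 1) >= r_win:          # horizontal run
                return True
            if run_length(r, c, 1, 0) >= c_win:          # vertical run
                return True
            if max(run_length(r, c, 1, 1),
                   run_length(r, c, 1, -1)) >= d_win:    # both diagonals
                return True
    return False
```

A full black-box would additionally track rounds, the active player, and draw detection; the sketch covers only the win rule.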
In intervention mode, the LLM needs to specify (i) additional pieces to place on the board, (ii) the candidate action it wishes the black-box to take, and (iii) optionally, a predicted follow-up board. The black-box returns the same structured record as in observation mode. If the LLM also proposed a prediction of the next state, the black-box confirms it ("Correct") or explains why it is invalid. For Board Game, we do not have a Bayesian model as the optimal reference for comparison.

I.2 GPT-4o Results

Figure 10: Observation-only and observation-intervention results for Board Game (Descriptive Score vs. # datapoints).

In Figure 10, we do not observe the same trends seen in the Program, Formal Language, and Math Equation black-box types. For Board Game, actively collected data does not improve the reverse-engineering performance of the model, indicating that the gathered data is not significantly informative even for the LLM itself. We hypothesize that this is because, to query the black-box, the LLM must (1) generate a board state, (2) propose a next move, and (3) predict the resulting board state, requiring a multi-step reasoning process. These compounded requirements make it challenging for the LLM to probe edge cases or effectively reduce uncertainty about the black-box. This result further highlights
a key limitation of current LLMs: when the information signal from the black-box is sparse, actively collected data remains of limited utility.

J Functional Evaluation Details

For Program, we used the original samples from the black-box of the list mapping program as test cases [58] and ensured that none of these input-output pairs were included in the observations. For Formal Language and Math Equation, we use our deterministic black-boxes to randomly sample 20 test cases per black-box instance.

K Complexity Categorization

We rank the complexity level from 1 to 5. Each black-box type includes multiple instances of varying task complexity.

Program. The complexity level is determined by the number of operations, which ranges from 1 to 12. Instances with fewer than 2 operations are classified as complexity level 1 (complexity-1), those with fewer than 4 operations as complexity-2, fewer than 6 operations as complexity-3, and fewer than 8 operations as complexity-4. Due to the limited number of remaining examples, all others are grouped into the highest complexity level (complexity-5).

Formal Language. Instead of using five complexity levels, we divided the Formal Language instances into three levels, drawing on insights from [34]. Specifically, we categorized regular language instances as complexity-1 black-boxes, context-free languages as complexity-3, and context-sensitive languages as complexity-5.

Math Equation. We categorize complexity levels according to the number of goods involved, ranging from 2 to 6. Specifically, instances with 2 goods are labeled as complexity-1, 3 goods as complexity-2, and so on, with instances involving 6 goods classified as the highest complexity level, complexity-5.
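The bucketing rules above can be summarized as small helper functions. This is a sketch with names of our own choosing, not code from the paper:

```python
# Sketch of the complexity categorization described above (names are ours).

def program_complexity(n_ops):
    """Map a Program's operation count (1-12) to complexity level 1-5."""
    if n_ops < 2:
        return 1
    if n_ops < 4:
        return 2
    if n_ops < 6:
        return 3
    if n_ops < 8:
        return 4
    return 5  # remaining examples grouped into the highest level

def math_equation_complexity(n_goods):
    """Map the number of goods (2-6) to complexity level 1-5."""
    return n_goods - 1

# Formal Language uses three levels keyed by language class.
FORMAL_LANGUAGE_COMPLEXITY = {
    "regular": 1,
    "context-free": 3,
    "context-sensitive": 5,
}
```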
Black-box instance: (lambda (singleton (third $0)))
Observations:
Input: [97, 53, 5, 33, 65, 62, 51] ; Output: [5]
Input: [61, 45, 74, 27, 64] ; Output: [74]
Input: [36, 17, 96] ; Output: [96]
Input: [79, 32] ; Output: invalid input
Input: [90, 77, 18, 39, 12, 93, 9, 87, 42] ; Output: [18]
Input: [71, 12, 45, 55, 40, 78, 81, 26] ; Output: [45]
Input: [61, 56, 66, 33, 7, 70, 1, 11, 92] ; Output: [66]
Input: [90, 100, 85, 80, 0, 78, 63] ; Output: [85]
Input: [31, 93, 41, 90, 8, 24] ; Output: [41]
Input: [28, 30, 18, 69, 57, 11, 10, 40, 65, 62] ; Output: [18]
Input: [38, 70] ; Output: invalid input
Input: [90, 15, 70, 42, 69] ; Output: [70]
Input: [77, 70, 75, 36] ; Output: [75]
Input: [11, 76, 49, 40, 73, 30, 37, 23] ; Output: [49]
Input: [23, 4, 78, 84] ; Output: [78]
... {More observations} ...
Input: [60, 8, 11, 86, 96] ; Output: [11]
Input: [19, 4, 10] ; Output: [10]
Input: [87, 50, 90, 67, 35, 66, 30, 27, 86] ; Output: [90]
Input: [53, 74, 35, 57, 63, 84, 82, 89, 45, 10] ; Output: [35]
Input: [78, 14, 62, 75, 80, 42] ; Output: [62]
Input: [31, 2, 93, 34] ; Output: [93]
Input: [90, 28] ; Output: invalid input
Input: [21, 42, 54, 7, 12, 100] ; Output: [54]
Input: [89, 28, 5] ; Output: [5]
Input: [81, 68, 77,
87, 9, 3, 15, 81, 24, 77] ; Output: [77]
Input: [15, 50, 11, 47, 14, 4, 77, 2, 24, 23] ; Output: [11]
Input: [61, 26] ; Output: invalid input
Input: [86] ; Output: invalid input
Input: [79, 12, 33, 8, 28, 9, 82] ; Output: [33]
Input: [44, 55, 23, 7, 64] ; Output: [23]
Model response: (Overcomplication)
FUNCTION black_box(input_list):
    IF length of input_list < 3:
        RETURN "invalid input"
    ELSE:
        SORT input_list in ascending order
        RETURN the third element in the sorted input_list
Explanation: The black box function takes a list of integers as input. If the list contains fewer than 3 elements, it returns "invalid input". Otherwise, it sorts the list in ascending order and returns the third smallest element (which is the third element in the sorted list).

Table 2: Overcomplication example of Program. Red: the LLM comes up with a rule more complex than the black-box instance.

Black-box instance: AnBmAnBm (A^n B^m A^n B^m)
Observations:
AAAAAAAAAAAAAA is generated by the black box
AABBBBBBAABBBBBB is generated by the black box
AAAAAAAAABBAAAAAAAAABB is generated by the black box
AAAAAAAABBBBBAAAAAAAABBBBB is generated by the black box
AAAAABBBBBBBBAAAAABBBBBBBB is generated by the black box
AAAAAAAABBBBBBBBBBAAAAAAAABBBBBBBBBB is generated by the black box
AAABBBBBBBBBAAABBBBBBBBB is generated by the black box
AAAAAAAABBBBBBAAAAAAAABBBBBB is generated by the black box
AAABBBBBBBBBAAABBBBBBBBB is generated by the black box
... {More observations} ...
AAABBBBAAABBBB is generated by the black box
AAAABBAAAABB is generated by the black box
AAAAAAAAABAAAAAAAAAB is generated by the black box
AAAAAABBBBBBAAAAAABBBBBB is generated by the black box
AAAAAABBBBBAAAAAABBBBB is generated by the black box
AAAAAAAAAAAAAAAAAA is generated by the black box
AAAAAAAAAABBBBBBBBAAAAAAAAAABBBBBBBB is generated by the black box
AAAAAAAAAABBAAAAAAAAABB is generated by the black box
AAAAABBBBBBBBBAAAAABBBBBBBBB is generated by the black box
AAAAABBBBBBBBAAAAABBBBBBBB is generated by the black box
AAAABBBBAAAABBBB is generated by the black box
Model response: (Overcomplication)
The formal language generates strings composed of two segments: a sequence of 'A's followed by a sequence of 'B's. The number of 'A's in the first segment must be equal to the number of 'B's in the second segment, or the number of 'A's can be zero, resulting in a string of only 'B's. Additionally, the language allows for strings composed entirely of 'A's or entirely of 'B's. The sequences of 'A's and 'B's can be repeated multiple times, as long as the rule of equal counts between 'A's and 'B's in each segment is maintained.

Table 3: Overcomplication example for Formal Language. Red: the LLM predicts a rule that is more complex than necessary for this black-box instance. In this example, it incorrectly requires the number of A's to be equal to the number of B's.

Black-box instance: XXX (three copies of the same string)
Observations:
BBABBABABBABBABABBABBABA is generated by the black box
BABBABBABBABBABBAB is generated by the black box
ABBABBABB is generated by the black box
ABAAAABABAAAABABAAAAB is generated by the black box
AAA is generated by the black box
AAAAAA is generated by the black box
BBB is generated by the black box
ABAABAABAABAABAABA is generated by the black box
ABABAB is generated by the black
box
ABAABAABA is generated by the black box
... {More observations} ...
BBBBAABABABBBBAABABABBBBAABABA is generated by the black box
ABABAB is generated by the black box
AAABAABAAABAABAAABAAB is generated by the black box
ABABAB is generated by the black box
BABBBBBBBABABBBBBBBABABBBBBBBA is generated by the black box
ABBBABBBABBB is generated by the black box
BABABA is generated by the black box
BBABAABBABBABAABBABBABAABBA is generated by the black box
AAAAAA is generated by the black box
Model response: (Overlooking)
The black box generates strings that are composed of repeated patterns or segments. Each string can be broken down into one or more segments, where each segment is a repetition of a smaller pattern. The patterns can be composed of the letters 'A' and 'B', and the repetition of these patterns can vary in length. Additionally, the black box can generate an empty string. The key characteristics of the generated strings are:
1. The strings can be empty.
2. The strings can be composed of repeated patterns of 'A' and 'B'.
3. The patterns themselves can vary in complexity and length.
4. The repetition of these patterns can be consistent or vary within the string.
Examples of patterns include:
- Single letters repeated (e.g., "AAA", "BBB").
- Simple alternating patterns (e.g., "ABABAB", "BABABA").
- More complex patterns with varying repetition (e.g., "ABBABBABB", "BABBABBABBABBABBAB").
The rule can be estimated as: The black box generates strings by repeating patterns of 'A' and 'B', where the patterns can be simple or complex, and the repetition can be consistent or vary within the string.

Table 4: Overlooking example of Formal Language. The LLM estimates a rule for the black-box instance without using the observed data to form its hypothesis.
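Membership in the XXX language of Table 4 (a string belongs iff it is three copies of the same substring) can be checked in a few lines, which makes the vagueness of the LLM's proposed rule stand out. The function name below is ours, not from the paper:

```python
# Sketch: membership test for the XXX copy language
# (a string is in the language iff it is three copies of one substring).

def is_xxx(s):
    if len(s) % 3 != 0:
        return False
    k = len(s) // 3
    # The three thirds must be identical (the empty string passes trivially).
    return s[:k] == s[k:2 * k] == s[2 * k:]
```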
Black-box instance: alpha=[0.1,0.1,0.4,0.4]; rho=0.4
Observations:
Basket1: [87.79948556831374, 39.184851241810016, 22.551465649464433, 42.84633678341351]; Basket2: [66.20150946367784, 80.43281524767865, 68.02177295663886, 8.112415033884117]; Preference: Basket1
Basket1: [20.086807088533046, 79.79230548488334, 8.817190035426963, 90.22168650956948]; Basket2: [90.40796323190197, 70.51080246059688, 69.96510352125082, 48.476064999077785]; Preference: Basket2
Basket1: [8.003392409139845, 35.23615132704936, 76.3120116750855, 59.18156984237496]; Basket2: [31.278783818535036, 18.047644469275518, 21.855445800494433, 53.88293391636068]; Preference: Basket1
... {More observations} ...
Basket1: [40.80654893485903, 14.795240036848078, 37.35074210377093, 79.44298024697972]; Basket2: [51.30239830060435, 50.0604427932334, 21.59530896237239, 36.88204913376821]; Preference: Basket1
Basket1: [75.18968775093136, 42.30088453163497, 18.388385810882728, 95.92185012513752]; Basket2: [93.37477559149177, 86.19812315165629, 49.438229718409715, 98.68538582332832]; Preference: Basket2
Basket1: [70.30309661179918, 94.48475506394722, 78.28267517818938, 70.90519145824096]; Basket2: [63.648862012782175, 99.80230448540196, 61.33979160499423, 78.38413565179029]; Preference: Basket1
Model response: (Overlooking)
To reverse engineer the CES utility function, we need to determine the parameters a_i and ρ that best fit the observed preferences between baskets. The CES utility function is given by:

U(x) = \left( \sum_{i=1}^{n} a_i \cdot x_i^{\rho} \right)^{1/\rho}

where a_i are the weights for each good and ρ is the elasticity parameter.
Steps to Determine Parameters:
• Analyze Preferences: For each pair of baskets, the basket with the higher utility is preferred.
• Estimate ρ: The parameter ρ affects the degree of substitutability between goods. ρ = 1 is perfect substitutes, ρ = 0 is Cobb-Douglas, ρ < 0 is complementarity.
• Estimate a_i: The weights a_i sum to 1 and reflect the importance of each good.
• Iterative Fitting: Use optimization to adjust a_i and ρ to minimize preference violations.
Estimation: After analyzing the data and testing various combinations, the following provides a good fit for the observed preferences:
Best Fit Utility Function:

U(x) = 0.3 x_1^{0.5} + 0.2 x_2^{0.5} + 0.3 x_3^{0.5} + ...
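For reference, the ground-truth CES black-box above reduces to comparing the two baskets' utilities. A minimal sketch (helper names are ours; we assume the basket with the higher CES utility is preferred, with ties going to Basket1):

```python
# Sketch of the CES-utility black-box: the preferred basket is the one
# with the higher U(x) = (sum_i a_i * x_i^rho)^(1/rho).

def ces_utility(basket, alpha, rho):
    return sum(a * x ** rho for a, x in zip(alpha, basket)) ** (1 / rho)

def preference(basket1, basket2, alpha, rho):
    u1 = ces_utility(basket1, alpha, rho)
    u2 = ces_utility(basket2, alpha, rho)
    return "Basket1" if u1 >= u2 else "Basket2"
```

Because x → x^(1/ρ) is monotone for ρ > 0, comparing the inner weighted sums would give the same preference.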
arXiv:2505.17978v1 [cs.CL] 23 May 2025

AVERIMATEC: A Dataset for Automatic Verification of Image-Text Claims with Evidence from the Web

Rui Cao♡, Zifeng Ding♡, Zhijiang Guo♡, Michael Schlichtkrull♢, Andreas Vlachos♡
University of Cambridge♡, Queen Mary University of London♢
{rc990,zd320,zg283,av308}@cam.ac.uk, m.schlichtkrull@qmul.ac.uk

Abstract

Textual claims are often accompanied by images to enhance their credibility and spread on social media, but this also raises concerns about the spread of misinformation. Existing datasets for automated verification of image-text claims remain limited, as they often consist of synthetic claims and lack evidence annotations to capture the reasoning behind the verdict. In this work, we introduce AVERIMATEC, a dataset consisting of 1,297 real-world image-text claims. Each claim is annotated with question-answer (QA) pairs containing evidence from the web, reflecting a decomposed reasoning process behind the verdict. We mitigate common challenges in fact-checking datasets, such as contextual dependence, temporal leakage, and evidence insufficiency, via claim normalization, temporally constrained evidence annotation, and a two-stage sufficiency check. We assess the consistency of the annotation in AVERIMATEC via inter-annotator studies, achieving a κ = 0.742 on verdicts and 74.7% consistency on QA pairs. We also propose a novel evaluation method for evidence retrieval and conduct extensive experiments to establish baselines for verifying image-text claims using open-web evidence.

1 Introduction

Misinformation has become a public concern due to its potential impact on elections, public health and safety [Allcott and Gentzkow, 2017, Aral and Eckles, 2019, Bär et al., 2023, Rocha et al., 2021]. To curb its spread, professional fact-checkers are employed to identify misleading content. However, they are unable to keep up with the vast volume of information online [Pennycook, 2019, Drolsbach et al., 2024].
The severity of the problem, along with the limitations of manual verification, has motivated the development of automated fact-checking (AFC) [Guo et al., 2022, Nakov et al., 2021]. To support research in AFC, the research community has created various benchmark datasets [Thorne et al., 2018, Aly et al., 2021, Schlichtkrull et al., 2023, Alhindi et al., 2018, Yao et al., 2023, Chen et al., 2024], aiming to enhance the effectiveness and interpretability of fact-checking systems. However, most existing benchmarks focus exclusively on textual claims, overlooking the important role of media in the dissemination of misinformation. Recent studies estimate that approximately 80% of online claims are multimodal, involving both text and media [Dufour et al., 2024], as media can enhance perceived credibility [Newman et al., 2012] and increase exposure [Li and Xie, 2020]. Among these, images are the most prevalent media type [Dufour et al., 2024].

While several datasets have been developed for image-text AFC, many are synthetic, generated by manually manipulating either the textual or visual modality of image-text pairs [Luo et al., 2021, Papadopoulos et al., 2024, Jia et al., 2023]. Due to discrepancies between synthetic and real-world data [Zeng et al., 2024], models that perform well on synthetic benchmarks may fail to generalize to real-world claims. Moreover, recent work [Papadopoulos et al., 2025] showed that models can achieve high performance on such datasets by exploiting superficial correlations, such as image-text similarity, without examining factuality and logical consistency. Some benchmarks
Preprint. Under review.

Figure 1: An annotated claim from AVERIMATEC. The rationale for verifying an image-text claim has been decomposed into a sequence of QA pairs, which can be multimodal.
Claim Text: Kamala Harris with her parents and she is not a black American. Claim Image: [image] Claim Date: 2020.08.12 Claim Source: Facebook Refuting Reasons: Misuse of images; Textually refuted Image Misuse: Out-of-Context Claim Type: ...
Q1: What is the date of the claim image being published? A1: The image was taken in 2016. [Related image to Q1]
Q2: Were Kamala Harris' parents alive in 2016? A2: Her mother died in 2009.
Q3: Who are the people shown in the image of the claim? [Related image to Q3] A3: Harris (center) was with her supporters Suneil Parulekar (left) and Rohini Parulekar (right) at the 2016 Pratham gala.
Q4: What is Kamala Harris's ethnic background? A4: She has Jamaican and Indian parents.
Q5: Can a person be seen as a black American if they have either Indian or Jamaican parents? A5: Yes. Not all black people are African American.
Verdict: Refuted
Justification: The image was interpreted out-of-context. The image was taken in 2016, while Harris' mother died in 2009. It is impossible that she was with her parents in the image. Besides, other evidence proves the image shows Harris with her supporters at the 2016 Pratham gala, rather than with her parents. Harris has Jamaican and Indian heritage, proving she is a black American. Therefore, the textual part of the claim, "she is not a black American", is refuted.

[Zlatkova et al., 2019, Nakamura et al., 2020, Shu et al., 2020] attempt to include real-world image-text claims extracted from fact-checking articles. As noted in prior work [Ousidhoum et al., 2022, Schlichtkrull et al., 2023], this may result in omitting critical contextual information for verification, such as context to resolve coreferences.
Additionally, both synthetic and real-world image-text datasets typically lack annotated evidence, making it impossible to evaluate models' reasoning process. To address the limitations above, we propose the Automated Verification of Image-Text Claim (AVERIMATEC) dataset, where the verification of real-world image-text claims is decomposed into a sequence of question answering with evidence from the web. In addition, each claim is annotated with metadata information, a veracity label based on retrieved evidence, and a textual justification explaining how the verdict is reached, as shown in the example in Figure 1.

To construct AVERIMATEC, annotators are first asked to identify and normalize image-text claims from fact-checking articles, incorporating necessary contextual information while providing associated metadata. Next, annotators convert the verification rationale from the articles into QA pairs, while being restricted to using only online evidence published before the claim's date. Given the multimodal nature of the task, both questions and answers may involve images. Finally, we conduct two rounds of quality control to ensure that each annotated claim is supported by sufficient evidence for the annotated verdict. The resulting dataset contains 1,297 image-text claims. To assess the consistency of the verdict labels, we conducted an inter-annotator agreement study in which we obtained a Randolph's [Randolph, 2010] free-marginal κ of 0.742 over 100
re-annotated claims. The re-annotation recovered 74.7% of the original QA pairs, confirming that the annotations capture reasoning paths for verifying image-text claims consistently.

We further introduce a baseline for image-text claim verification, which operates by generating evidence-seeking questions aimed at fact-checking and answering these questions with a set of expert tools. Since AVERIMATEC is the first image-text claim verification dataset to incorporate QA annotations that explicitly reflect reasoning paths and evidence, we develop a reference-based evaluation method to assess models' generated questions and retrieved evidence. Using this evidence evaluation, we report conditional verdict accuracy, which measures the correctness of predicted verdicts only when the associated evidence score exceeds a predefined threshold.

2 Related Works

Automated fact-checking (AFC) has become increasingly important due to the pressing need to curb the spread of misinformation [Guo et al., 2022, Nakov et al., 2021]. To support AFC research, several datasets focusing primarily on text-based claims have been proposed (see the top block of Table 1). Motivated by the prevalence of images in claims, fact-checking datasets for image-text claims were introduced (listed in the bottom block of Table 1). Some studies [Luo et al., 2021,

Table 1: Comparison of fact-checking datasets. Indep. (independence) denotes whether extracted claims are context independent (e.g., understandable without fact-checking articles). Img., Suff. and Retr. are abbreviations for image, sufficiency and retrieval, respectively. Real and Indep. describe the claim; Img., Suff., Retr. and Unleaked describe the evidence. Sufficiency indicates whether there is sufficient supporting evidence to reach annotated verdicts; Retrieval refers to whether open-world evidence retrieval is performed; Unleaked indicates whether annotated evidence is free of temporal leakage (e.g., evidence published after claims).
| Dataset | Real | Indep. | Img. | Suff. | Retr. | Unleaked |
| FEVER [Thorne et al., 2018] | ✘ | ✓ | ✘ | ✓ | ✓ | - |
| FEVEROUS [Aly et al., 2021] | ✘ | ✓ | ✘ | ✓ | ✓ | - |
| Liar-Plus [Alhindi et al., 2018] | ✓ | ✘ | ✘ | ✓ | ✘ | ✘ |
| Snopes [Hanselowski et al., 2019] | ✓ | ✘ | ✘ | ✘ | ✘ | ✓ |
| MultiFC [Augenstein et al., 2019] | ✓ | ✘ | ✘ | ✘ | ✓ | ✘ |
| AVeriTec [Schlichtkrull et al., 2023] | ✓ | ✓ | ✘ | ✓ | ✓ | ✓ |
| CLAIMDECOMP [Chen et al., 2024] | ✓ | ✘ | ✘ | ✘ | ✓ | ✓ |
| MOCHEG [Yao et al., 2023] | ✓ | ✘ | ✓ | ✘ | ✘ | ✘ |
| NewsCLIPpings [Luo et al., 2021] | ✘ | ✓ | - | - | - | - |
| InfoSurgeon [Fung et al., 2021] | ✘ | ✓ | - | - | - | - |
| Autosplice [Jia et al., 2023] | ✘ | ✓ | - | - | - | - |
| DGM [Shao et al., 2023] | ✘ | ✓ | - | - | - | - |
| MMFake [Liu et al., 2024b] | ✘ | ✓ | - | - | - | - |
| Verite [Papadopoulos et al., 2024] | ✘ | ✓ | - | - | - | - |
| COSMOS [Aneja et al., 2021] | Mix | ✘ | - | - | - | - |
| FACTIFY [Mishra et al., 2022] | Mix | ✘ | - | - | - | - |
| FACTIFY 2 [Suryavardan et al., 2023] | Mix | ✘ | - | - | - | - |
| MMOOC [Xu et al., 2024] | Mix | ✓ | - | - | - | - |
| Fauxtography [Zlatkova et al., 2019] | ✓ | ✘ | - | - | - | - |
| Fakeddit [Nakamura et al., 2020] | ✓ | ✘ | - | - | - | - |
| Qprop [Barrón-Cedeño et al., 2019] | ✓ | ✘ | - | - | - | - |
| FakeNewsNet [Shu et al., 2020] | ✓ | ✘ | - | - | - | - |
| MuMiN [Nielsen and McConville, 2022] | ✓ | ✘ | -
| - | - | - |
| AVERIMATEC | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

Papadopoulos et al., 2024, Jia et al., 2023] have generated synthetic claims by applying manipulation techniques to the visual and textual modalities of image-text pairs. However, there are discrepancies between synthetic data and real-world image-text claims [Zeng et al., 2024, Papadopoulos et al., 2025], raising concerns about the generalization of models to real-world image-text claim verification. Although some benchmarks [Zlatkova et al., 2019, Barrón-Cedeño et al., 2019, Shu et al., 2020] focus on real-world claims (e.g., those derived from fact-checking articles), they all suffer from context dependence. Moreover, all existing datasets with image-text claims lack evidence annotations, limiting transparency and the ability to understand the rationale behind fact-checking verdicts.

3 Annotation Schema

Each claim in AVERIMATEC is normalised to be understandable alone, without additional context such as the original social media post or the associated fact-checking article. For each claim, we provide key metadata such as the speaker, publisher, publication date, and the relevant location. These metadata elements can serve as valuable evidence for claim verification. We further annotate each claim with claim type (e.g., quote verification, which determines whether a quote was actually attributed to the correct speaker) and fact-checking strategy (e.g., reverse image search to find background information about images). While not all of these annotations are directly used during verification, they offer insights for developing fact-checking models.

The sequence of QA pairs reflects the reasoning process involved in evidence retrieval and verification. Given the multimodal nature of the claims, both questions and answers may include images (i.e., image-related questions and image answers). A single question may have multiple answers, as there may be conflicts or disagreements in the evidence.
Questions may refer to previous questions and their answers, as long as they are understandable on their own, thus capturing multi-hop reasoning in fact-checking. Answers (other than "No answer could be found." or those derived via image analysis without external information) must be supported by a source url linking to a web page. To ensure long-term accessibility, all source pages are archived on the internet archive.¹ We also provide metadata for each annotated QA pair, including the question type (e.g., image-related or metadata-related), answering method, answer type (e.g., extractive, abstractive, boolean or image-based), and source medium type (the type of web content used as evidence).

Figure 2: Annotation pipeline. We first normalize the claim, then perform QA annotation to structure evidence retrieval. Two rounds of evidence sufficiency checks ensure annotation quality. [Stages: Claim Normalization → Question & Answer Generation → First Evidence Sufficiency Check → Add or Modify Question & Answer → Second Evidence Sufficiency Check; non-image-text claims, unverifiable claims and image manipulations are filtered out; verdict mismatches trigger revision.]

We follow the four-way veracity labeling schema from Schlichtkrull et al. [2023]: supported, refuted, not enough evidence, and conflicting/cherry-picking. An image-text claim can be refuted due to its textual part (the textual part is factually wrong) and/or due to misuse of images (e.g., misinterpreting the context of an image). For instance, the claim in Figure 1 is refuted due to both textual refutation and
image misuse. Not enough evidence refers to cases where evidence is insufficient to either support or refute a claim. Conflicting/cherry-picking covers cherry-picking claims, true-but-misleading claims, as well as claims with conflicting evidence. Conflicts among evidence have been extensively studied in the context of QA [Liu et al., 2024a, Lee et al., 2024]. A textual justification is added to each claim to explain how the verdict is reached on the basis of the evidence found. Considering that the evidence can contain multiple images, we assign unique special tokens to the images in the evidence (e.g., [IMG_1]) for annotators to refer to in justifications. Justifications may also include commonsense or inductive reasoning beyond the retrieved evidence. For instance, the evidence for the claim in Figure 1 proves the image was taken in 2016, while Kamala Harris' mother died in 2009; the justification infers that it is impossible Harris was with her parents in the image, as her mother had already died when it was taken.

4 Annotation Process

The five-phase annotation pipeline is illustrated in Figure 2, extending the annotation process proposed in Schlichtkrull et al. [2023] to the domain of image-text claims. In phase one, an annotator extracts and normalizes valid image-text claims from a given fact-checking article. For each extracted claim, a different annotator in the second phase generates questions and answers that reflect the rationale of the fact-checking based on the article, using evidence from the web. A provisional veracity label is annotated as well. Thirdly, a first round of evidence sufficiency checking is conducted by a third annotator, who provides a justification and a verdict based solely on the QA pairs, without consulting the fact-checking article. Differing verdicts between the second and third phases suggest possibly insufficient evidence from the second phase.
In this case, the claim is forwarded to a fourth phase to add or modify existing QA pairs, and to a fifth phase for an additional round of evidence sufficiency checking. Any claims with unresolved conflicts in the verdicts are discarded. We ensure that each annotation phase of a claim is done independently by using different annotators (i.e., the same claim will not be annotated by the same annotator twice). Details on annotation guidelines and annotators' demographics are provided in Appendix B and C, respectively.

Claim Extraction & Normalization. Given a fact-checking article, an annotator extracts all claims from it, as multiple claims may pertain to a single event. The modality types of claims are annotated, and only image-text claims (i.e., textual claims paired with images) are forwarded to phase two. For

¹ https://archive.org/

Table 2: Data statistics for dataset splits. End date refers to the latest publication date of claims included in each split. The start date of each dev and test split corresponds to the end date of the preceding split. The final row reports the distribution of claim labels across four categories: supported (S), refuted (R), conflicting/cherry-picking (C), and not enough evidence (N).

| Split | Train | Dev | Test |
| # Claims | 793 | 152 | 352 |
| # Images / Claim | 1.49 | 1.38 | 1.38 |
| # QA Pairs /
Claim | 2.86 | 2.84 | 3.11 |
| Reannotated (%) | 15.0 | 15.8 | 9.4 |
| End Date | 31-05-2023 | 31-07-2023 | 21-03-2025 |
| Labels (S / R / C / N) (%) | 1.6 / 95.3 / 0.8 / 2.3 | 2.6 / 92.8 / 0.7 / 3.9 | 13.9 / 78.1 / 2.0 / 6.0 |

identified image-text claims, sufficient context must be provided to ensure that the claim, specifically its textual component, can be understood independently of the surrounding article. The associated images are uploaded and normalized (e.g., separating collages or locating original, unaltered versions). Relevant metadata is also annotated. We exclude unverifiable claims (e.g., speculation or personal opinions) and claims where images are not used in verification or involve manipulated content.

Question Generation & Answering. Annotators in this phase are instructed to transform the rationales derived from fact-checking articles into a sequence of QA pairs. Each question is annotated with its question type, answering method and answers. For image-related questions, annotators must select relevant images from either the claim images or images from previous answers. If a question is neither marked as unanswerable nor pertains to image analysis, annotators are required to provide supporting urls for the evidence sources. We advise annotators to prioritize evidence sources linked within the fact-checking articles but to exclude anything published after the claim date, including the article itself. When linked sources are unavailable (e.g., dead links) or insufficient, annotators are provided with a custom Google search interface that supports both text and image queries. All pages retrieved through the interface are restricted to dates prior to the claim date to prevent temporal leakage [Glockner et al., 2022]. Based on the generated QA pairs, annotators assign a verdict.

Evidence Sufficiency Check. In this phase, a third annotator, who does not have access to the fact-checking article, is presented with the extracted claim and its associated annotated QA pairs.
The annotator is tasked with assigning a verdict and providing a textual justification. This verdict is then compared to the one generated during the QA annotation phase. A discrepancy between the two verdicts indicates insufficient evidence, and the QA annotation process is repeated to refine the QA pairs, followed by a second round of evidence sufficiency assessment with new annotators.

5 Dataset Statistics

Data Distribution. We began with 2,353 fact-checking articles, and after discarding 210 inaccessible ones, we obtained 1,297 annotated image-text claims using the annotation pipeline described in the previous section. The splits of our dataset are temporally organized, and detailed statistics are presented in Table 2. We find that 23.7% of image-text claims include more than one claim image, highlighting the need to understand multiple visual inputs. On average, each claim is annotated with 2.92 questions. Of these, 3.5% have more than one answer, and 62.5% are image-related, emphasizing the importance of visual context in verifying image-text claims. To address these questions, annotators frequently selected Image-search (53.9%) as the answering method, indicating the necessity of tools that support image-centric information retrieval. Regarding answer types, 58.8% are extractive, consistent with our annotation guidelines. Additionally, 1.6% of answers are images themselves, underscoring the importance of supporting image retrieval as direct answers. A small proportion (2.6%) of questions are marked as unanswerable, reflecting cases where no supporting evidence could be found online. Further metadata statistics, such as claim types and answer types, are provided in Appendix D.3. The dataset shows a label imbalance, with most claims being refuted, which is expected given that misleading content is more likely to be scrutinized.

Inter-Annotator Agreement. We re-annotated 100 claims with a different group of annotators, following the same annotation pipeline as described in Section 4. As in prior work [Schlichtkrull et al., 2023], we assume that the first phase of annotation has already been completed, and thus the re-annotation process begins from the second phase. We evaluate inter-annotator agreement for both verdict labels and QA annotations. For verdict agreement, we use Randolph's free-marginal multi-rater κ [Randolph, 2010], designed for unbalanced datasets [Warrens, 2010]. We obtained an agreement score of κ = 0.742. For comparison, AVeriTeC [Schlichtkrull et al., 2023] reported an agreement of 0.619. Using Fleiss' κ, a more traditional metric, our annotation process achieves an agreement score of 0.450. To evaluate QA annotations, the three best-performing annotators were provided with an extracted claim and two independently annotated sets of QA pairs, and asked to determine how many QA pairs in one set were covered by the other. We compute recall and precision by comparing the original annotated QA pairs against those from re-annotation. The recall rate is 74.7% and the precision is 67.2%. The substantial overlap between the two sets suggests strong agreement between annotators.

6 Evaluation

The evaluation of model accuracy on our dataset considers both the retrieved evidence and the veracity.
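One way the coupling of evidence quality and verdict accuracy described next can be operationalized is a conditional accuracy metric: a verdict counts as correct only when the claim's evidence score clears a threshold λ. The field names and toy scores below are ours, purely for illustration:

```python
def conditional_accuracy(examples, lam=0.3):
    """Accuracy where a verdict counts as correct only if the claim's
    evidence score exceeds the threshold `lam`; claims below the
    threshold are treated as labeled incorrectly."""
    correct = sum(
        1 for ex in examples
        if ex["evidence_score"] > lam and ex["pred"] == ex["gold"]
    )
    return correct / len(examples)

examples = [
    {"evidence_score": 0.45, "pred": "refuted",   "gold": "refuted"},
    {"evidence_score": 0.10, "pred": "refuted",   "gold": "refuted"},  # evidence too weak
    {"evidence_score": 0.50, "pred": "supported", "gold": "refuted"},  # wrong verdict
]
print(conditional_accuracy(examples, lam=0.3))  # 1/3: only the first claim counts
```

Note how the second claim is penalized despite a correct verdict, reflecting the requirement that a system must also retrieve appropriate evidence.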
Following [Thorne et al., 2018, Schlichtkrull et al., 2023], we first assess the quality of the retrieved evidence by comparing it against human-annotated references, and report veracity prediction accuracy conditioned on the evidence scores. That is, the accuracy of a verdict prediction is counted only if the associated evidence score exceeds a predefined threshold λ; otherwise the claim is considered to be labeled incorrectly. This reflects the requirement that an effective fact-checking system should not only return verdicts on claims but also provide appropriate evidence. Recent research [Akhtar et al., 2024] showed that reference-based evaluation of evidence with large language models (LLMs) aligns best with human assessments. We extend this framework to a multimodal setting where evidence from the web may comprise text and images. We transform each QA pair into an evidence statement following [Akhtar et al., 2024], where images are represented with special image tokens (e.g., [IMG_1]). For instance, the first QA pair in Figure 1 is transformed to "[IMG_1] was published in 2016." We then conduct separate reference-based evaluations for the textual and visual components. For the textual part, we use a validation method similar to [Akhtar et al., 2024]. If a matched evidence item is found within the ground-truth set, we proceed to a second step to compare the associated images. If image similarity falls below a threshold, the match is deemed invalid due to image mismatch. We use Gemini-2.0-Flash [DeepMind, 2024] as the scoring model for both steps of the reference-based evaluation, given its strong performance in prior evaluations [Akhtar et al., 2024]. We report evidence recall, defined as the percentage of ground-truth evidence instances successfully retrieved. We performed alignment and robustness checks (against adversarial attacks) of different reference-based evaluation schemes (text-only, interleaved and separated evaluation) in our setting (details in Appendix F.2 and F.3). Our separated reference-based evaluation achieved the highest alignment with human assessments, with a Spearman correlation coefficient (ρ) [Spearman, 1904] of 0.332 and a Pearson correlation coefficient (r) [Pearson and Henrici, 1896] of 0.381. Furthermore, the separated evaluation method is the most robust against adversarial attacks. Our evaluation primarily focuses on evidence retrieval and verdict prediction. Additionally, given the availability of justification annotations, we assess the quality of model-generated justifications by comparing them to the human-annotated ground truth with ROUGE-1 [Lin, 2004], a standard metric that is relatively tolerant of longer outputs. While ROUGE-1 provides a baseline assessment, we recognize that more sophisticated and targeted evaluation methods may be necessary. Similarly, we adopt a conditional justification generation score: any claim for which the evidence score is below λ receives a justification score of 0. The prompts used for the QA pair conversion and evaluation are provided in Appendix I.1 and I.2, respectively.

7 Experiments

7.1 Baselines

Our baseline system guides the fact-checking process by sequentially posing and answering essential questions to verify image-text claims. It consists of four components: a question generator, an answer generator, a verifier and a justification generator.
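The flow through these four components can be sketched as a simple driver loop. The generator, verifier and justifier arguments below are stubs standing in for the MLLM-backed modules described in the following paragraphs; all names are ours, not the paper's code:

```python
def fact_check(claim, images, ask, answer, verify, justify, max_questions=5):
    """Sequentially pose and answer questions, maintaining an evolving
    evidence history, then predict a verdict and generate a justification."""
    history = []  # summarized evidence from each QA pair
    for _ in range(max_questions):
        question = ask(claim, images, history)        # question generator
        if question is None:
            break
        evidence = answer(question, images, history)  # answer generator (tool-backed)
        history.append((question, evidence))
    verdict = verify(claim, history)                  # verifier (MLLM)
    rationale = justify(claim, history, verdict)      # justification generator
    return verdict, rationale, history

# Trivial stubs to exercise the driver:
qs = iter(["When was the image published?"])
verdict, rationale, hist = fact_check(
    "claim", ["img"],
    ask=lambda c, i, h: next(qs, None),
    answer=lambda q, i, h: "The image was published in 2016.",
    verify=lambda c, h: "refuted",
    justify=lambda c, h, v: "The image predates the claimed event.",
)
print(verdict, len(hist))  # refuted 1
```

Passing the running `history` into each call is what lets later questions condition on earlier evidence, which matters for the dynamic generation strategy described below.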
Throughout the question-answering process, the system maintains an evolving evidence history context, which is continuously updated with summarized evidence derived from each QA pair.

Table 3: Experimental results of baselines on AVERIMATEC. Q-Eval and Evid-Eval denote recall scores of generated questions and retrieved evidence, with reference to ground-truth questions and evidence. We report verdict prediction and justification generation scores conditioned on evidence retrieval performance, specifically only considering verdict accuracy and justification generation performance when the evidence score is above 0.2, 0.3 and 0.4.

LLM     MLLM     Q-Eval  Evid-Eval  Veracity (.2/.3/.4)  Justifications (.2/.3/.4)
Paralleled Question Generation
Gemini  Gemini   0.42    0.15       0.15 / 0.13 / 0.08   0.15 / 0.11 / 0.07
Qwen    Qwen-VL  0.43    0.18       0.09 / 0.08 / 0.05   0.13 / 0.11 / 0.07
Gemma   Gemma    0.39    0.21       0.14 / 0.12 / 0.09   0.17 / 0.14 / 0.10
Qwen    LLaVA    0.37    0.16       0.09 / 0.08 / 0.05   0.12 / 0.10 / 0.06
Dynamic Question Generation
Gemini  Gemini   0.33    0.22       0.17 / 0.16 / 0.12   0.17 / 0.16 / 0.11
Qwen    Qwen-VL  0.27    0.12       0.10 / 0.09 / 0.05   0.09 / 0.08 / 0.05
Gemma   Gemma    0.27    0.19       0.15 / 0.13 / 0.10   0.15 / 0.13 / 0.09
Qwen    LLaVA    0.32    0.16       0.13 / 0.11 / 0.08   0.11 / 0.10 / 0.08
Hybrid Question Generation
Gemini  Gemini   0.36    0.19       0.18 / 0.17 / 0.10   0.17 / 0.15 / 0.09
Qwen    Qwen-VL  0.37    0.16       0.11 / 0.09 / 0.06   0.12 / 0.10 / 0.06
Gemma   Gemma    0.26    0.25       0.16 / 0.15 / 0.11   0.19 / 0.17 / 0.12
Qwen    LLaVA    0.30    0.17       0.09 / 0.09 / 0.07   0.12 / 0.11 / 0.07

After the QA stage, the verifier receives the claim and the accumulated evidence context to predict a veracity label. Finally, the justification generator produces a rationale that explains how
the predicted verdict is supported by the evidence.

Question Generator. A straightforward approach is to generate all verification questions at once, given an image-text claim, as in prior work [Schlichtkrull et al., 2023]. We refer to this strategy as paralleled question generation (PQG). However, since later questions often depend on earlier ones (e.g., as shown in Figure 1), we also propose a dynamic question generation (DQG) method, where each subsequent question is generated based on both the claim and the evolving evidence history. To combine the strengths of both approaches, we introduce a hybrid question generation (HQG) method, which generates the first few questions in parallel and then switches to dynamic generation for the remaining ones. All three strategies employ an MLLM for question generation. The prompts used for each strategy are detailed in Appendix I.3.

Answer Generator. Given a generated question, the answer generator produces an answer to it. Inspired by recent research on tool usage [Wu et al., 2023, Cao et al., 2024, Braun et al., 2024], we integrate a set of specialized tools into the answer generation module, along with a tool selector that automatically selects the appropriate tool for a given question. The system includes tools for: (1) reverse image search (RIS) to retrieve text-based information associated with an image; (2) web search for texts (WST) to retrieve relevant web texts given a textual query; (3) web search for images (WSI) to retrieve relevant images using a textual query; and (4) visual question answering (VQA) to answer questions directly based on input images, including comparison and detail analysis. Among these, VQA, implemented with an MLLM, directly outputs an answer, while RIS and WST are followed by an LLM that leverages the retrieved text to generate an answer. WSI is followed by an MLLM, which incorporates the retrieved image into the answer generation process.
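The selection step routes each question to one of the four tools. In the paper this selector is an MLLM (prompts in Appendix I.4); the keyword heuristic below is purely an illustrative stand-in showing the dispatch structure, and the keywords are our own invention:

```python
TOOLS = ("RIS", "WST", "WSI", "VQA")

def select_tool(question: str) -> str:
    """Toy keyword-based stand-in for the MLLM tool selector."""
    q = question.lower()
    if "where was this image" in q or "origin of the image" in q:
        return "RIS"   # reverse image search: text info associated with an image
    if "find an image" in q or "show" in q:
        return "WSI"   # web search for images, given a textual query
    if "depict" in q or "visible" in q:
        return "VQA"   # answer directly from the input images
    return "WST"       # default: web search for texts

print(select_tool("Where was this image first published?"))     # RIS
print(select_tool("Find an image of the damaged bridge."))      # WSI
print(select_tool("What does the photo depict?"))               # VQA
print(select_tool("Who won the marathon?"))                     # WST
```

A real selector would also decide the downstream reader (LLM for RIS/WST, MLLM for WSI), mirroring the tool-to-model pairing described above.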
Prompts for tool selection and answer generation are provided in Appendix I.4 and I.5, respectively.

Verifier. The verifier takes the claim to be verified, the multimodal evidence context and the verdict definitions introduced in Section 3 to predict a veracity label for the claim (prompts shown in Appendix I.6). An MLLM serves as the verifier.

Justification Generator. Given the predicted verdict, the justification generator is asked to provide an explanation for the prediction. It also has access to the claim and evidence history (detailed prompts in Appendix I.7). We rely on an MLLM for justification generation.

Table 4: Baseline performance on verdict prediction and justification generation with ground-truth evidence (first block) and without accessing any external evidence (second block). NEE stands for Not Enough Evidence and Conflict. for Conflicting/Cherry-picking. Justi. denotes the performance of justification generation.

LLM     MLLM     Refuted  Supported  NEE   Conflict.  Overall  Justi.
Gemini  Gemini   0.84     0.92       0.52  0.00       0.82     0.50
Qwen    Qwen-VL  0.87     0.47       0.62  0.00       0.78     0.44
Gemma   Gemma    0.63     0.84       0.62  0.00       0.64     0.49
Qwen    LLaVA    0.55     0.47       0.90  0.00       0.55     0.43
Qwen    Qwen-VL  0.01     0.02       0.14  0.00       0.02     0.04

Model Implementation. Our baseline system includes both an LLM and an MLLM, which can take on different roles across components. We experimented with four combinations of LLMs and MLLMs: 1) Gemini-2.0-flash-001 [DeepMind, 2024] (Gemini) as both the LLM and the MLLM; 2) Qwen/Qwen2.5-7B-Instruct [Yang et al., 2024] (Qwen) acts as the LLM and Qwen2.5-VL-7B-Instruct [Bai et al., 2025] (Qwen-VL) serves as the MLLM; 3) Gemma-3-12B [Kamath et al., 2025] (Gemma), capable of both unimodal and multimodal understanding, is used as both the LLM and the MLLM; and 4) Qwen and LLaVA-Next-7B [Li et al., 2024] (LLaVA) work as the LLM and MLLM, respectively. More details on model implementation and experiment settings are in Appendix H.

7.2 Main Results

We investigated the performance of baseline models under both zero-shot and few-shot settings. In the latter, three training instances were used as demonstrations to guide question generation (details in Appendix H.2). Overall, few-shot baselines slightly outperform their zero-shot counterparts. Few-shot performance results for various combinations of LLMs and MLLMs are presented in Table 3 (zero-shot model performance is in Appendix G.1; the findings hold for both settings). We report conditional veracity accuracy and ROUGE-1 scores for justification generation under varying evidence evaluation thresholds, λ = {0.2, 0.3, 0.4}.

Comparison of Question Generation Strategies. Among the three question generation strategies, the parallel approach consistently outperformed the others in producing critical questions for fact-checking. The dynamic strategy, where questions are generated based on evolving evidence, yielded weaker performance. This suggests that the increased complexity of reasoning over evolving interleaved image-text evidence poses significant challenges for current MLLMs.
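The dynamic strategy discussed above generates each question conditioned on the evidence accumulated so far. One of its observed failure modes, repetitive questions, can at least be filtered mechanically; the sketch below adds verbatim deduplication, which is our addition for illustration and not part of the paper's baselines:

```python
def dynamic_questions(claim, generate, max_q=5):
    """Generate questions one at a time, conditioning each call on the
    evolving evidence history and skipping exact repeats (a failure mode
    noted for open-source MLLMs under the dynamic strategy)."""
    history, questions = [], []
    for _ in range(max_q):
        q = generate(claim, history)
        if q is None:
            break
        if q in questions:      # drop verbatim repetitions
            continue
        questions.append(q)
        history.append((q, f"evidence for: {q}"))  # stand-in for the answer step
    return questions

stub = iter(["Q1", "Q1", "Q2", None])
print(dynamic_questions("claim", lambda c, h: next(stub, None)))  # ['Q1', 'Q2']
```

Exact-match filtering would not catch paraphrased repeats, which is one reason repetition remains a model-side problem rather than a purely mechanical one.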
Open-source MLLMs often generated repetitive questions when using the dynamic strategy, further highlighting their limitations in handling complex multimodal inputs.

Performance of Evidence Retrieval. Evidence scores across all models are significantly lower than their question evaluation scores, underscoring the difficulty of retrieving appropriate evidence for image-text verification. One reason for the failure of evidence retrieval is that models exhibited a bias toward using VQA as the answering tool (e.g., Qwen + Qwen-VL in PQG selected VQA as the answering tool for 30% of questions), diverging from human fact-checkers' preferences, despite being provided with tool selection demonstration examples (further discussion in Appendix H). MLLMs tend to rely more on internal image details than on external contextual information to address image-related questions. Additionally, approximately 13% of images failed to retrieve any contextual information via RIS (i.e., no web pages published before the claim dates could be found), consistent with the findings of [Tonglet et al., 2024]. A notable portion of questions, e.g., 30% of questions of the Qwen + Qwen-VL baseline with PQG, elicited responses such as "No answer could be found", based on the retrieved evidence. Besides the failure of RIS, this can be attributed to two main factors: (1) many web pages retrieved by RIS were non-scrapable (e.g., Instagram posts), and (2) the baseline employed a naive evidence-ranking method, BM25 [Robertson and Zaragoza, 2009], which neither considers the visual content of images nor incorporates fine-grained re-ranking. Interestingly, higher scores for question
generation did not always translate into better evidence retrieval. For instance, under the hybrid strategy, the Gemini-based baseline achieved a 0.1-point higher question generation score than the Gemma-based model, but had worse evidence retrieval performance. Further analysis showed that Gemini generated a higher proportion of RIS-dependent questions (40.1% vs. 32.4%), and the majority (62.6% vs. 31.6%) of these were unanswerable using the retrieved evidence.

7.3 Analysis and Discussion

We conducted analyses to assess baseline model performance under two conditions: (1) using ground-truth evidence (first block of Table 4), and (2) disabling web-based evidence retrieval (second block of Table 4). For both settings, we report conditional accuracy and justification generation scores with the evidence threshold set to λ = 0.3.

Baselines' Performance with Ground-truth Evidence. The results obtained using gold evidence represent upper-bound performance, highlighting the models' full potential. Most baselines perform well in verdict prediction under this setting, underscoring both the importance and the difficulty of effective evidence retrieval. Notably, the Gemini-based baseline achieves the highest scores for both prediction and justification generation. In contrast, baselines using LLaVA as the MLLM demonstrate the weakest performance. This is expected, as LLaVA [Li et al., 2024] was not pre-trained on interleaved image-text documents. Across all baselines, we observe a consistent failure to identify conflicting claims. This result aligns with prior findings on the challenge of detecting conflicting textual claims [Schlichtkrull et al., 2023]. Moreover, our dataset contains a relatively small number of conflicting claims, which can cause instability in model performance.
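One established way to surface conflict, used by [Schlichtkrull et al., 2023] for textual claims, is to predict a verdict per evidence piece and flag disagreement. The sketch below illustrates that aggregation idea only; it is not the verifier used by the baselines in this paper, and the label strings are illustrative:

```python
def aggregate_verdicts(per_evidence):
    """Aggregate per-evidence verdicts: simultaneous support and refutation
    across evidence pieces signals a conflicting/cherry-picking claim."""
    labels = {v for v in per_evidence if v in ("supported", "refuted")}
    if labels == {"supported", "refuted"}:
        return "conflicting"
    if labels:
        return labels.pop()
    return "not enough evidence"

print(aggregate_verdicts(["supported", "refuted"]))  # conflicting
print(aggregate_verdicts(["refuted", "refuted"]))    # refuted
print(aggregate_verdicts([]))                        # not enough evidence
```

Such piecewise aggregation assumes each evidence item is independently decisive, which is exactly what the cross-QA dependencies discussed next undermine.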
It is also worth noting that our models do not explicitly model conflict within evidence, in contrast to [Schlichtkrull et al., 2023], which predicted a verdict per evidence piece and then examined whether these verdicts conflicted. In our setup, we observe stronger dependencies between multiple QA pairs: individual QA pairs are often insufficient to support or refute a claim. For example, only by combining the first and second QA pairs in Figure 1 can the model conclusively refute the claim.

Baselines' Performance without Searching. For the baseline without external evidence retrieval, we selected the combination of Qwen + Qwen-VL under PQG, as it demonstrated the strongest performance in question generation for claim verification. In this setting, all questions that require external search (those invoking the RIS, WST, or WSI tools) are marked with the response "No answer could be found." As shown in the second block of Table 4, the model achieves an evidence score of 0.06, reflecting a significant drop in evidence retrieval. This result underscores the critical role of web-based information retrieval in real-world claim verification tasks. Nonetheless, in a few isolated cases, the model was still able to generate accurate evidence. These instances typically involved questions about locations or events depicted in the image, which the VQA tool could address correctly, likely because the model encountered these images during pretraining. However, such behavior also raises concerns about potential data leakage from MLLMs. Since the model's responses are not grounded in externally retrieved evidence, there is a risk that its predictions stem from memorized content or prior exposure to