text
string
source
string
fact-checking articles during training. 8 Limitations We made considerable efforts to prevent temporal leakage. Specifically, we provided annotators with custom search bars to retrieve online evidence published prior to the claim date. However, the exact dates of a small portion of claims ( 5%) were unavailable. In such cases, we used the dates of the corresponding fact-checking articles, which may have been published a few days after the original claims. Additionally, we relied on Google Search and the Python package htmldate.find_date to estimate publication dates of web pages, which are coarse approximations. Furthermore, we exploited a reference-based evaluation strategy for both the generated questions and the retrieved evidence. Though it aligns well with human assessments, it has limitations in cases where model predictions are reasonable but not reflected in the reference annotations. In such cases, evaluation scores may be undeservedly low due to poor alignment with the references. 9 Conclusion We present a real-world image-text claim verification dataset, annotated with QA pairs that capture the reasoning and evidence retrieval processes involved in claim verification. To ensure high annotation quality, we employed a multi-stage evidence sufficiency validation process, resulting in substantial inter-annotator agreement on both verdicts and QA annotations. In addition, we introduce a reference- based evaluation framework for open-web multimodal evidence retrieval, along with a set of baseline models for image-text claim verification that leverage web-sourced evidence. These contributions provide a foundation for advancing research in image-text claim verification. 9 Acknowledgement This research was supported by the Alan Turing Institute and DSO National Laboratories in Singapore Partnership (ref DCfP2\100063). Zifeng Ding and Andreas Vlachos were further supported by the ERC grant A VeriTeC (GA 865958). Michael Schlichtkrull is supported by the Engineering and Physical Sciences Research Council (grant number EP/Y009800/1), through funding from Responsible Ai UK (KP0016). References Mubashara Akhtar, Michael Sejr Schlichtkrull, and Andreas Vlachos. Ev2r: Evaluating evidence retrieval in automated fact-checking. CoRR , abs/2411.05375, 2024. Tariq Alhindi, Savvas Petridis, and Smaranda Muresan. Where is your evidence: Improving fact-checking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERification, FEVER@EMNLP , pages 85–90, 2018. Hunt Allcott and Matthew Gentzkow. Social media and fake news in the 2016 election. The Journal of Economic Perspectives , 31(2):211–235, 2017. ISSN 08953309. Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopou- los, Oana Cocarascu, and Arpit Mittal. FEVEROUS: fact extraction and verification over unstructured and structured information. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks , 2021. Shivangi Aneja, Christoph Bregler, and Matthias Nießner. Catching out-of-context misinformation with self- supervised learning. CoRR , abs/2101.06278, 2021. Sinan Aral and Dean Eckles. Protecting elections from social media manipulation. Science , 365(6456):858–861, 2019. Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. Multifc: A real-world multi-domain dataset for evidence-based fact checking of claims. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP , pages 4684–4696, 2019. Shuai
https://arxiv.org/abs/2505.17978v1
Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Ming-Hsuan Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report. CoRR , abs/2502.13923, 2025. Dominik Bär, Nicolas Pröllochs, and Stefan Feuerriegel. New threats to society from free-speech social media platforms. Commun. ACM , 66(10):37–40, 2023. Aviv Barnoy and Zvi Reich and. The when, why, how and so-what of verifications. Journalism Studies , 20(16): 2312–2330, 2019. Alberto Barrón-Cedeño, Israa Jaradat, Giovanni Da San Martino, and Preslav Nakov. Proppy: Organizing the news based on their propagandistic content. Inf. Process. Manag. , 56(5):1849–1864, 2019. Tobias Braun, Mark Rothermel, Marcus Rohrbach, and Anna Rohrbach. DEFAME: dynamic evidence-based fact-checking with multimodal experts. CoRR , abs/2412.10510, 2024. Rui Cao, Yuming Jiang, Michael Sejr Schlichtkrull, and Andreas Vlachos. Decompose and leverage preferences from expert models for improving trustworthiness of mllms. CoRR , abs/2411.13697, 2024. Jifan Chen, Grace Kim, Aniruddh Sriram, Greg Durrett, and Eunsol Choi. Complex claim verification with evidence retrieved in the wild. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), NAACL , pages 3569–3587, 2024. DeepMind. Introducing gemini 2.0: our new ai model for the agentic era, 2024. URL https://blog.google/ technology/google-deepmind/google-gemini-ai-update-december-2024/ . Chiara Patricia Drolsbach, Kirill Solovev, and Nicolas Pröllochs. Community notes increase trust in fact-checking on social media. PNAS Nexus , 3(7):pgae217, 2024. ISSN 2752-6542. doi: 10.1093/pnasnexus/pgae217. 10 Nicholas Dufour, Arkanath Pathak, Pouya Samangouei, Nikki Hariri, Shashi Deshetti, Andrew Dudfield, Christopher Guess, Pablo Hernández Escayola, Bobby Tran, Mevan Babakar, and Christoph Bregler. Ammeba: A large-scale survey and dataset of media-based misinformation in-the-wild. CoRR , abs/2405.11697, 2024. Yi R. Fung, Christopher Thomas, Revanth Gangi Reddy, Sandeep Polisetty, Heng Ji, Shih-Fu Chang, Kathleen R. McKeown, Mohit Bansal, and Avi Sil. Infosurgeon: Cross-media fine-grained information consistency checking for fake news detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP , pages 1683–1698, 2021. Max Glockner, Yufang Hou, and Iryna Gurevych. Missing counter-evidence renders NLP fact-checking unrealistic for misinformation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP , pages 5916–5936, 2022. Zhijiang Guo, Michael Sejr Schlichtkrull, and Andreas Vlachos. A survey on automated fact-checking. Trans. Assoc. Comput. Linguistics , 10:178–206, 2022. Andreas Hanselowski, Christian Stab, Claudia Schulz, Zile Li, and Iryna Gurevych. A richly annotated corpus for different tasks in automated fact-checking. In Proceedings of the 23rd Conference on Computational Natural Language Learning, CoNLL , pages 493–503, 2019. Shan Jia, Mingzhen Huang, Zhou Zhou, Yan Ju, Jialing Cai, and Siwei Lyu. Autosplice: A text-prompt manipulated image dataset for media forensics. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR , pages 893–903, 2023. Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas
https://arxiv.org/abs/2505.17978v1
Mesnard, Geoffrey Cideron, Jean-Bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, Gaël Liu, Francesco Visin, Kathleen Kenealy, Lucas Beyer, Xiaohai Zhai, Anton Tsitsulin, Róbert Busa-Fekete, Alex Feng, Noveen Sachdeva, Benjamin Coleman, Yi Gao, Basil Mustafa, Iain Barr, Emilio Parisotto, David Tian, Matan Eyal, Colin Cherry, Jan-Thorsten Peter, Danila Sinopalnikov, Surya Bhupatiraju, Rishabh Agarwal, Mehran Kazemi, Dan Malkin, Ravin Kumar, David Vilar, Idan Brusilovsky, Jiaming Luo, Andreas Steiner, Abe Friesen, Abhanshu Sharma, Abheesht Sharma, Adi Mayrav Gilady, Adrian Goedeckemeyer, Alaa Saade, Alexander Kolesnikov, Alexei Bendebury, Alvin Abdagic, Amit Vadi, András György, André Susano Pinto, Anil Das, Ankur Bapna, Antoine Miech, Antoine Yang, Antonia Paterson, Ashish Shenoy, Ayan Chakrabarti, Bilal Piot, Bo Wu, Bobak Shahriari, Bryce Petrini, Charlie Chen, Charline Le Lan, Christopher A. Choquette- Choo, CJ Carey, Cormac Brick, Daniel Deutsch, Danielle Eisenbud, Dee Cattle, Derek Cheng, Dimitris Paparas, Divyashree Shivakumar Sreepathihalli, Doug Reid, Dustin Tran, Dustin Zelle, Eric Noland, Erwin Huizenga, Eugene Kharitonov, Frederick Liu, Gagik Amirkhanyan, Glenn Cameron, Hadi Hashemi, Hanna Klimczak-Plucinska, Harman Singh, Harsh Mehta, Harshal Tushar Lehri, Hussein Hazimeh, Ian Ballantyne, Idan Szpektor, and Ivan Nardini. Gemma 3 technical report. CoRR , abs/2503.19786, 2025. Georgi Karadzhov, Preslav Nakov, Lluís Màrquez, Alberto Barrón-Cedeño, and Ivan Koychev. Fully automated fact checking using external sources. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP , pages 344–353, 2017. Yoonsang Lee, Xi Ye, and Eunsol Choi. Ambigdocs: Reasoning across documents on different entities under the same name. CoRR , abs/2404.12447, 2024. Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava- next-interleave: Tackling multi-image, video, and 3d in large multimodal models. CoRR , abs/2407.07895, 2024. Yiyi Li and Ying Xie. Is a picture worth a thousand words? an empirical study of image content and social media engagement. Journal of Marketing Research , 57(1):1–19, 2020. doi: 10.1177/0022243719881113. Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, 2004. Siyi Liu, Qiang Ning, Kishaloy Halder, Wei Xiao, Zheng Qi, Phu Mon Htut, Yi Zhang, Neha Anna John, Bonan Min, Yassine Benajiba, and Dan Roth. Open domain question answering with conflicting contexts. CoRR , abs/2410.12311, 2024a. Xuannan Liu, Zekun Li, Peipei Li, Shuhan Xia, Xing Cui, Linzhi Huang, Huaibo Huang, Weihong Deng, and Zhaofeng He. Mmfakebench: A mixed-source multimodal misinformation detection benchmark for lvlms. CoRR , abs/2406.08772, 2024b. 11 Grace Luo, Trevor Darrell, and Anna Rohrbach. Newsclippings: Automatic generation of out-of-context multimodal media. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP , pages 6801–6817, 2021. Meta. Introducing llama 3.1: Our most capable models to date, 2024. URL https://ai.meta.com/blog/ meta-llama-3-1/ . Shreyash Mishra, Suryavardan S, Amrit Bhaskar, Parul Chopra, Aishwarya N. Reganti, Parth Patwa, Amitava Das, Tanmoy Chakraborty, Amit P. Sheth, and Asif Ekbal. FACTIFY: A multi-modal fact verification dataset. InProceedings of the Workshop on Multi-Modal Fake News and Hate-Speech Detection (DE-FACTIFY 2022) co-located with the Thirty-Sixth AAAI Conference on Artificial Intelligence AAAI , volume 3199 of CEUR Workshop Proceedings , 2022. Kai Nakamura, Sharon Levy, and William
https://arxiv.org/abs/2505.17978v1
Yang Wang. Fakeddit: A new multimodal benchmark dataset for fine-grained fake news detection. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC , pages 6149–6157, 2020. Preslav Nakov, David Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barrón-Cedeño, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. Automated fact-checking for assisting human fact- checkers. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21 , pages 4551–4558, 8 2021. Survey Track. Eryn J. Newman, Maryanne Garry, Daniel M. Bernstein, Justin Kantner, and Stephen Lindsay. Nonprobative photographs (or words) inflate truthiness. Psychonomic Bulletin & Review , 19:969–974, 2012. Dan Saattrup Nielsen and Ryan McConville. Mumin: A large-scale multilingual multimodal fact-checked misinformation social network dataset. In SIGIR ’22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval , pages 3141–3153, 2022. Nedjma Ousidhoum, Zhangdie Yuan, and Andreas Vlachos. Varifocal question generation for fact-checking. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP , pages 2532–2544, 2022. Stefanos-Iordanis Papadopoulos, Christos Koutlis, Symeon Papadopoulos, and Panagiotis C. Petrantonakis. VERITE: a robust benchmark for multimodal misinformation detection accounting for unimodal bias. Int. J. Multim. Inf. Retr. , 13(1):4, 2024. Stefanos-Iordanis Papadopoulos, Christos Koutlis, Symeon Papadopoulos, and Panagiotis C. Petrantonakis. Similarity over factuality: Are we making progress on multimodal out-of-context misinformation detection? InProceedings of the Winter Conference on Applications of Computer Vision (WACV) , pages 5570–5579, 2025. Karl Pearson and Olaus Magnus Friedrich Erdmann Henrici. Vii. mathematical contributions to the theory of evolution.& iii. regression, heredity, and panmixia. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character , 187:253–318, 1896. Gordon Pennycook. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences , 116:201806781, 2019. doi: 10.1073/pnas. 1806781116. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, ICML , volume 139, pages 8748–8763, 2021. Justus Randolph. Free-marginal multirater kappa (multirater free): An alternative to fleiss fixed-marginal multirater kappa. Advances in Data Analysis and Classification , 4, 01 2010. Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 4902–4912, 2020. Stephen E. Robertson and Hugo Zaragoza. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr. , 3(4):333–389, 2009. Yasmim Rocha, Gabriel Moura, Gabriel Alves Desiderio, Carlos Oliveira, Francisco Lourenço, and Larissa Nicolete. The impact of fake news on social media and its influence on health during the covid-19 pandemic: a systematic review. Journal of Public Health , 31:1–10, 2021. 12 Michael Sejr Schlichtkrull, Zhijiang Guo, and Andreas Vlachos. Averitec: A dataset for real-world claim verification with evidence from the web. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023,
https://arxiv.org/abs/2505.17978v1
NeurIPS , volume 36, pages 65128–65167, 2023. Rui Shao, Tianxing Wu, and Ziwei Liu. Detecting and grounding multi-modal media manipulation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR , pages 6904–6913, 2023. Jieun Shin and Kjerstin Thorson. Partisan selective sharing: The biased diffusion of fact-checking messages on social media. Journal of Communication , 67(2):233–255, 2017. ISSN 0021-9916. Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, and Huan Liu. Fakenewsnet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media. Big Data , 8(3):171–188, 2020. C. Spearman. The proof and measurement of association between two things. American Journal of Psychology , 15:88–103, 1904. S. Suryavardan, Shreyash Mishra, Parth Patwa, Megha Chakraborty, Anku Rani, Aishwarya Naresh Reganti, Aman Chadha, Amitava Das, Amit P. Sheth, Manoj Chinnakotla, Asif Ekbal, and Srijan Kumar. Factify 2: A multimodal fake news and satire news dataset. In Proceedings of De-Factify 2: 2nd Workshop on Multimodal Fact Checking and Hate Speech Detection, co-located with AAAI , volume 3555 of CEUR Workshop Proceedings , 2023. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. FEVER: a large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT , pages 809–819, 2018. Jonathan Tonglet, Marie-Francine Moens, and Iryna Gurevych. "image, tell me your story!" predicting the original meta-context of visual misinformation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP , pages 7845–7864, 2024. Matthijs J. Warrens. Inequalities between multi-rater kappas. Adv. Data Anal. Classif. , 4(4):271–286, 2010. ISSN 1862-5347. Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. CoRR , abs/2303.04671, 2023. Qingzheng Xu, Heming Du, Huiqiang Chen, Bo Liu, and Xin Yu. MMOOC: A multimodal misinformation dataset for out-of-context news analysis. In Information Security and Privacy - 29th Australasian Conference, ACISP , volume 14897 of Lecture Notes in Computer Science , pages 444–459, 2024. An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. CoRR , abs/2412.15115, 2024. Barry Menglong Yao, Aditya Shah, Lichao Sun, Jin-Hee Cho, and Lifu Huang. End-to-end multimodal fact-checking and explanation generation: A challenging dataset and models. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR , pages 2733–2743, 2023. Fengzhu Zeng, Wenqian Li, Wei Gao, and Yan Pang. Multimodal misinformation detection by learning from synthetic data with multimodal llms. In Findings of the Association for Computational Linguistics: EMNLP , pages 10467–10484, 2024. Dimitrina Zlatkova, Preslav
https://arxiv.org/abs/2505.17978v1
Nakov, and Ivan Koychev. Fact-checking meets fauxtography: Verifying claims about images. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP , pages 2099–2108, 2019. 13 A Statement of Dataset We provide the training and development splits of our dataset for review.2However, due to naming inconsistencies with Kaggle, we are unable to upload the images there. Instead, we have made the image files available on Harvard Dataverse.3We cannot upload the entire dataset (including the .json file and images) to Harvard Dataverse, as it requires publishing the dataset when generating the Croissant file. To avoid unexpected inaccessibility of images on Harvard Dataverse, we also upload the whole development set (images and json file) to supplementary , due to file size limits. Since we plan to use the test split for future shared tasks, we are withholding it at the time of submission. However, the test split will be made privately available to reviewers upon request. If accepted, we will publicly release the training and development splits of the dataset and maintain both the data and baseline code on Harvard Dataverse and GitHub. The dataset and baselines will be licensed under the CC BY-NC 4.0 license. B Annotation Platform and Guidelines The annotation was performed using a custom-developed web platform specifically designed for this task from our team. We will make the platform’s source code available upon request. Annotators were provided with detailed guidelines outlining the annotation process. Due to the length of the guideline, we attached it to the supplementary . After reviewing the instructions, they underwent training for each phase. Specifically, we provide 14,12and12training instances for the first three phases, respectively. Phases four and five replicate phases two and three; therefore, no additional training instances were provided. We offered continuous feedback to annotators throughout both the training and annotation phases. C Annotation Fees and Annotator Demographics The annotation was conducted with the assistance of Appen,4a private company that provides machine learning services. Annotators were recruited through the company, which ensures fair compensation practices in accordance with their fair pay guidelines.5A total of 20 annotators participated in the project, 12 women ( 60%) and 8 men ( 40%). Fourteen annotators were based in the United States and six in the United Kingdom. Regarding age distribution, 20% of annotators were between 18 and 30 years old, 35% between 31 and 45, 25% were older than 45, and 20% chose not to disclose their age. D Additional Dataset Information In this section, we provide additional dataset statistics. D.1 Statistics for Data Filtering in Phase One We began with 2,353fact-checking articles in Phase One (P1) of the annotation process. To increase the proportion of image-text claims, we incorporated partially annotated articles from three sources: 1) filtered articles from A VeriTec [Schlichtkrull et al., 2023] containing multimodal claims, 2) articles verifying image-related claims from AMMEBA [Dufour et al., 2024] and 3) all true claims from ClaimReview6over the past two years that included keywords such as phtos orpictures . Among the 2,353 articles, we
https://arxiv.org/abs/2505.17978v1
identified 91 duplicate claims (e.g., those verified by multiple fact-checking organizations). Duplicates were detected automatically by computing cosine similarity and Jaccard distance between N-gram representations of textual part of claims. We manually reviewed claim pairs with a cosine similarity or Jaccard distance greater than 0.3. Despite these efforts, a small number of duplicates (fewer than 5%) remained, as fact-checkers may paraphrase similar claims differently. 2https://kaggle.com/datasets/a1ebcc27f233ca16e80fa7007d1a8cea6c491656d7b4ca6d87531df61c5089fd 3https://dataverse.harvard.edu/previewurl.xhtml?token=5fbd9e3c-bc9e-49d7-a109-eace6f1bdacd 4https://www.appen.com/ 5https://success.appen.com/hc/en-us/articles/9557008940941-Guide-to-Fair-Pay 6https://developers.google.com/fact-check/tools/api 14 However, due to the temporal split of our dataset, such duplication does not cause data leakage across the splits. Paywalled or inaccessible articles were reported by annotators and subsequently discarded. Although we leveraged coarsely annotated articles that verify multimodal claims, not all of them involved image-text relationships. In the first step, we filtered out claims of other modalities and retained only the image-text claims (around 85%) for further P1 annotation, which included image uploading, metadata collection, and verdict annotation. We further filter out image-text claims, where images are not used in claim verification (14.7%) or which involve image manipulations(4.0%). Both types were excluded from subsequent phases. Following [Schlichtkrull et al., 2023], we discard speculative claims, personal opinions and claims relying solely on fact-checking reference as the fact-checking strategy. After the P1 annotation, we retained a total of 1,457 valid image-text claims, for further processing in subsequent phases. D.2 Discarded Claims In Phase Two, annotators can report and skip invalid claims, particularly when images associated with claims are inaccessible due to regional restrictions. A total of 48such cases were discarded. As described in Section 4, when there are conflicting verdicts between annotators in Phases Two and Three, the affected claims proceed to Phases Four and Five for quality assurance updates and a second round of evidence sufficiency checks. In our dataset, 14.6%of claims required this second round of evaluation. Claims with unresolved conflicts after the five-phase annotation process are also discarded. Overall, approximately 7%of claims were excluded due to irreconcilable disagreements in annotated verdicts, even after the second round of evidence sufficiency checks. D.3 Metadata Distributions Following [Schlichtkrull et al., 2023], we provide details of distributions of metadata of data splits in this part. Table 5: Distributions of claim types. Claim Type Train Dev Test Event/Property Claim 85.4 91.4 93.5 Causal Claim 5.7 0.7 3.4 Numerical Claim 3.2 3.9 1.4 Media Analysis Claim 21.1 14.5 30.1 Media Publishing Claim 2.1 2.0 2.0 Position Statement 1.2 0.7 0.3 Quote Verification 1.4 0.7 0.3 Table 6: Distributions of fact-checking strategies. Claim Type Train Dev Test Media Source Discovery 26.4 20.4 26.4 Keyword Search 19.5 19.7 14.8 Written Evidence 87.5 89.5 84.7 Reverse Image Search 50.2 57.9 67.3 Image Analysis 20.7 24.3 21.3 Consultation 30.0 24.3 16.8 Video Analysis 6.8 4.6 8.2 Numerical Comparison 2.9 3.3 1.4 Satirical Source Identification 1.9 3.3 1.4 Fact-checker Reference 11.0 11.8 3.7 Geolocation 4.0 5.3 5.4 15 Metadata about Claims. We present the distribution of claim types and fact-checking strategies in Table 5 and 6, respectively. It is important to note that a single claim may belong to multiple claim types and can be verified using several fact-checking strategies. The
https://arxiv.org/abs/2505.17978v1
most relevant locations Table 7: Counts of locations associated with claims. Countries are represented with their ISO country code. We do not show countries with fewer than ten occurrences in the table, whereas the complete location information is available in our dataset. Country code Counts IN 417 US 309 GB 74 UA 37 PK 30 IL 27 NG 20 LK 19 TR 18 PS 17 KR 17 KE 15 AU 14 RU 13 BD 12 JP 11 TH 10 MY 10 CN 10 CA 10 CR 10 associated with claims are listed in Table 7. We observed a bias in the geographic distribution of claims. As our dataset includes claims verified by fact-checkers, it may inherit any biases present in the original fact-checking sources [Shin and Thorson, 2017, Barnoy and and, 2019]. Table 8: Distributions of question types. Question Type Train Dev Test Text-related 35.2 38.9 31.8 Image-related 63.4 58.8 62.2 Metadata-related 3.2 4.4 8.3 Commonsense-related 0.8 0.9 1.1 Metadata about QA Annotations. In addition to metadata for claims, we also provide metadata associated with the annotated QA pairs, including question types, answering methods, answer types and source mediums. Metadata statistics for questions and answers are presented in Table 8 and Table 9, respectively. Notably, a single question may belong to multiple question types. E Inter-Annotator Agreement To assess the quality of our annotated data, we recruited a different set of annotators to re-annotate 100 randomly sampled claims from our dataset and performed an inter-annotator agreement check. During re-annotation, we assumed that claim extraction and normalization had already been completed, and the annotators proceeded with the remaining phases. We ensured that the sample included at least five claims for each veracity label. The inter-annotator agreement check was done regarding both the agreement on verdicts and agreements on annotated QA pairs. 16 Table 9: Distributions of metadata related to answers. The first block is the distribution for answering methods, the second for answer types and the third for source medium. Question Type Train Dev Test Image-search 54.6 54.6 51.9 Text-search 40.1 41.0 37.4 Metadata 1.8 3.0 7.7 Image Analysis 3.2 1.4 2.5 Abstractive 16.4 15.0 19.1 Extractive 57.9 61.3 58.9 Unanswerable 2.2 3.2 3.2 Boolean 21.9 18.8 17.3 Image 1.6 1.6 1.5 Web text 85.2 85.2 78.9 PDF 0.8 1.2 0.8 Metadata 1.2 2.1 7.1 Video 2.5 1.2 2.9 Image/graphic 4.4 5.3 4.6 Web table 0.3 0.5 0.1 Other 0.2 0.0 0.1 Verdict Agreement. To evaluate agreement on verdicts, we used Randolph’s [Warrens, 2010] free-marginal multi-rater κ, which is well-suited for unbalanced datasets, following previous prac- tices [Schlichtkrull et al., 2023, Ousidhoum et al., 2022]. We achieved an agreement score of 0.742 on the double-annotated claims. QA Pair Agreement. For assessing agreement on annotated QA pairs, we recruited three best per- forming annotators to compare the similarities between the original and re-annotated QA annotations. Specifically, we instructed them to evaluate 1) whether the annotated verdict from our dataset could be supported by the original QA pairs; 2) how many original QA pairs for a claim are covered by the re-annotated QA pairs, and
https://arxiv.org/abs/2505.17978v1
3) how many QA pairs from the re-annotation are covered by the original QA pairs. The platform used for this agreement check is shown in Figure 3. Justification Evaluation. Although we primarily focused on verdict and QA annotation agreement, we also conducted a small-scale human evaluation to assess the quality of justifications. We randomly sampled 20 justifications from our dataset and from [Schlichtkrull et al., 2023], and asked a human evaluator to rate them on a scale from 0 to 5. Our justifications achieved an average score of 4.2, compared to an average score of 1.95 from [Schlichtkrull et al., 2023]. F Human Alignment of Evaluation Metrics Following [Schlichtkrull et al., 2023], we evaluate the quality of generated questions and retrieved evidence. Motivated by a recent study [Akhtar et al., 2024], we adopt a reference-based evaluation method that compares model responses to human-annotated ground-truth data. This method is applied to both question and evidence evaluations. To assess the reliability of the reference-based evaluation, we compare the resulting scores with human judgments obtained from independent raters (see Appendix F.1 and F.2). Additionally, we conduct checklist tests to evaluate the sensitivity of the reference-based evidence evaluation method, following the approach outlined in [Akhtar et al., 2024] (Appendix F.3). F.1 Alignment Check on Question Evaluation. Regarding question evaluation, we focus on assessing the semantics (i.e., textual content) of the questions. This evaluation approach aligns with that used in [Akhtar et al., 2024] and has also been 17 Annotated QA Pairs Claim Information ……Task Introduction Re-annotated QA Pairs ……Figure 3: Platform and instructions for validating annotators’ agreement on QA annotations. applied at the FEVER workshop,7demonstrating its reliability and strong correlation with human ratings. To further ensure alignment with human judgment in our setting, we conducted a small-scale human evaluation to validate the reference-based evaluation of questions. Specifically, we invited two NLP researchers (both authors of this paper) to participate in the assessment. They were presented with model-generated questions from the first 20 testing claims in our dataset and were asked to compare these questions against ground-truth annotated questions. The comparison was made based on two criteria: Relevance to claim verification (how relevant the generated questions are for verifying the claim) and Coverage of the ground-truth questions (how many ground-truth questions are accurately 7https://huggingface.co/spaces/fever/A VeriTeCFEVER8 18 covered by the predicted questions.). The agreement between human and the evaluation metric was measured using Spearman ( ρ) [Spearman, 1904], achieving a value of 0.705 , and Pearson correlation coefficients ( r) [Pearson and Henrici, 1896] achieving a value of 0.791 . These results indicate a strong alignment between human judgments and the reference-based evaluation scores on question evaluation. Table 10: Correlation between human evaluation and automatic evaluation metrics. We presents both the Spearman ( ρ) and Pearson ( r) correlation coefficients. Scorer Text-only Interleaved Separate ρ 0.263 0.08 0.332 r 0.286 0.14 0.381 F.2 Alignment Check on Evidence Evaluation. We leveraged re-annotation and re-used the human judgment on QA annotations as described in Appendix E to validate the alignment between our automatic evidence evaluation method and human judgment. To
https://arxiv.org/abs/2505.17978v1
assess alignment, we treated the original annotations from our dataset as predictions and the re-annotations as ground truth. We then applied the automatic evaluation methods to compare these “predictions” against the ground truth. Subsequently, we computed alignment scores by comparing the recall obtained from the automatic evaluation methods with human annotations. The correlation between the automatic evaluation scores and human assessments is presented using Spearman ( ρ) and Pearson ( r) correlation coefficients, as shown in Table 10. Our analysis shows that the separated evaluation method aligns most closely with human assessments, demonstrating the effectiveness of our approach. Table 11: Adversarial attack results for the assessment of the robustness of our evaluation method. Results in the table are obtained by computing the evidence evaluation score difference (in %) between initial evidence and manipulated evidence with checklists. Check. Text-only Interleaved Separate Completeness -29.2 -19.8 -28.5 Shuffle -10.2 4.1 -17.9 Irrelevant Image 0.0 9.4 -32.7 Inv. contract -1.6 -7.4 1.5 Inv. num2text 1.8 21.3 -11.3 Inv. text2num -0.3 24.5 0.6 Inv. synonyms -11.8 88.8 -0.6 Redundant words -3.1 85.4 -4.2 Fluency -4.9 31.4 -0.7 Argument structure -1.6 30.8 4.8 Image Invariance 0.0 20.0 -1.4 F.3 Robustness Check on Evidence Evaluation. Motivate by [Akhtar et al., 2024, Ribeiro et al., 2020], we not only assess alignment but also validate the robustness of our evidence evaluation method using checklist tests. Notably, the sensitivity of reference-based evaluation for textual data has already been examined in [Akhtar et al., 2024], and we adopt their adversarial attack design on texts. Details for these attacks ( completeness ,shuffle , Inv. contract ,Inv. num2text ,Inv. text2num ,Inv. synonyms ,Redundant words ,Fluency ,Argument structure ) can be found in the original study. Given the multimodal nature of our evaluation, we extend the robustness tests to include visual adversarial checks. Specifically, we evaluate the robustness of the evidence evaluation method against two types of image-based perturbations: 1) Irrelevant images , replacing images in predicted evidence with unrelated images and 2) Image invariance , applying invariant manipulations, such as resizing and rotation, to images within the predicted evidence 19 We conducted robustness testing on 20claims and compared the difference between the original evidence evaluation scores and the scores obtained after introducing adversarial evidence. The results are summarized in Table 11. A robust evaluation method is expected to show a significant performance drop when facing adversarial attacks in the first block of the table while maintaining consistent scores (i.e., minimal deviation from previous scores) when subjected to attacks in the second block. The results indicate that both the Text-only and Separated reference-based evaluations exhibit ro- bustness against textual adversarial attacks. In contrast, the interleaved evaluation is sensitive and unstable when faced with such attacks. This finding suggests that even advanced MLLMs, such as Gemini, may be prone to instability when handling interleaved image-text comprehension. Regarding visual adversarial attacks (i.e., irrelevant image replacement and image invariance), the Text-only evaluation method fails to maintain stability, while our Separated evaluation method demonstrates robust performance. G Additional Experimental Results G.1 Zero-shot Performance of Baselines Table 12: Zero-shot performance of
https://arxiv.org/abs/2505.17978v1
baselines. Q-Eval andEvid-Eval denote for recall scores of generated questions and retrieved evidence, with reference of ground-truth questions and evidence. We report verdict prediction and justification generation scores conditioned on evidence retrieval performance, specifically only considering verdict accuracy and justification generation performance when the evidence score is above 0.2, 0.3 and 0.4. LLM MLLM Q-Eval Evid-Eval Veracity (.2/.3/.4) Justifications (.2/.3/.4) Paralleled Question Generation Gemini Gemini 0.40 0.18 0.13 0.11 0.08 0.14 0.13 0.08 Qwen Qwen-VL 0.41 0.16 0.08 0.07 0.04 0.11 0.09 0.07 Gemma Gemma 0.37 0.22 0.12 0.12 0.08 0.17 0.15 0.10 Qwen LLaV A 0.37 0.18 0.10 0.09 0.07 0.12 0.11 0.07 Dynamic Question Generation Gemini Gemini 0.35 0.19 0.15 0.14 0.09 0.15 0.13 0.10 Qwen Qwen-VL 0.30 0.14 0.11 0.10 0.06 0.11 0.09 0.06 Gemma Gemma 0.27 0.19 0.12 0.11 0.07 0.14 0.13 0.08 Qwen LLaV A 0.24 0.16 0.11 0.11 0.06 0.12 0.11 0.06 Hybrid Question Generation Gemini Gemini 0.35 0.16 0.13 0.12 0.09 0.12 0.11 0.08 Qwen Qwen-VL 0.34 0.14 0.09 0.08 0.05 0.10 0.09 0.05 Gemma Gemma 0.26 0.25 0.15 0.14 0.09 0.19 0.18 0.12 Qwen LLaV A 0.26 0.17 0.07 0.07 0.05 0.12 0.11 0.08 Due to the limitation of space, we provide the zero-shot performance of baselines in Table 12. The findings discussed in Section 7.2 hold for baselines under the zero-shot setting as well. H Details of Experiments H.1 Model Implementation Hyper-parameters and Implementation. In the main experiments, we set the total number of generated questions to be 5for all QG strategies. For HQG, the first two questions are generated in parallel while the rest three are generated exploited DQG. In cases of textual evidence retrieval (i.e., leveraging the tools of RIS and WST), we truncated retrieved texts into chunks with the maximum length of 128and applied BM25 [Robertson and Zaragoza, 2009] to select the most relevant chunks to the given query. With the increasing capability of long-context understanding of existing LLMs, 20 we keep the top 30most related chunks, without a second stage of fine-grained re-ranking as what previous works have done [Chen et al., 2024, Schlichtkrull et al., 2023]. For the retrieved images returned by WSI, we compute their similarity scores with the given textual query with CLIP [Radford et al., 2021] and select the most related one as the image evidence source. For the choice of LLMs in baselines, we have tried using LLaMA-3.1-8B-Instruct [Meta, 2024] as the LLM, whereas the model got stuck in loops, the same as reported by other users.8For LaV A-Next [Li et al., 2024], which is not designed for interleaved image-text, we only consider the textual part of evidence in verdict prediction and justification as we observed some issues with model generation with complex interleaved image-text information. For the searching related tools, specifically WST, WSI and RIS, we used the API provided by Google. For web search with textual queries (WST and WSI), we first tokenize and post-tag words in queries and only keep verbs, nouns and adjectives as the search term [Karadzhov et al., 2017]. We set temporal constraints with input arguments, limiting all
https://arxiv.org/abs/2505.17978v1
returned web pages published before claim dates. We keep the first 30 search results. For RIS, we employed the google cloud vision service for detecting web pages containing matched images with the querying image. However, the service does not embed arguments to set temporal constraints. Alternatively, we use a post-hoc method by leveraging the Python package htmldate.fine_date to filter out pages published before claim dates. We noticed a lot of web pages returned by RIS are social media posts, which are non-scrapable. For these pages, we use their page titles as the scraping content. Few-shot Setting. To encourage models generate more critical questions for fact-checking, we exploit a computationally efficient method, few-shot learning. Specifically, we use a few training examples to guide models in question generation. Selecting similar claims to the inference one is important as similar claims may have similar reasoning path for claim verification. We rank the similarity between training claims and the inference claim with BM25 [Robertson and Zaragoza, 2009] by comparing their textual part. We set the number of shots to be 3to balance between the input length and information from demonstrations. For the PQG strategy, we directly provide models the ground-truth questions from selected demonstra- tions. For the DQG setting, models are provided the textual part of image-text claims and their first questions to generate the initial questions. For generating subsequent questions, each demonstration contains the textual part of a claim, its evidence history from previous QA pairs and the next question to be asked. Guidance for Tool Selection. Besides the guidance for tool selection in prompts as provided in Appendix I.4, we also used few demonstrations for inspire models to select proper tools. We provide few examples to guide tool selection as in the preliminary experiment we observed a heavy rely on VQA as the answering tool. We leveraged the metadata annotation of QA pairs, the answering method, for tool selection. Image-search will be mapped to RIS, Text-search with an image answer will be converted to WSI while with a textual answer will be mapped into WST. Image analysis will be converted to selecting VQA as the answering tool. Computation Resources. All experiments are conducted with two GPUs each with 40G dedicated memory. Specifically, we exploited either A100 or L40 for our experiments. The Qwen + Qwen-VL baselines and the Qwen + LLaV A baselines take about three hours with A100 and Gemma-based baselines take about seven hours for inference on the test split. Models have a faster inference speed on L40, saving one third of inference time. The inference time of Gemini-based models varies, probably depending on the volume of API calls. Also, we observed instability of Gemini API (e.g., the API call returned 503, saying the service is not available), maybe because of too many requests at the same time. H.2 Experiment Environment and Packages In this section, we introduce the experiment environment and packages in use. We implement all models under the PyTorch Library (version 2.4.0+cu121), with CUDA version 12.1. For the implementation of open-source LLMs and MLLMs, we leverage the HuggingFace Library,
https://arxiv.org/abs/2505.17978v1
with theQwen/Qwen2.5-7B-Instruct (Qwen), google/gemma-3-12b-it (Gemma), Qwen/Qwen2.5-VL-7B- Instruct (Qwen-VL) and llava-hf/llava-v1.6-mistral-7b-hf (LLaV A), respectively. The version of 8https://www.reddit.com/r/LocalLLaMA/comments/1c858ac/llama3_seems_to_get_stuck_in_loops_sometimes/ 21 Huggingface is 4.50.2. For Gemini, we exploited the API, gemini-2.0-flash-001 . For the CLIP model employed in image-text similarity computation for evidence rankin, we adopt the checkpoint, openai/clip-vit-base-patch32 , from Huggingface as well. I Prompts in Use In this section, we provide the exact prompts in use for baselines. I.1 Prompts for QA Conversion to Evidence Statement You are a expert writer. Given a question ([QUES]) and its answer [ANS], your goal is to convert the QA pair into a statement [STAT]. There could be images either in the question or the answer, which we use special tokens [IMG_1], [IMG_2] ... as placeholders for images. For instance, the question "When was the image published? [IMG]" asks for the publication date of the image denoted as [IMG]. Below are some examples: [QUES]: What is the date of the claim? [ANS]: Nov. 22, 2023. [STAT]: The date of the claim is Nov. 22, 2023. [QUES]: Did Trump pretended to be the palace guard in the meeting? [ANS]: [IMG_1] [STAT]: Trump dressed as [IMG_1] in the meeting. …… [QUES]: When was the image shot? [IMG_1] [ANS]: The image has been taken on Jan. 25, 1998. [STAT]: [IMG_1] was taken on Jan. 25, 1998. Please convert the QA pair below into its statement: [QUES]: [INFERENCE_QUES] [ANS]: [INFERENCE_ANS] [STAT]: Figure 4: The prompt in use for converting QA pairs to evidence statement. Following [Akhtar et al., 2024], we convert QA pairs to evidence statement, for both evidence evaluation and maintaining the evidence history. We consider a text-only conversion for simplicity and use special tokens as placeholders for image. These placeholders could be placed with the exact images in the future. The prompt is demonstrated in Figure 4. I.2 Prompts for Evaluation We adopt a reference based evaluation strategy, which compare predictions against references, for both question evaluation and evidence retrieval evaluation. Question Evaluation. Though questions could be multimodal, the semantics are the most informative. Therefore, we leverage a vanilla reference based evaluation scheme to compare the textual part of predicted questions and annotated questions. The exact prompt in use is shown in Figure 5. Evidence Evaluation. We conducted a two-stage reference-based evaluation of evidence, as described in Section 6. In the first stage, we consider compare the textual of retrieved evidence and ground-truth evidence. Hence, the prompt used for evaluation is similar to that in question evaluation. The difference is that there are special image tokens in evidence and we need the evaluator to output the index of aligned predictions and ground-truth annotations. The prompt is illustrated in Figure 6. 22 You will get as input a reference question set ([REF]) and a predicted question set ([PRED]).Please verify the correctness of the predicted questions by comparing it to the reference questions, following these steps:1. Evaluate each question in the predicted question set individually: Check whether it is covered by any question in the reference set ([REF]). A predicted question is covered if it conveys the same meaning or intent
https://arxiv.org/abs/2505.17978v1
as a reference question, even if the wording differs.2. Evaluate each question in the reference question set individually: Check whether it is covered by any question in the predicted set ([PRED]), using the same criteria. Do not use additional sources or background knowledge.3. Finally summarise (1.) Count how many predicted questions are covered by the reference questions and provide explanations([PRED in REF] and [PRED in REF Exp]), (2.) Count how many reference questions are covered by the predicted questions and provide explanations ([REF in PRED] and [REF in PRED Exp]).Generate the output as shown in the examples below: [PRED]: 1. Is there a correlation between CO2 levels and climate change? 2. Where and when was the image taken? 3. What is the caption of the chart in the image? [REF]: 1. Was the source article publishing the chart discussing climate change? 2. Will a raise of CO2 levels lead to global warming? 3. Which country was shown in the image? 4. When was the image taken?[PRED in REF]: 2[PRED in REF Exp]: 1. The question is similar to the second reference question. 2. The question conveys similar information to the third and fourth question in the reference set. 3. The question is not covered by nor similar to any reference question.[REF in PRED]: 3[REF in PRED Exp]: 1. The question is not covered by the predicted question set. 2. The question is covered by the first predicted question. 3. The question is covered by the second question of the predicted questions. 4. The question is covered by the second predicted question. …… Return the output in the exact format as specified in the examples, do not generate any additional output: [PRED]: [PRED_EVID] [REF]: [REF_EVID]Figure 5: The evaluation prompt for generated questions. You will get as input a reference evidence ([REF]) and a predicted evidence ([PRED]). [IMG_1], [IMG_2] .. are placeholders for images and they are regared as the same text token.Please verify the correctness of the predicted evidence by comparing it to the reference evidence. Note, a fact with "no answer could be found .." or "it is unkown .." contradicts with facts mentioning any exact information (i.e., indicating the answer can be found and it is known). Please verify following these steps:1. Evaluate each fact in the predicted evidence individually: is the fact supported by the REFERENCE evidence (reference evidence presents a similar )? Do not use additional sources or background knowledge.2. Evaluate each fact in the reference evidence individually: is the fact supported by the PREDICTED evidence? Do not use additional sources or background knowledge.3. Finally summarise (1.) how many predicted facts are supported by the reference evidence, which refernece evidence supports which predicted facts and explanations([PRED in REF] and [PRED in REF Exp]), (2.) how many reference facts are supported by the predicted evidence, which predicted evidence supports which reference fact and explanations ([REF in PRED] and [REF in PRED Exp]).Generate the output as shown in the examples below: [PRED]: 1. The missle in [IMG_1] is Fateh 110. 2. Ilan Omar has attended the training in [IMG_2]. 3. Prince
https://arxiv.org/abs/2505.17978v1
Phillip wore the Royal Guard uniform in Jan. 14, 2003. 4.The raid in Washington took place on Saturday, Oct. 26, 1999.[REF]: 1. [IMG_1] was taken in Jan. 20, 2003. 2. No evidence can be found related to the type of missle in [IMG_2]. 3. The woman in [IMG_3] for a training is not Ilan Omar. 4. No answer was found regarding when the raid in Washington took place. 5. Prince Phillip wore the Royal Guard uniform shown in [IMG_4] previously in Jan. 2003.[PRED in REF]: 1; (PRED_3,REF_5)[PRED in REF Exp]: 1. No relevant evidence to the fact can be found in the reference evidence set. 2. The fact contradicts with its relevant fact, the third fact, in the evidence set. It fact in the reference set claims the woman in the training is not Ilan Omar. 3. The fact is supported by the fifth fact in the evidence set. 4. The fact is refuted by the fourth fact in the reference set, which claims the date of the raid in Washington is unknown.[REF in PRED]: 2; (REF_1,RPED_3);(REF_5,PRED_3)[REF in PRED Exp]: 1. It is supported by the third fact in the predicted evidence. 2. It is refuted by the first fact in the predicted evidence set. 3. It is contracted with the second fact in the predicted evidence set, which claims the woman in the training is Ilan Omar. 4. It is refuted by the fourth fact in the predicted evidence which claims the date of the raid could be found. 5. The fact aligns with the third fact in the predicted evidence set. …… Return the output in the exact format as specified in the examples, do not generate any additional output: [PRED]: [PRED_EVID] [REF]: [REF_EVID] Figure 6: The evaluation prompt for retrieved evidence. 23 You are a fact -checker to ask questions to verify an image -text claim. Here is the image -text claim. The textual part is: [CLAIM_TEXT]; with a list of images of the claim: [CLAIM_IMAGE] . We have already retrieved the evidence below: [EVID_HISTORY]. [IMG] is the placeholder for images in the evidence. Please ask another one question, either related to the textual part or related to the image part of the claim for the verification and avoid questions already presented in the evidence (though maybe no answer found to the question). For each question, please also indicate it is Text -related or Image -related before the question (using **Text - related:** and **Image -related:**). For image -related questions, please explicitly point out which image you are asking about (using **Image Index:**) and do not provide an index larger than the number of images (i.e., the index should be smaller than the total number of images). For instance, questions could be asked like: **Text -related:** [QUES]. \n**Image -related:** [QUES]. **Image Index:** 2. \n**Image -related:** [QUES]. **Image Index:** 2,3. \n[QUES] is the placeholder for the question to be generate. The image index should be smaller than the total number of claim images. Please generate your question:Figure 7: The prompt for dynamic question generation. In the few-shot setting, the second
https://arxiv.org/abs/2505.17978v1
paragraph is replaced with the few-shot demonstrations. You are a fact -checker to ask questions to verify an image -text claim. . Here is the image -text claim. The textual part is: [CLAIM_TEXT]; with a list of images of the claim: [CLAIM_IMAGE] . Please ask [NUM_QUES] questions related to the textual part or the image part of the claim for the verification of the claim. For each question, please also indicate it is Text -related or Image -related before the question. For Image -related questions, please also indicate which images the question is about with **Image Index:**. Specifically, if the question is related to the first and second image of the claim, it should be **Image Index:** 1,2. We illustrate an example of first three questions as below ([QUES] is the placeholder for the question.): 1..**Text -related:** [QUES] 2. **Image -related:** [QUES] **Image Index:** 1. 3. **Image -related:** [QUES] **Image Index:** 2. Please generate [NUM_QUES] questions: Figure 8: The prompt for paralleled question generation. In the few-shot setting, the third paragraph is replaced with the few-shot demonstrations. I.3 Prompts for Question Generation We considered three strategies for question generation as introduced in Section 7.1. The hybrid generation is the combination of the paralleled and the dynamic question generation strategy. Below, we provide prompts for the DQG and PQG strategies. For DQG, we use the prompt shown in Figure 7 and the PQG prompt is shown in Figure 8. For the few-shot question generation setting, we utilize the same prompts while adding a few demonstrations before the information of inference instances. I.4 Prompts for Tool Selection As mentioned in Section 7.2, we observed bias of models for heavily relying on VQA as the answering tool, diverging from fact-checkers’ choice. This leads to failures for retrieving essential evidence. 24 You need to select a proper tool to answer a given question. We have four tools: (A). Reverse image search: The tool aims to find information related to given images, such as the date of, the event in or the celebrities in the image. It uses the provided images as input to query a search engine. (B). Visual question answering: The tool is frequently used for answering questions related to image details (e.g., the text in the image and the weather in the image). (C). Searching Google using texts for textual information: The tool aims to find related textual information by querying a search engine with texts. (D). Searching Google using texts for images: The tool aims to find related images by querying a search engine with texts. You need to response with the options for tools (i.e., A, B, C or D) and please do not respond with any other words. For text -related questions, please select from tool C and D. For image -related questions, please select from A and B. Below we provide some examples: [DEMONSTRATIONS] Question: [INFERENCE_QUES] Question type: [INFERENCE_QUES_TYPE] Tool option:Figure 9: The prompt in use for selecting tools to answer questions. [DEMONSTRATIONS] are placeholders of examples provided to guide the tool selection. Considering the issue, besides the tool definitions,
https://arxiv.org/abs/2505.17978v1
we provide a few demonstrations, each consisting of a question, a question type and the tool should be selected. Specifically, we leverage the annotated metadata information of questions. For questions annotated with the answering method of image- search, we consider the tool of RIS for such cases. For questions with the answering method as text-search while the answers are not images, we regard WST as the tool to be selected; if there are image answers, then the WSI should be the tool. For questions answered by image analysis, we would consider VQA as the answering method. The prompt for tool selection is shown in Figure 9. I.5 Prompts for Answer Generation As introduced in Section 7.1, when leveraging the tools of RIS, WST and WSI, there follows an answering model (either an LLM or an MLLM) to leverage retrieved evidence to address the question. For using an LLM to leverage textual evidence, we use the prompt: You need to answer a question according to a set of retrieved documents. Question: [QUES] ;Document: [RETRIEVED_DOC] .If the question is not answerable according to the provided document, please answer as: No answer can be found. Start you answer as: **ANSWER:** For VQA with an MLLM, we prompt models with the template below for an answer: Question: [QUES] ;Related images to the question: [IMAGES] . I.6 Prompts for Verdict Prediction The verifier receives the claim and the retrieved evidence for predicting a veracity label of the claim. Below is the prompt exploited for the verifier: You need to select a verdict for a given Image-Text claim when provided a set of evidence. [IMG] is a placeholder for images. We provide four verdict labels and the definitions of them below: Supported: The claim is supported by the evidence presented. Refuted: The claim (either the text or the image part) is contradicted by the evidence presented. Not Enough Evidence: There is not enough evidence (NEE) to support or refute the claim. Conflicting: The claim is misleading due to conflicting evidence/cherry-picking, but not explicitly refuted. You need to response with the verdict for the claim (i.e., Supported, Refuted, Not Enough Evidence or Conflicting) and please do not respond with any other words. The metadata of the claim: [DATE_AND_LOCATION] .Claim: [CLAIM_TEXT] ;Claim images: [CLAIM_IMAGES] . Here is the evidence: [EVID] .Verdict: 25 I.7 Prompts for Justification Generation The prompt for justification generation receives the information about the claim (textual part and claim images), retrieved evidence and predicted verdict to explain how the verdict could be reached. Below is the exact prompt in use: Given an image-text claim and a set of evidence for verifying the claim, a fact-checker predict a verac- ity label for the claim. You need to explain how the verdict is reached for the image-text claim. Below is information for the image-text claim: The metadata of the claim: [DATE_AND_LOCATION] .Claim: [CLAIM_TEXT] ;Claim images: [CLAIM_IMAGES] .The predicted verdict is: [PRED_VERDICT] . Here is the evidence: [EVID] .Please generate your justification (i.e., explanation) for the verdict: Outputs from MLLMs are verbose, whereas human annotated justifications are concise. Therefore, we
https://arxiv.org/abs/2505.17978v1
TRACE for Tracking the Emergence of Semantic Representations in Transformers Nura Aljaafari1†, Danilo S. Carvalho3, André Freitas1,2,3 1Department of Computer Science, University of Manchester, United Kingdom 2Idiap Research Institute, Switzerland 3National Biomarker Centre, CRUK-MI, University of Manchester, United Kingdom {firstname.lastname}@[postgrad.]†manchester.ac.uk Abstract Modern transformer models exhibit phase transitions during training, distinct shifts from memorisation to abstraction, but the mechanisms underlying these transitions remain poorly understood. Prior work has often focused on endpoint representations or isolated signals like curvature or mutual information, typically in symbolic or arithmetic domains, overlooking the emergence of linguistic structure. We introduce TRACE (Tracking Representation Abstraction and Compositional Emergence), a diagnostic framework combining geometric, informational, and linguistic signals to detect phase transitions in Transformer-based LMs. TRACE leverages a frame-semantic data generation method, ABSynth, that produces an- notated synthetic corpora with controllable complexity, lexical distributions, and structural entropy, while being fully annotated with linguistic categories, enabling precise analysis of abstraction emergence. Experiments reveal that (i) phase tran- sitions align with clear intersections between curvature collapse and dimension stabilisation; (ii) these geometric shifts coincide with emerging syntactic and se- mantic accuracy; (iii) abstraction patterns persist across architectural variants, with components like feedforward networks affecting optimisation stability rather than fundamentally altering trajectories. This work advances our understanding of how linguistic abstractions emerge in LMs, offering insights into model interpretabil- ity, training efficiency, and compositional generalisation that could inform more principled approaches to LM development. 1 Introduction Transformer models1exhibit evolving internal representations during training, with recent work showing that these representations undergo phase transitions —abrupt shifts in representational structure, generalisation behaviour, and learning dynamics [ 31,36,41]. These transitions mark critical points where models reorganise internal representations and develop increasingly abstract, structured encodings [4, 49, 31]. Understanding the mechanisms and timing of these shifts is essential for interpretability, model steering, and failure mode detection [ 20]. While prior studies have characterised models’ behaviour at convergence or via final-layer probes, less is known about how internal linguistic structures form over the course of training. Furthermore, much of the literature on transformer interpretability and representation focuses on algorithmic or mathematical tasks [ 33,53,54], or examines geometric properties at the level of individual tokens or local concepts [ 39,49]. These works leave open the question of how holistic, sentence-level semantic representations arise in transformer representations. 1Throughout this paper, we use the term "transformer" to refer to Transformer-based architectures as implemented in Vaswani et al. [50] Preprint. Under review.arXiv:2505.17998v1 [cs.CL] 23 May 2025 Figure 1: Overview of the TRACE framework, which integrates the monitoring of (i) intrinsic dimensionality of hidden states, (ii) spectral curvature complexity of the loss landscape, and (iii) linguistic alignment via probing and output accuracy. Inputs are sampled from ABSynth, our proposed synthetically generated corpus grounded on frame-based representations and controlled distributions over entropy, frequency, and complexity. We address this gap by introducing TRACE: Tracking Representation Abstraction and Compositional Emergence (Fig. 1), a diagnostic framework that combines geometric, linguistic, and information- theoretic signals to characterise how transformers transition from memorisation to abstraction2. Our central hypothesis is that abstraction emerges through a measurable phase transition, marked by:
https://arxiv.org/abs/2505.17998v1
(i) characteristic rise-then-stabilise patterns in intrinsic dimensionality of hidden representations; (ii) transient spikes in loss curvature; (iii) surges in linguistic category alignment, particularly for part-of-speech and semantic accuracy; and (iv) decrease in mutual information between input and hidden representations. We test whether these phenomena, each informative in isolation, exhibit coordinated temporal dynamics that can serve as reliable sentence-level representation/abstraction indicators. To isolate model dynamics from data confounds, we introduce a synthetic data generation framework, ABSynth , based on formal frame semantics [ 19,5]. Unlike template-based approaches, ABSynth samples from abstract event frames with predefined semantic roles (agent, patient, etc.), producing corpora with transparent syntactic and semantic structure. We instantiate this framework with ABSynth25K , enabling precise tracking of how theoretically-motivated linguistic abstractions emerge across layers and training iterations. This work addresses the following research questions: •What geometric and statistical signals accompany the transition from memorisation to abstraction w.r.t. sentence-level representations? •When do syntactic and semantic categories emerge in transformer representations over training? •What mechanisms and training dynamics trigger these phase transitions, and how do archi- tectural and optimisation factors influence their onset? This paper makes three key contributions. First, we introduce TRACE , a unified diagnostic frame- work that jointly tracks abstraction and early representational structure formation using coordi- nated geometric (curvature), statistical (mutual information, intrinsic dimensionality), and linguistic 2We use "compositional emergence" to denote the formation of structured internal representations (e.g., roles, syntactic categories), rather than formal compositional generalisation. 2 (probing-based) signals throughout training. Second, an original spectral curvature complexity measure C(H)characterising loss landscape properties. Third, a frame-semantics grounded synthetic sentence generation framework ABSynth , from which we derive the supporting ABSynth-25K corpus for controlled analysis of representational dynamics. Across all experiments, we observe a consistent phase transition, indicated by coordinated shifts in curvature flattening, intrinsic dimensionality, and probe performance, which marks the onset of abstraction. These patterns persist across model scales and ablation variants, pointing to structural regularities in transformer learning dynamics. Understanding these transitions supports designing more interpretable, adaptive, and resource-efficient language models. 2 Related Work Phase Transitions in Training Dynamics. Representational transitions during training have been documented in small-scale settings like grokking [ 36,41], where models shift from memorisation to generalisation. Lee et al. [31] attributed these to geometric reorganisation, while Clauw et al. [11] identified emergent information-theoretic structure. Nakkiran et al. [35] described double- descent phenomena, where generalisation is preceded by changes in spectral behaviour. Stagewise development has also been observed in attention heads and induction circuits [37]. Loss Landscape Geometry and Generalisation. Curvature properties help illuminate learning dynamics in neural networks [ 7,51]. Early work linked sharp minima to overfitting [ 26], while later studies found flatter regions correspond to better generalisation [ 21,44]. Spectral metrics like Hessian trace and effective rank capture curvature anisotropy [ 2,7], with recent transformer analyses showing systematic evolution of these metrics during training [ 23,51]. Empirically, models trained in overparameterised regimes often exhibit flat Hessian spectra with many near-zero eigenvalues [ 46], corresponding to improved generalisation, indicative of abstract representation formation [2]. Intrinsic Dimensionality and Representation Compression. Intrinsic dimensionality (ID)
https://arxiv.org/abs/2505.17998v1
serves as a proxy for the representational complexity of neural networks [ 1,17,4]. Under the manifold hypothesis, high-dimensional activations are assumed to lie on lower-dimensional submanifolds [8], with the ID reflecting the necessary degrees of freedom to explain observed variation. During training, representations typically exhibit a rise–fall pattern—initially increasing as features entangle, then compressing as abstraction emerges [ 4,10,49]. While Aghajanyan et al. [1]showed pre- trained models can be fine-tuned in low-dimensional reparameterisations, Cheng et al. [9]and Lee et al. [31] correlate geometric compression with linguistic information acquisition. Recent work confirms linguistic features occupy low-dimensional subspaces [ 42] and that compressibility enables compositional generalisation [ 16]. ID estimation methods range from PCA [ 34] to geometric approaches like TWO-NN [17] and GRIDE [15]. Intermediate Layers and Representation Studies. Recent studies highlight intermediate layers’ role in shaping model representations [ 38,18,47]. These layers often show stronger linguistic alignment than final layers [ 47]. Lepori et al. [32] introduced structural compositionality, revealing that models can decompose complex tasks into modular subroutines, with intermediate layers playing a crucial role in this decomposition process. Related work on symbolic domains examines abstraction in transformers trained on code or algorithmic tasks [ 33,54,53], though these focus on task-specific behaviours rather than semantic abstraction in linguistic representations. Synthetic datasets Several synthetic benchmarks have been developed to study abstraction and generalisation in neural models, though most focus on algorithmic or symbolic reasoning tasks rather than grounded linguistic structure. SCAN [ 29] tests systematic compositional skills through command-to-action mappings, while PCFG-based datasets [ 24] probe models’ syntactic general- isation abilities using controlled linguistic commands. Mathematical reasoning datasets [ 45] and algorithmic tasks [ 41] provide controlled environments for studying learning dynamics but lack linguistic structure. Unlike prior work that investigates abstraction using symbolic or algorithmic datasets with limited linguistic grounding, our approach targets sentence-level semantic abstraction in transformer models trained on structured, English-like input. We introduce a synthetic corpus generator grounded in frame 3 semantics, enabling precise control over contextual entropy, role structure, and token distributions. This design reflects key properties of natural language while retaining full annotation and sampling transparency. While previous studies provide valuable insights, they often examine a single diagnostic signal in isolation, and are typically restricted to tasks or domains that do not generalise to realistic linguistic settings or scale to larger models. By contrast, our multi-signal approach offers a holistic view of how abstraction emerges during training. This principled integration enables more transferable and interpretable analysis of representation learning in modern transformers. 3 Methodology As shown in Fig. 1, our method integrates the following signals to detect phase transitions during transformer training: spectral curvature of the loss landscape, intrinsic dimensionality of representa- tions, and linguistic category alignment. Our intuition is that these metrics capture complementary aspects of representation learning: optimisation dynamics reflect updates in model weights, geometric measures and probing reveal reorganisation of semantic relationships, and linguistic alignment reflects emergent structure in the model’s outputs. We define semantic abstraction as the model’s ability to internalise role-based generalisations that extend beyond surface lexical forms—for example, recog- nising
https://arxiv.org/abs/2505.17998v1
the ARG2 role regardless of whether it is realised as "noun3", or "location22". In this setting, abstraction is evidenced when internal representations align with underlying semantic functions rather than memorised token identities. The following sections detail each aspect, along with our synthetic data generator and experimental setup. 3.1 Spectral Signals of Loss Landscape Geometry We characterise loss landscape geometry using Hessian-based curvature metrics to detect structural shifts in representation learning. We adopt a scalable approximation via the Lanczos algorithm [ 30], which estimates the top K≪Neigenvalues of the Hessian using efficient Hessian-vector products. LetL(θ)be the training loss with gradient g=∇θL(θ)and Hessian Hθ=∇2 θL(θ). We compute the truncated spectrum {λi}K i=1where K≪N, motivated by observations that curvature information concentrates in dominant modes [44]. Our spectral metrics include: •Hessian Trace: Tr(Hθ) =PN i=1λi, quantifying overall curvature magnitude. Decreasing trace indicates flatter minima associated with improved generalisation [44, 52, 2]. •Entropy-Based Effective Rank: reff= exp ( −P ipilogpi), where pi=|λi|P j|λj|; this Shannon entropy-based measure [ 43] quantifies dominant curvature directions. Low values reflect curvature concentration (anisotropy)—abstraction; high values indicate distributed, isotropic curvature—memorisation [16]. To unify curvature magnitude and directional concentration, we define a curvature complexity score : C(H) =Tr(H)√reff. (1) This measure increases with both the overall curvature and its spectral concentration. High C(H) values correspond to sharp, anisotropic curvature, often reflecting representational reorganisation or memorisation. In contrast, low values denote flatter, more isotropic landscapes, typically aligned with abstraction and generalisation. 3.2 Intrinsic Dimensionality We characterise abstraction in transformer representations through the lens of intrinsic dimensionality (ID), motivated by the manifold hypothesis [ 8]. Given hidden representations Z∈RD, ID is defined as the minimal number of degrees of freedom required to locally parameterise the data distribution [17]. That is, although Zmay lie in a high-dimensional space, it may concentrate around a lower- dimensional manifold M ⊂ RDof dimension d≪D. We adopt the TWO-NN estimator [ 17], a non-parametric, maximum likelihood estimator based on local geometry. Given a batch of activation 4 vectors {xi}N i=1, the intrinsic dimension is estimated as: ID= 1 NNX i=1logr2(xi) r1(xi)!−1 , (2) where r1(xi)andr2(xi)denote distances to the first and second nearest neighbours of xi, respectively. This approach requires no tuning parameters and assumes only uniform local density. To provide a more holistic view, we average the ID over the model layers as ID(t)=1 LPL ℓ=1ID(t) ℓ,, and use ID(t)in our analysis. By computing ID across training steps and network layers, we capture the dynamic evolution of representational structure. We hypothesise a characteristic trajectory aligned with other metrics: low initial ID during early training, rising ID during transition, and stabilisation or decrease as the model projects data onto semantically coherent structures—signalling abstraction. 3.3 Linguistic Category Alignment We evaluate abstraction emergence through two complementary approaches: internal representation probing and output generation analysis. For each, we examine both semantic roles (e.g., AGENT, PATIENT) for event structure understanding, and part-of-speech (POS) categories for syntactic abstraction. Internal Probing. We apply diagnostic probes p(ℓ) cto hidden states at layer ℓto measure category- specific confidence scores during training. These probes quantify how well internal representations encode
https://arxiv.org/abs/2505.17998v1
linguistic structures at each training step t: Conf(ℓ,t) c=1 |B|X x∈Bp(ℓ) c(hℓ(x)), (3) where hℓ(x)is the hidden representation, Bis the evaluation batch size and p(ℓ) cis a linear classifier trained on trained model frozen checkpoints. Probes are used to capture the evolving alignment between internal features and abstract linguistic categories. To validate that observed linguistic alignment reflects learned abstraction rather randomness, we trained probes on randomly initialised models. These probes performed at or near chance, confirming that linguistic features are not encoded prior to training. This supports the view that abstraction emerges progressively and is localised in stages as training evolves. Full results are reported in Appendix B.9. Output Generation Analysis. We also assess whether generated tokens respect linguistic con- straints. For each generated token ˆy, we compute category-specific accuracy: Acc(t) c=1 |Dc|X (xi,yi)∈Dc1[ˆyi∈ {(yi)], (4) where Dccontains sequences with category c, and{(yi)denotes the set of valid tokens for the expected category at position i. This metric reveals whether abstract patterns learned internally are successfully deployed during generation. By jointly analysing internal representation alignment and output conformity alongside geometric metrics, we identify precisely when models transition from memorising token associations to acquiring structured abstractions. Divergence between internal and output measures reveals intermediate states where models have partially acquired abstract representations but cannot yet reliably deploy them in generation. The complete methodology for token categorisation, probe architecture, and training procedures is detailed in Appendix B.6. 3.4 Information Compression via Mutual Information While we explored mutual information (MI) as a potential signal of abstraction, we found that MI esti- mates were highly volatile and did not consistently align with the phase transitions observed through other metrics. This behaviour likely stems from two issues: (i) MI estimation is inherently noisy in high-dimensional settings, and (ii) abstraction in transformers involves structural reorganisation rather than pure information compression. These patterns were consistent across both types of MI we measured: (i) I(X;Zℓ), the information retained about the input Xin the hidden states Zℓ=ϕℓ(X); 5 and (ii) I(Zℓ;Zℓ+1), the information shared between adjacent layers. Due to this instability, MI lacked the resolution to serve as a reliable diagnostic. We include full experimental results, estimation procedures, and MI trajectories in Appendix C for completeness, but do not consider MI a core component of our results. 3.5 Synthetic Data Generator To isolate representational dynamics from confounding factors in natural data, we employ ABSynth , our controllable synthetic data generation framework grounded in frame semantics [ 19,5]. ABSynth controllably builds synthetic corpora by sampling from structured sentence frame representations with predefined semantic roles (e.g., AGENT ,PATIENT ,THEME ). The framework supports precise manipulation of structural properties, including vocabulary size, token frequency, syntactic/semantic complexity, and contextual entropy. The generation pipeline consists of three modular components: (1) a lexicon module that assigns words to categories under a Zipfian frequency distribution [ 55,40] (α= 1.05), augmented with variable-strength collocations; (2) a frame-based sentence constructor that assembles grammatically well-formed sentences across three levels of structural complexity; and (3) an entropy-aware token selector that modulates predictability by adjusting sampling probabilities for the corpus. For this study, we
https://arxiv.org/abs/2505.17998v1
use ABSynth to generate ABSynth25K , a dataset of 25,000 sentences with complete frame- semantic annotations. Each example includes ground-truth semantic roles and syntactic categories (POS tags) derived directly from the underlying frame structure, enabling precise investigation of how linguistic abstractions emerge in neural representations. ABSynth25K follows an 80/10/10 training/validation/test split. Complete generation procedures and frame specifications are detailed in Appendix A. 3.6 Models Architectures and Training Setup Transformer Architectures. We train three decoder-only transformer models of increasing capacity. Small model (2-head, 128 FFN, 64 dmodel , 1-layer), medium model (3-head, 384 FFN, 96 dmodel , 2-layer) and large model (4-head, 512 FFN, 128 dmodel , 3-layer). Models share the same positional encoding scheme, tokenisation, and vocabulary. A fixed sequence length (16) and batch size (128) are used across runs to standardise training dynamics. We record dense checkpoints throughout training to record our metrics. Complete training details, downstream task formalisations and model hyperparameters are reported in Appendix B. Ablation of Architectural Components. To isolate the architectural factors driving abstraction, we ablate key transformer components by removing feed-forward (FFN) blocks and reducing the number of attention heads. These interventions are designed to test whether the capacity for abstraction depends on transformation depth or attention expressivity, and to determine which mechanisms are necessary for triggering representational phase transitions. 4 Results & Analysis 4.1 Coordinated Phase Shift in Training Dynamics Across all model configurations, we observe a robust two-phase training dynamic: an initial regime of rising ID and elevated curvature, followed by a transition into flatter curvature and stabilised representational complexity (Fig. 2). This transition is marked by a consistent intersection between the Hessian curvature score (blue) and ID trajectories (red), which we interpret as a phase shift in learning dynamics. In larger models (right panel), this intersection occurs early (around step 5000) and sharply, with curvature rapidly collapsing while ID plateaus at higher values, indicating the emergence of high- capacity, structured representations. Medium-scale models (centre panel) follow qualitatively similar transitions but exhibit notable periodic spikes in curvature throughout training. These persistent oscillations suggest recurring reorganisation events where the model temporarily revisits higher- curvature regions of the loss landscape, reflecting optimisation instabilities with limited architectural capacity. 6 0 10000 20000 30000 40000 50000 60000 70000 Training Steps0.51.01.52.02.53.03.54.0Hessian Curvature Small 0 10000 20000 30000 40000 50000 60000 70000 Training Steps1 01234567Hessian Curvature Medium 0 10000 20000 30000 40000 50000 60000 70000 Training Steps01234567Hessian Curvature Large 30405060708090 Intrinsic Dim 303540455055 Intrinsic Dim 30354045505560 Intrinsic Dim Curvature and Intrinsic Dimension Across Models 0 10000 20000 30000 40000 50000 60000 70000 Training Steps0.51.01.52.02.53.03.5Hessian Curvature Small 0 10000 20000 30000 40000 50000 60000 70000 Training Steps012345Hessian Curvature Medium 0 10000 20000 30000 40000 50000 60000 70000 Training Steps01234567Hessian Curvature Large 304050607080 Intrinsic Dim 30354045505560 Intrinsic Dim 30354045505560 Intrinsic Dim Curvature and Intrinsic Dimension Across Models - Without FNN 0 10000 20000 30000 40000 50000 60000 70000 Training Steps1.01.52.02.53.03.54.0Hessian Curvature Small 0 10000 20000 30000 40000 50000 60000 70000 Training Steps1 0123456Hessian Curvature Medium 0 10000 20000 30000 40000 50000 60000 70000 Training Steps02468Hessian Curvature Large 30405060708090 Intrinsic Dim 35404550556065 Intrinsic
https://arxiv.org/abs/2505.17998v1
Dim 303540455055 Intrinsic Dim Curvature and Intrinsic Dimension Across Models - Reduced Attention HeadsFigure 2: Coordinated dynamics of Hessian Curvature Score (blue) and Average Intrinsic Dimension (red) across training steps for three model architectures. Each row shows a different architectural variant: standard models (top), models without feed-forward networks (middle), and models with a single attention head (bottom). The intersection points between curvature and ID trajectories mark critical phase transitions in representational learning, with timing and stability varying across architectures but preserving the fundamental pattern. Smaller models (left panel) exhibit delayed transitions (after step 30000), noisier trajectories, and lower equilibrium ID values, demonstrating clear architectural bottlenecks in abstraction capacity . Despite variations in timing, amplitude, and stability, the fundamental pattern of curvature-ID intersection remains consistent, suggesting a universal geometric signature of abstraction emergence that scales with model capacity but preserves its essential character. 4.2 Differential Impact of Architectural Components Our ablation experiments examine how specific architectural components influence abstraction dynamics, revealing subtle but informative effects. Fig. 2 presents curvature and ID trajectories across three architectural variants: standard models (top row), models without feed-forward networks (FFNs) (middle row), and models with a single attention head (bottom row). All variants preserve the fundamental pattern of an initial curvature peak followed by a decline, concurrent with rising ID that eventually stabilises. This consistency demonstrates that abstraction emergence is surprisingly robust to architectural modifications and may be an inherent property of transformer-based optimisation. Removing FFNs (middle row) increases curvature volatility, especially in small and medium models, with persistent oscillations but reduced spike amplitudes. This suggests FFNs contribute to optimisa- tion stability and smooth representational development, functioning as distributed lookup structures [14]. Despite volatility, the phase transition timeline remains preserved, indicating FFNs enhance rather than enable abstraction emergence. Single attention head models (bottom row) show scale-dependent effects. Medium and large models exhibit more frequent but lower-amplitude spikes, revealing a stability-smoothness trade-off . Small models show greater impact: reduced ID values and delayed phase transitions, indicating attention 7 Figure 3: Probe confidence scores over training steps for the large model. Each subplot corresponds to a different decoder layer, with curves representing average model confidence for the presence of specific linguistic tags. capacity constraints affect smaller architectures more severely . Nevertheless, the fundamental curvature-ID intersection pattern persists across all configurations. These results demonstrate the architectural resilience of core learning dynamics in transformers. The preserved geometric signature across configurations establishes that abstraction is observable and a fundamental property of overall transformers’ gradient-based learning on sequential data, rather than a consequence of specific architectural features. This resilience also explains why transformers with varying configurations achieve comparable language task performance. 4.3 Linguistic Alignment with Geometric Transitions The intersection of curvature and ID trajectories coincides with key transitions in linguistic abstraction emergence. Fig. 3 shows how internal representations evolve across decoder layers, measured by probe confidence scores for linguistic categories (semantics). Layer 1 exhibits an interesting pattern of representational reorganisation, evident by a temporary dip and increased volatility in confidence scores, occurring precisely at the curvature-ID intersection shown in Fig. 2. This suggests global
https://arxiv.org/abs/2505.17998v1
geometric shifts correspond to evolving category structure. Layer 2 shows increased category confidence around the same intersection region, mirroring Layer 1’s earlier drop, indicating a hand-off dynamic between middle and upper layers. This tempo- ral complementarity reflects upward abstraction shifts, whereby higher layers specialise in more abstract linguistic features. Unlike Layer 0’s stable dominance and Layer 1’s volatility, Layer 2 maintains a fragmented, dynamic profile throughout training, supporting the formation of higher-order abstractions or interpretability-relevant circuits. Medium-sized models (Appendix B.9) show more harmonised behaviour across layers, demonstrating tighter coupling between abstraction capacity and model scale, with similar behaviour to layer 1 in the large model. These results support that evolving probe confidence reflects internal reorganisation aligned temporally with geometric transitions. This internal development diverges from output predictions (Fig. 4), where semantic classification accuracy rapidly improves and stabilises early in training. This indicates that while models quickly generate syntactically appropriate tokens, internal representations continue restructuring long after. This dissociation implies a two-phase developmental process: in the first phase, output behaviour reflects coarse category distinctions likely driven by surface-level statistical regularities; in the second, deeper abstraction is gradually encoded into the model’s internal geometry, as indicated by evolving probe confidence. Full results across models (Appendix B.6) show consistent probe dynamics for both syntactic and semantic categories. 4.4 Limitations of Statistical Diagnostics Despite theoretical appeal, MI analysis failed to yield reliable insights in our experimental setting. The observed MI dynamics exhibited high variance and showed minimal alignment with phase transitions identified through geometric and linguistic metrics. This instability aligns with concerns raised by Aljaafari et al. [3], which argues that such patterns in transformers may reflect stochastic 8 Figure 4: SRL performance per label across models and training steps fluctuations in early representation formation rather than meaningful abstraction signals. Given these limitations, we do not report MI in the main body of this paper, though complete MI trajectories are reported in Appendix C. 5 Discussion and Conclusion Our results suggest that transformers undergo structured representational reorganisation during training. Rather than emerging gradually, abstraction is evident through phase transitions, coordinated shifts across geometric, information-theoretic, and linguistic signals. These transitions mark a distinct boundary between memorisation and generalisation, where linguistic representations begin to stabilise. We observe consistent alignment between curvature flattening, ID rise and stabilisation, and increased probe accuracy. This coordination suggests certain geometric signals may serve as markers of emerging abstraction. Low-rank curvature and stable ID appear to signal when models begin internalising structure beyond surface patterns. The phenomenon remains robust across model scales and architectural variants, suggesting that abstraction emergence follows predictable patterns. While our experiments use synthetic corpora, TRACE is compatible with broader domains. Metrics such as curvature and ID are model-agnostic and can be applied to pre-trained transformers or fine- tuning regimes. Probing-based signals can be approximated using weak supervision or automated annotation tools. Extending TRACE to large-scale pretraining could reveal whether similar phase transitions emerge in noisier, real-world settings. Finally, integrating TRACE with mechanistic interpretability tools could help localise where and how abstraction-related circuits emerge. 6 Limitations Despite its interpretability, our synthetic
https://arxiv.org/abs/2505.17998v1
corpus does not fully capture the ambiguity and richness of NL. While probe-based diagnostics offer valuable insights, they provide a static view of representation content and may not reflect the dynamic computational mechanisms that transformers deploy at inference time. Finally, while TRACE establishes strong correlations across geometric, informational, and linguistic signals, it does not establish causal relationships or quantify the relative contribution of each factor. 7 Impact Statement This work reveals that abstract reasoning in language models emerges through predictable phase transitions rather than gradual accumulation. Identifying these critical transitions could enable more efficient training strategies, yielding more interpretable models with reduced computational costs. While this understanding may help detect harmful behaviours, it also presents the usual interpretability trade-off of potentially facilitating model manipulation. Our frame-semantic data generation framework provides a reusable tool for studying abstraction and learning dynamics in language models, with fine-grained control over linguistic properties and transparent evaluation capabilities. 9 References [1]Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) , pages 7319–7328, Online, 2021. Association for Computational Linguistics. [2]Kwangjun Ahn, Ali Jadbabaie, and Suvrit Sra. How to escape sharp minima with random perturbations. In Proceedings of the 41st International Conference on Machine Learning , ICML’24. JMLR.org, 2024. [3]Nura Aljaafari, Danilo S Carvalho, and André Freitas. Carma: Enhanced compositional- ity in llms via advanced regularisation and mutual information alignment. arXiv preprint arXiv:2502.11066 , 2025. [4]Alessio Ansuini, Alessandro Laio, Jakob H Macke, and Davide Zoccolan. Intrinsic dimension of data representations in deep neural networks. Advances in Neural Information Processing Systems , 32, 2019. [5]Collin F. Baker, Charles J. Fillmore, and John B. Lowe. The Berkeley FrameNet project. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1 , pages 86–90, Montreal, Quebec, Canada, August 1998. Association for Computational Linguistics. doi: 10.3115/980845.980860. URL https://aclanthology.org/P98-1013/ . [6]Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning , volume 80 of Proceedings of Machine Learning Research , pages 531–540. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/belghazi18a.html . [7]Lucas Böttcher and Gregory Wheeler. Visualizing high-dimensional loss landscapes with hessian directions. Journal of Statistical Mechanics: Theory and Experiment , 2024(2):023401, 2024. [8]Lawrence Cayton et al. Algorithms for manifold learning . eScholarship, University of California, 2005. [9]Emily Cheng, Corentin Kervadec, and Marco Baroni. Bridging information-theoretic and geometric compression in language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing , pages 12397–12420, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.762. URL https://aclanthology.org/ 2023.emnlp-main.762/ . [10] Emily Cheng, Diego Doimo, Corentin Kervadec, Iuri Macocco, Lei Yu, Alessandro Laio, and Marco Baroni. Emergence of a high-dimensional abstraction phase in language transformers. InThe Thirteenth International Conference on
https://arxiv.org/abs/2505.17998v1
Learning Representations , 2025. URL https: //openreview.net/forum?id=0fD3iIBhlV . [11] Kenzo Clauw, Daniele Marinazzo, and Sebastiano Stramaglia. Information-theoretic progress measures reveal grokking is an emergent phase transition. In ICML 2024 Workshop on Mecha- nistic Interpretability , 2024. URL https://openreview.net/forum?id=Q4NH6hEPIX . [12] Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. The devil is in the detail: Simple tricks improve systematic generalization of transformers. In Proc. Conf. on Empirical Methods in Natural Language Processing (EMNLP) , Punta Cana, Dominican Republic, November 2021. [13] Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey. Sparse autoen- coders find highly interpretable features in language models. arXiv preprint arXiv:2309.08600 , 2023. 10 [14] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. Knowledge neurons in pretrained transformers. arXiv preprint arXiv:2104.08696 , 2021. [15] Francesco Denti, Diego Doimo, Alessandro Laio, and Antonietta Mira. The generalized ratios intrinsic dimension estimator. Scientific Reports , 12(1):20005, 2022. [16] Eric Elmoznino, Thomas Jiralerspong, Yoshua Bengio, and Guillaume Lajoie. A complexity- based theory of compositionality. arXiv preprint arXiv:2410.14817 , 2024. [17] Elena Facco, Maria d’Errico, Alex Rodriguez, and Alessandro Laio. Estimating the intrinsic dimension of datasets by a minimal neighborhood information. Scientific reports , 7(1):12140, 2017. [18] Siqi Fan, Xin Jiang, Xiang Li, Xuying Meng, Peng Han, Shuo Shang, Aixin Sun, Yequan Wang, and Zhongyuan Wang. Not all layers of llms are necessary during inference. CoRR , abs/2403.02181, 2024. URL https://doi.org/10.48550/arXiv.2403.02181 . [19] Charles J. Fillmore. Frame semantics. In Linguistics in the Morning Calm , pages 111–137. Hanshin Publishing Co., Seoul, 1982. [20] Roger Grosse. Studying large language model generalization with influence functions. In Proceedings of the 38th Conference on Neural Information Processing Systems , NeurIPS ’24, 2024. Workshop on Scalable Continual Learning for Lifelong Foundation Models. [21] Yaru Hao, Li Dong, Furu Wei, and Ke Xu. Visualizing and understanding the effectiveness of bert. arXiv preprint arXiv:1908.05620 , 2019. [22] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. An empirical analysis of compute-optimal large language model training. Advances in neural information processing systems , 35:30016–30030, 2022. [23] Jesse Hoogland, George Wang, Matthew Farrugia-Roberts, Liam Carroll, Susan Wei, and Daniel Murfet. The developmental landscape of in-context learning. arXiv preprint arXiv:2402.02364 , 2024. [24] Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. Compositionality decomposed: How do neural networks generalise? Journal of Artificial Intelligence Research , 67:757–795, 2020. [25] Subhash Kantamneni, Joshua Engels, Senthooran Rajamanoharan, Max Tegmark, and Neel Nanda. Are sparse autoencoders useful? a case study in sparse probing. arXiv preprint arXiv:2502.16681 , 2025. [26] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836 , 2016. [27] Pejman Khadivi, Ravi Tandon, and Naren Ramakrishnan. Flow of information in feed- forward denoising neural networks. In 2018 IEEE 17th International Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC) , pages 166–173, 2018. doi: 10.1109/ICCI-CC.2018.8482098. [28] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization.
https://arxiv.org/abs/2505.17998v1
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings , 2015. URL http://arxiv.org/abs/1412.6980 . [29] Brenden Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In International conference on machine learning , pages 2873–2882. PMLR, 2018. [30] Cornelius Lanczos. An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. Journal of research of the National Bureau of Standards , 45 (4):255–282, 1950. 11 [31] Jin Hwa Lee, Thomas Jiralerspong, Lei Yu, Yoshua Bengio, and Emily Cheng. Geometric signa- tures of compositionality across a language model’s lifetime. arXiv preprint arXiv:2410.01444 , 2024. [32] Michael Lepori, Thomas Serre, and Ellie Pavlick. Break it down: Evidence for structural compositionality in neural networks. Advances in Neural Information Processing Systems , 36: 42623–42660, 2023. [33] Yuxuan Li and James McClelland. Systematic generalization and emergent structures in transformers trained on structured tasks, 2023. URL https://openreview.net/forum?id= pXDmbfVL_SB . [34] Anna V Little, Mauro Maggioni, and Lorenzo Rosasco. Multiscale geometric methods for data sets i: Multiscale svd, noise and curvature. Applied and Computational Harmonic Analysis , 43 (3):504–567, 2017. [35] Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment , 2021(12):124003, 2021. [36] Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. Progress mea- sures for grokking via mechanistic interpretability. In The Eleventh International Conference on Learning Representations , 2023. URL https://openreview.net/forum?id=9XFSbDPmdW . [37] Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. In-context learning and induction heads. arXiv preprint arXiv:2209.11895 , 2022. [38] Rui Pan, Xiang Liu, Shizhe Diao, Renjie Pi, Jipeng Zhang, Chi Han, and Tong Zhang. Lisa: Layerwise importance sampling for memory-efficient large language model fine-tuning. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems , volume 37, pages 57018–57049. Curran Associates, Inc., 2024. URL https://proceedings.neurips.cc/paper_files/paper/ 2024/file/687163285b8affc8ee933bdca8e75747-Paper-Conference.pdf . [39] Kiho Park, Yo Joong Choe, Yibo Jiang, and Victor Veitch. The geometry of categorical and hierarchical concepts in large language models. In ICML 2024 Workshop on Mechanistic Interpretability , 2024. URL https://openreview.net/forum?id=KXuYjuBzKo . [40] Steven T Piantadosi. Zipf’s word frequency law in natural language: A critical review and future directions. Psychonomic bulletin & review , 21:1112–1130, 2014. [41] Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. Grokking: Gen- eralization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177 , 2022. [42] Anton Razzhigaev, Matvey Mikhalchuk, Elizaveta Goncharova, Ivan Oseledets, Denis Dim- itrov, and Andrey Kuznetsov. The shape of learning: Anisotropy and intrinsic dimen- sions in transformer-based models. In Yvette Graham and Matthew Purver, editors, Find- ings of the Association for Computational Linguistics: EACL 2024 , pages 868–874, St. Julian’s, Malta, March 2024. Association for Computational Linguistics. URL https: //aclanthology.org/2024.findings-eacl.58/ . [43] Olivier Roy and Martin Vetterli. The effective rank: A measure of effective
https://arxiv.org/abs/2505.17998v1
dimensionality. In 2007 15th European signal processing conference , pages 606–610. IEEE, 2007. [44] Adepu Ravi Sankar, Yash Khasbage, Rahul Vigneswaran, and Vineeth N Balasubramanian. A deeper look at the hessian eigenspectrum of deep neural networks and its applications to regularization. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 35, pages 9481–9488, 2021. [45] David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. In International Conference on Learning Representations , 2019. URL https://openreview.net/forum?id=H1gR5iR5FX . 12 [46] Sidak Pal Singh, Gregor Bachmann, and Thomas Hofmann. Analytic insights into structure and rank of neural network hessian maps. Advances in Neural Information Processing Systems , 34: 23914–23927, 2021. [47] Oscar Skean, Md Rifat Arefin, Dan Zhao, Niket Patel, Jalal Naghiyev, Yann LeCun, and Ravid Shwartz-Ziv. Layer by layer: Uncovering hidden representations in language models. arXiv preprint arXiv:2502.02013 , 2025. [48] Lewis Smith, Senthooran Rajamanoharan, Arthur Conmy, Callum McDougall, János Kramár, Tom Lieberum, Rohin Shah, and Neel Nanda. Negative results for saes on downstream tasks and deprioritising sae research (gdm mech interp team progress up- date #2), March 2025. URL https://www.lesswrong.com/posts/4uXCAJNuPKtKBsi28/ negative-results-for-saes-on-downstream-tasks . LessWrong post. [49] Lucrezia Valeriani, Diego Doimo, Francesca Cuturello, Alessandro Laio, Alessio Ansuini, and Alberto Cazzaniga. The geometry of hidden representations of large transformer models. Advances in Neural Information Processing Systems , 36:51234–51252, 2023. [50] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems , 30, 2017. [51] George Wang, Matthew Farrugia-Roberts, Jesse Hoogland, Liam Carroll, Susan Wei, and Daniel Murfet. Loss landscape geometry reveals stagewise development of transformers. In High-dimensional Learning Dynamicsß: The Emergence of Structure and Reasoning , 2024. URL https://openreview.net/forum?id=2JabyZjM5H . [52] Yang Zhao, Hao Zhang, and Xiuyuan Hu. Penalizing gradient norm for efficiently improving generalization in deep learning. In International conference on machine learning , pages 26982–26992. PMLR, 2022. [53] Ziqian Zhong and Jacob Andreas. Algorithmic capabilities of random transformers. In The Thirty-eighth Annual Conference on Neural Information Processing Systems , 2024. URL https://openreview.net/forum?id=plH8gW7tPQ . [54] Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, Joshua M. Susskind, Samy Bengio, and Preetum Nakkiran. What algorithms can transformers learn? a study in length generalization. In The Twelfth International Conference on Learning Representations , 2024. URL https://openreview.net/forum?id=AssIuHnmHX . [55] George Kingsley Zipf. Human Behavior and the Principle of Least Effort . Addison-Wesley, 1949. 13 A Frame-Semantic Data Generation Framework This appendix describes our controllable synthetic data generation framework ABSynth . Unlike template-based synthetic datasets [ 29] or task-specific benchmarks [ 45], ABSynth is grounded in formal frame semantics [ 19,5] and is able to generate English-like corpora by sampling from abstract event frames with predefined semantic roles, enabling mechanistic study of how linguistic abstractions emerge during transformer training. Figure 5: The frame-semantic data generation pipeline: (1) Frame selection with semantic roles, (2) Lexical realisation with Zipfian scaling, (3) Syntactic construction following grammatical constraints, (4) Entropy calibration for controlled predictability. Each generated sentence preserves ground-truth annotations from the underlying frame structure.
https://arxiv.org/abs/2505.17998v1
ABSynth operationalises frame semantics by representing events as structured predicate-argument frames. Each frame specifies: (i) frame elements; (ii) core semantic roles (e.g., AGENT ,PATIENT , INSTRUMENT ); (iii) corresponding syntactic categories (e.g., NOUN ,VERB ), and (iv) complexity con- straints based on sequence length and entropy calibration. This grounding ensures that generated sentences exhibit genuine compositional structure rather than arbitrary token sequences. The frame-based approach enables direct tracking of how neural models learn to represent abstract semantic categories that underlie surface forms. As illustrated in Figure 5, ABSynth generates datasets through a multi-stage pipeline that includes: (i) semantic frame selection with role specification, (ii) lexical realization following Zipfian frequency distributions and semantic clustering, (iii) syntactic construction respecting grammatical constraints and frame-to-syntax mappings, and (iv) entropy calibration to control contextual predictability. The resulting corpora exhibit theoretically-grounded compositional structure while maintaining naturalistic statistical properties. We detail the generation process below using our ABSynth25K instantiation as an example, emphasis- ing that each component is modular and configurable for different research objectives. This flexibility enables systematic exploration of how specific linguistic properties influence abstraction emergence in neural models. A.1 Lexicon Construction: Scaling and Semantic Clustering The lexicon is built by assigning tokens to POS and semantic role categories, then applying Zipfian scaling and semantic clustering. Each token is associated with a tuple [POS, SRL, Zipf_rank, ClusterID] , ensuring interpretability and alignment across analysis stages. 14 To mimic natural lexical statistics, token frequencies follow a Zipfian distribution [55, 40]: P(toki) =1/iα+εiPV j=1(1/jα+εj), (5) where α= 1.05,εi∼ N(0,0.05), and V= 9000 is the vocabulary size. This maintains a realistic long-tail frequency spectrum. Semantic clustering introduces collocational structure by forming token groups with variable intra- and inter-cluster association strengths. For tokens tokiandtokjbelonging to clusters ciandcj: S(toki, tok j) =Sbase(ci, cj) +U(0, Srange (ci, cj)), (6) where SbaseandSrange are tuned to create structured but noisy associations, emulating semantic co-occurrence patterns. Intra-cluster associations are drawn from [0.4,0.7], while cross-cluster links are weaker ([0.05,0.2]). Each word receives a naturalised name (e.g., result1 ,noun5 ), allowing transparent reverse-mapping for analysis. A.2 Frame-Based Syntactic Realisation and Entropy Control Frame-to-syntax mappings define how semantic frames are realised as surface forms while controlling contextual predictability through entropy calibration. All realisations follow valid English gram- matical constructions, respecting part-of-speech ordering, agreement patterns, and canonical phrase structure. This grounding enables the study of syntactic and semantic abstraction within structurally coherent input sequences. Each frame component is annotated with expected entropy based on its role in the frame: •Low entropy (0.5–1.5 bits): Grammatically determined positions (e.g., determiners required by nouns) •Medium entropy (1.5–3.0 bits): Semantically constrained positions with multiple valid fillers (e.g., theme roles that accept various object types) •High entropy (3.0–4.5 bits): Optional frame elements with high variability (e.g., adverbial modifier) Frames are instantiated according to a target complexity distribution (55% simple, 35% medium, 10% complex), which guides the global entropy profile of the corpus. Example frame realisations include: Simple TRANSFER: [AGENT=NOUN] [ACTION=VERB] [THEME=NOUN] "noun2 verb3 noun5" Medium CREATION: [AGENT=NOUN] [ACTION=VERB] [THEME=NOUN] [PURPOSE=PREP+NOUN] "noun1 verb2 noun4 prep3 noun7" Complex MOTION: [AGENT=NOUN] [REL] [ACTION=VERB] [SOURCE=NOUN] [ACTION=VERB] [GOAL=NOUN] "noun3 rel1
https://arxiv.org/abs/2505.17998v1
verb5 noun6 verb7 noun9" A.3 Dynamic Entropy Adjustment Algorithm To enforce statistical balance across complexity levels, the sentence generator incorporates an entropy- aware sampling mechanism. During generation, the system maintains a global entropy profile, defined as the frequency distribution of sentence positions assigned to low, medium, and high entropy tiers. This profile is updated in real time and compared to the desired target distribution. If the observed distribution diverges from the target (e.g., too many low-entropy tokens have been sampled), the system increases the sampling weight for frames or token categories that contribute to underrepresented tiers. This feedback mechanism modulates the difficulty of the dataset without sacrificing grammaticality. The final vocabulary consists of 9,000 tokens distributed across categories as shown in Table 1. 15 Algorithm 1 Entropy-Calibrated Sentence Generation 1:foreach sentence to be generated do 2: Select template Twith entropy tier annotations 3: foreach token slot iinTdo 4: Compute current global entropy profile Ecurrent 5: Compare to target profile Etarget 6: Derive correction factor αto adjust token sampling 7: Sample token tokifrom adjusted distribution 8: end for 9: Add sentence to corpus 10: Update Ecurrent with entropy tags from new sentence 11:end for A.4 Output Format and Probing Supervision Each sentence is stored with a naturalised token sequence and associated structured annotations, including: •POS labels (e.g., NOUN ,ADJ,VERB ) •Semantic roles (e.g., AGENT ,PATIENT ,RESULT ) •Entropy tier •Other Contextual complexity metadata These annotations enable direct supervision for probing tasks. During model training, hidden states are extracted layer-wise and evaluated using linear probes trained on these annotations. This setup facilitates fine-grained analysis of how and when compositional representations emerge, and how they correlate with curvature, intrinsic dimensionality, and mutual information. Token Category Vocabulary Size Noun 2,780 Transitive Verb 694 Intransitive Verb 694 Communication Verb 347 Motion Verb 347 Adjective 1,388 Adverb 555 Location 694 Temporal 694 Preposition 416 Determiner 111 Conjunction 277 Result 277 Total 9,000 Table 1: V ocabulary distribution across syntactic categories B Technical Implementation Details B.1 Main Model Architecture We implemented decoder-only transformer architectures based on the original design of Vaswani et al. [50]. Each model consists of Llayers, each with hidden size dmodel,Hattention heads (where dhead=dmodel/H), and a feedforward dimension dffn. We focus on decoder-only models given their increasing prevalence in production LLMs and recent arguments that causal architectures provide a cleaner demonstration for emergence [47]. 16 Our architectural choices are guided by recent Transformer scaling laws, notably those articulated by Hoffmann et al. [22]. Namely, we evaluate three configurations that adhere to the Chinchilla scaling law (see below), with approximate parameter counts and architectural specifications as presented in Table 2. All models are trained with a maximum sequence length of T= 16 tokens and a dropout rate of 0.1. We use a simple whitespace-based tokeniser, where each token corresponds to a space-separated word or symbol. This choice allows us to maintain interpretability and simplify downstream repre- sentational analyses, while aligning with our controlled, low-scale experimental setup. Table 2: Transformer model configurations used in this study Model Layers ( L) Hidden size ( dmodel ) Heads ( H)
https://arxiv.org/abs/2505.17998v1
FFN size ( dffn) Small 1 64 2 128 Medium 2 96 3 384 Large 3 128 4 512 Scaling Laws and Training Budget. We follow the Chinchilla scaling principles from Hoffmann et al. [22], which demonstrate that model size and training tokens should scale together. Specifically, Chinchilla findings show that optimal training requires approximately 20 tokens per parameter. Given our dataset size of approximately 360K tokens per epoch, we designed our training regimen to respect these scaling principles. Based on our model sizes (110K, 339K, and 749K parameters), we estimated minimum training requirements of 6, 19, and 42 epochs, respectively, to ensure a sufficient token-to-parameter ratio, as implied by Chinchilla’s 20:1 guideline. However, in practice, we observed that phase transitions occurred at different points across scales, not precisely aligned with these theoretical estimates. As such, we extended training durations beyond these minimums to allow sufficient time for representational transitions to emerge, as discussed in Section B.3, and to test whether the Grokking phoneme [41] existence. B.2 Main Models Training Objectives and Formalisation of Next Token Prediction Next-Token Prediction Task (NTP): We formalise the NTP task, which we use to train our decoder-only language models. NTP involves predicting the next token xn+1given a preceding sequence X={x1, x2, . . . , x n}. In our setting, each xi∈ V belongs to a controlled lexicon from ABS YNTH , reflecting structured linguistic categories (e.g., noun3 ,verb2 , etc.) and conforming to correct English grammar. Formally, the objective is: xn+1= arg max x∈VP(x|x1, x2, . . . , x n) (7) Here,Vdenotes the model’s token vocabulary, which includes synthetic labels representing part-of- speech categories and their variants (e.g., noun1 ,adj2 ). The model autoregressively generates one token at a time, conditioned solely on the preceding tokens. Example Prompt: Given the sequence: "noun1 verb2 adj1" , the task requires predicting the next token, such as: "noun3" , depending on the dataset’s underlying syntactic or semantic generation rules. We focus on NTP because it reflects the core objective used in many widely adopted pretrained language models (e.g., GPT-style models), making it a natural and effective setting for examining representational and interpretive behaviours in a controlled environment. B.3 Main Models Training Configuration All models are trained using next-token prediction with the Adam optimiser [ 28] with learning rate 1e−3, and 1000 warm-up steps. Training was conducted for significantly extended epochs beyond the computed minimum requirements of the discussed scaling laws (Section B.1), consistent with the observations of Nanda et al. [36] on the "grokking" phenomenon, where generalisation emerges 17 abruptly after an initial memorisation phase. We consider this an extension of the training strategies outlined in Csordás et al. [12], which emphasise the importance of avoiding early stopping to fully exploit the learning capacity of neural models. Specifically, we trained: •Small model : 500 epochs ( ∼82× the minimum of 6 epochs) •Medium model : 500 epochs ( ∼26× the minimum of 19 epochs) •Large model : 500 epochs ( ∼12× the minimum of 42 epochs) We record dense checkpoints throughout training (every 500 training steps), extracting
https://arxiv.org/abs/2505.17998v1
hidden states and gradients from all layers to compute our diagnosis metrics. We applied minimal regularisation and continuously monitored loss, accuracy, gradient dynamics, representational similarity, and mutual information (MI) throughout training to detect potential phase transitions. This methodological rigour enables us to assess the relationship between model structure and task complexity under controlled and interpretable conditions, while ensuring that all models are trained in accordance with modern scaling principles. To ensure correctness and reproducibility, all experiments were repeated at least 5 times with different random seeds, and we reported the averaged results. B.4 Model performance B.5 Model Performance All models demonstrate strong performance on the downstream generation task after their respective phase transitions. Table 3 summarises the evaluation metrics across model scales. Table 3: Performance metrics for small, medium, and large models after phase transition. Model Exact Match Token Accuracy BLEU Score Perplexity Small 0.84 0.98 0.20 1.22 Medium 0.98 0.99 0.21 1.08 Large 0.98 0.99 0.21 1.07 B.6 Probing Framework and Label Construction To investigate the interpretability and internal structure of our models, we implemented a probing framework that trains lightweight classifiers on the frozen hidden representations extracted from each layer of our trained models. Specifically, for every layer in the tested models, we trained and evaluated both a part-of-speech (POS) probe and a semantic role labelling (SRL) probe. Each probe is implemented as a feedforward neural network comprising three linear layers inter- leaved with ReLU activations and dropout. The network is trained using binary cross-entropy loss, with sigmoid activations at the output layer to support multi-label classification. Given an input representation h∈Rd, the probe computes: ˆy=σ(W3·ReLU (W2·ReLU (W1·h))), where W1, W 2, W 3are trainable weight matrices and σdenotes the element-wise sigmoid function. 18 Table 4: POS Probe Evaluation – Large Model, Layer 0 Label Count Accuracy Precision Recall F1 Score NOUN 279984 1.000 1.000 1.000 1.000 TRANSITIVE_VERB 151232 0.836 0.577 0.836 0.683 INTRANSITIVE_VERB 59584 0.073 1.000 0.073 0.136 COMMUNICATION_VERB 54352 0.238 1.000 0.238 0.384 MOTION_VERB 68368 0.344 0.659 0.344 0.452 CHANGE_VERB 14240 0.562 0.510 0.562 0.535 ADJ 59072 0.239 0.535 0.239 0.331 LOCATION 101984 0.315 0.858 0.315 0.461 TEMP 37664 0.184 0.535 0.184 0.274 PREP 115264 0.387 0.823 0.387 0.526 RESULT 68592 0.292 0.746 0.292 0.420 CONJ 51248 0.350 0.874 0.350 0.500 Probes are trained independently for each layer in the model, allowing us to analyse the emergence and distribution of linguistic and functional features across depth. Representations from each layer are frozen and taken from an already trained model following our training setting explained in Section B, and the underlying model weights are not updated during probing. Due to the synthetic nature of our dataset, both POS and semantic labels are deterministically derived from token names. For example, a token such as noun3 is assigned the POS label NOUN and may additionally be annotated with semantic roles such as AGENT orENTITY , depending on the symbolic structure of the task. This approach eliminates annotation ambiguity and ensures consistent supervision across examples. Table 5: POS Probe Evaluation – Large Model, Layer 1 Label Count Accuracy Precision Recall F1 Score
https://arxiv.org/abs/2505.17998v1
NOUN 279984 1.000 1.000 1.000 1.000 TRANSITIVE_VERB 151232 0.779 0.655 0.779 0.711 INTRANSITIVE_VERB 59584 0.291 1.000 0.291 0.451 COMMUNICATION_VERB 54352 0.476 1.000 0.476 0.645 MOTION_VERB 68368 0.718 0.678 0.718 0.697 CHANGE_VERB 14240 0.875 0.510 0.875 0.644 ADJ 59072 0.603 0.558 0.603 0.580 LOCATION 101984 0.641 0.843 0.641 0.729 TEMP 37664 0.344 0.535 0.344 0.419 PREP 115264 0.856 0.808 0.856 0.831 RESULT 68592 0.585 0.746 0.585 0.655 CONJ 51248 0.545 1.000 0.545 0.705 Given the supervised design of our dataset, we employ linear probes due to their demonstrated effectiveness in similar contexts. While alternative methods such as Sparse Autoencoders (SAEs) have shown promise in unsupervised settings Cunningham et al. [13], linear probes remain a robust and interpretable choice for supervised feature probing Kantamneni et al. [25], Smith et al. [48]. B.7 Probe Models Training Configuration All probes were trained for 30 epochs using the Adam optimiser [ 28] with a learning rate of 1×10−4, a hidden dimension of 256, and a dropout rate of 0.5. Input dimensionality matched the model’s hidden state size. B.8 Probing performance We report the average POS probe models’ performance on Tables 4, 5, and 6, for the small, medium, and large models, receptively. We also report the average semantic probe models’ performance on Tables 7, 8, and 9, for the small, medium, and large models, receptively. 19 Figure 6: Small model — Semantic (left) and POS (right) probe confidence scores at Layer 0. Figure 7: Medium model — Semantic (left) and POS (right) probe confidence scores across layers. B.9 Probing Extended Results We present extended probing results across all model scales (Small, Medium, and Large) for both part-of-speech (POS) and semantic role categories. Figures 6, 7, and 8 show model confidence scores for each layer across training steps. To assess whether abstraction emerges during training rather than being encoded by architecture alone, we also trained probes on frozen hidden states from randomly initialised models. These models exhibited near-zero performance across all linguistic categories, confirming the absence of structured representations at initialisation. Figures 9, 10, and 11 show the performance of semantic and POS probes on randomly initialised models. Table 6: POS Probe Evaluation – Large Model, Layer 2 Label Count Accuracy Precision Recall F1 Score NOUN 279984 1.000 1.000 1.000 1.000 TRANSITIVE_VERB 151232 0.772 0.657 0.772 0.710 INTRANSITIVE_VERB 59584 0.309 0.895 0.309 0.459 COMMUNICATION_VERB 54352 0.476 1.000 0.476 0.645 MOTION_VERB 68368 0.757 0.667 0.757 0.709 CHANGE_VERB 14240 1.000 0.510 1.000 0.676 ADJ 59072 0.603 0.558 0.603 0.580 LOCATION 101984 0.654 0.829 0.654 0.731 TEMP 37664 0.367 0.535 0.367 0.436 PREP 115264 0.856 0.808 0.856 0.831 RESULT 68592 0.572 0.754 0.572 0.650 CONJ 51248 0.545 1.000 0.545 0.705 Figure 8: Large model — POS probe confidence scores across layers. 20 Figure 9: Randomly initialised Small model — Semantic (left) and POS (right) probe confidence scores. Figure 10: Randomly initialised Medium model — Semantic (left) and POS (right) probe confidence scores. Table 7: Semantic Probe Evaluation – Large Model, Layer 0 Label Count Accuracy Precision Recall F1 Score AGENT 268576 1.000 0.959 1.000 0.979 PATIENT 151232 0.852 0.576 0.852 0.687
https://arxiv.org/abs/2505.17998v1
ACTION 279984 1.000 1.000 1.000 1.000 LOCATION 101984 0.315 0.858 0.315 0.461 RELATION 115264 0.387 0.823 0.387 0.526 CONNECTOR 51248 0.321 0.950 0.321 0.480 RESULT 68592 0.305 0.731 0.305 0.431 OTHER 153248 0.874 0.642 0.874 0.740 Table 8: Semantic Probe Evaluation – Large Model, Layer 1 Label Count Accuracy Precision Recall F1 Score AGENT 268576 1.000 0.959 1.000 0.979 PATIENT 151232 0.785 0.652 0.785 0.713 ACTION 279984 1.000 1.000 1.000 1.000 LOCATION 101984 0.641 0.843 0.641 0.729 RELATION 115264 0.856 0.808 0.856 0.831 CONNECTOR 51248 0.574 0.944 0.574 0.714 RESULT 68592 0.546 0.771 0.546 0.639 OTHER 153248 0.893 0.818 0.893 0.854 Figure 11: Randomly initialised Large model — Semantic (left) and POS (right) probe confidence scores. 21 Table 9: Semantic Probe Evaluation – Large Model, Layer 2 Label Count Accuracy Precision Recall F1 Score AGENT 268576 1.000 0.959 1.000 0.979 PATIENT 151232 0.762 0.659 0.762 0.707 ACTION 279984 1.000 1.000 1.000 1.000 LOCATION 101984 0.641 0.843 0.641 0.729 RELATION 115264 0.856 0.808 0.856 0.831 CONNECTOR 51248 0.545 1.000 0.545 0.705 RESULT 68592 0.390 0.969 0.390 0.556 OTHER 153248 0.893 0.818 0.893 0.854 B.10 Computational Resources and Software Environment All the experiments were on an NVIDIA RTX A6000. We used PyTorch (v2.6.0), scikit-learn (v1.6.1), scipy (v1.12.0), seaborn (v0.13.2), Python (v3.9.21), matplotlib (v3.9.4), numpy (v1.26.4), and matplotlib (v3.9.4). Training the small models required around 21 minutes without tracking, and 116 with tracking of all metrics. For medium models, it takes around 28 minutes to train and 130 minutes to train with full tracking. For large models, it takes around 34 minutes to train and 139 minutes to train with full tracking. Training probes require approximately 9 minutes, 19 minutes, and 28 minutes for small, medium and large models, respectively. C Mutual Information Estimation with MINE To quantify the flow and compression of information within transformer models, we estimate mutual information (MI) between input embeddings and internal layer representations. While several methods exist for MI estimation (e.g., k-nearest neighbours, contrastive approaches), we adopt Mutual Information Neural Estimation (MINE) [ 6] due to its scalability and effectiveness in high- dimensional settings. In theory, a systematic decline in I(X;Zℓ)across layers and training steps should signal abstraction: the model progressively discards surface-level details while retaining task-relevant structure. This compression-based view of abstraction has been explored in other architectures [ 27,16], and we examined whether similar dynamics emerge in transformer models during training. Specifically, we tracked two MI quantities: (i) I(X;Zℓ)— the mutual information between the input embeddings and the hidden states at layer ℓ, and (ii) I(Zℓ;Zℓ+1)— the mutual information between consecutive hidden layers. Despite the theoretical appeal, our empirical findings (see Figures 12) showed that MI was highly variable across training steps and did not consistently align with the phase transitions identified via curvature or intrinsic dimensionality. These results suggest that MI, while informative in principle, may lack the temporal resolution and stability needed to serve as a primary diagnostic in TRACE. We report full implementation details, training settings, and estimator architecture below. C.1 MINE Objective and Architecture MINE approximates the Donsker-Varadhan lower bound on mutual information
https://arxiv.org/abs/2505.17998v1
using a neural critic function Tθ:X × Z → R, parameterised by a multilayer perceptron (MLP). Given joint samples (x, z)∼PXZand marginal samples formed by pairing x∼PXwith independently sampled z′∼PZ, the MI is estimated via: ˆIθ(X;Z) =EPXZ[Tθ(x, z)]−logEPX⊗PZ[eTθ(x,z′)] (8) Our implementation uses a 3-layer MLP with hidden dimensions [128,128,1]and ReLU activations. MINE is used to estimating I(X;Zℓ)— the MI between the input embeddings and layer ℓ— it is also used to compute I(Zℓ;Zℓ+1), capturing how information is transmitted between adjacent layers. This allows us to analyse information bottlenecks, compression phases, and abstraction dynamics across the depth of the model. 22 C.2 Training and Evaluation Protocol For each chosen training step tof the model and its layers, we train a separate MINE estimator to convergence. The training protocol is as follows: •Batch size : 128 examples •Optimiser : Adam optimiser [28] with learning rate 0.001 •Training steps : 200 iterations •Positive samples : Joint pairs (x, z)from the same forward pass •Negative samples :(x, z′)where z′is obtained by shuffling across the batch C.3 MI Across Layers and Models Figure 12 shows how mutual information evolves during training across model scales. In small models (top-left), MI between the embedding and subsequent layers drops rapidly and stabilises early, suggesting early-stage compression and limited representational differentiation. Figure 12: Mutual Information (MI) between adjacent layers over training steps for small (top-left), medium (top-right), and large (bottom) models. Each line represents the MI between one pair of layers (e.g., embedding →Layer 0, Layer 0 →Layer 1). Higher MI values suggest greater information flow or redundancy; drops indicate compression. In medium models (top-right), we observe a pronounced dip in MI that aligns temporally with the phase transition identified in our diagnostics. This suggests that representational compression may act as a precursor or trigger for abstraction. Following this dip, MI values remain volatile across layers, reflecting a noisier or less stable reconfiguration of internal representations. Large models (bottom) follow a broadly similar trend, though with key differences: the MI dip appears in most transitions, but is less evident or absent in others (Layer 1 →Layer 2). While the timing of these dips often aligns with the abstraction phase transition, the metric remains highly volatile. This instability suggests that mutual information, although partially correlated with representational restructuring, may be too noisy and inconsistent to serve as a reliable standalone indicator of abstraction onset. 23
https://arxiv.org/abs/2505.17998v1
arXiv:2505.18011v1 [cs.CL] 23 May 2025Training with Pseudo-Code for Instruction Following Prince Kumar∗ IBM Research AI prince.kumar12@ibm.comRudra Murthy∗ IBM Research AI rmurthyv@in.ibm.comRiyaz Bhat∗ IBM Research AI Riyaz.Bhat@ibm.com Danish Contractor∗ IBM Research AI danish.contractor@ibm.com Abstract Despite the rapid progress in the capabilities of Large Language Models (LLMs), they continue to have difficulty following relatively simple, unambiguous instruc- tions, especially when compositions are involved. In this paper, we take inspiration from recent work that suggests that models may follow instructions better when they are expressed in pseudo-code [Mishra et al., 2023, Puerto et al., 2024]. However, writing pseudo-code programs can be tedious and using few-shot demonstrations to craft code representations for use in inference can be unnatural for non-expert users of LLMs. To overcome these limitations, we propose fine-tuning LLMs with instruction-tuning data that additionally includes instructions re-expressed in pseudo-code along with the final response. We evaluate models trained using our method on 11publicly available benchmarks comprising of tasks related to instruction-following, mathematics, and common-sense reasoning. We conduct rigorous experiments with 5different models and find that not only do models follow instructions better when trained with pseudo-code, they also retain their capabilities on the other tasks related to mathematical and common sense reason- ing. Specifically, we observe a relative gain of 3–19% on instruction-following benchmark, and an average gain of upto 14% across all tasks. 1 Introduction Despite the rapid progress in the capabilities of Large Language Models (LLMs), they continue to struggle following relatively simple instructions, particularly when compositions are involved [Wen et al., 2024, Jiang et al., 2024b, Murthy et al., 2025]. Moreover, they are prone to distractions [Hwang et al., 2025] and often have difficulty adhering to output format requests [Xia et al., 2024]. To address these challenges, a significant amount of research has focused on improving the instruction-following abilities of LLMs. These include enforcing constraints during decoding [Banerjee et al., 2025], prompting strategies that elicit thinking and planning behavior through clever design and structure [Xia et al., 2025], test-time scaling [Snell et al., 2024, Bansal et al., 2025], multi-step agentic flows [Yao et al., 2023b, Mitra et al., 2024] and more. Additionally, training time interventions to improve instruction-following is also an active area of work - this includes designing training objectives [Pham et al., 2024, Shao et al., 2024, Dong et al., 2025, DeepSeek-AI et al., 2025], the inclusion of chain-of-thought [Longpre et al., 2023b, Mitra et al., 2024], and perhaps, most importantly, curation of training data [Lou et al., 2024, Sun et al., 2024, Qi et al., 2024, He et al., 2024]. In this paper, we draw inspiration from recent work that suggests models perform better when given instructions in pseudo-code [Mishra et al., 2023, Li et al., 2024a, Puerto et al., 2024]. Pseudo-code *Authors contributed equally to this work. Preprint. Under review. Figure 1: Sketch of the data conversion pipeline along with an example instruction and corresponding pseudo-code program. The prompt used for generating pseudo-code is shown in Appendix Figure 7 and the prompt used for pseudo-code Repair is presented in Appendix Figure 19. instructions offer several advantages over
https://arxiv.org/abs/2505.18011v1
natural language, including being better structured, clearer, and more concise. This clarity and conciseness can improve model performance by reducing the complexity of instructions. Thus, instead of using verbose natural language instructions, prompting models with pseudo-code [Mishra et al., 2023] or using pseudo-code as a thinking aid during inference [Puerto et al., 2024] can improve model performance on a wide range of traditional natural- language processing tasks including text classification, question-generation, question-answering, common-sense reasoning, etc. However, writing pseudo-code programs can be tedious and using few-shot demonstrations to craft code representations for use in inference can be unnatural for non-expert users of LLMs. Additionally, methods such as Chain-of-Code [Li et al., 2024a] also require the use of external code-emulators for creating line-by-line executed, intermediate outputs of simulated or executed code statements, possibly limiting their adoption. To address the limitations of such inference-time methods, we fine-tune base LLMs using instruction- tuning data that includes pseudo-code along with the final answer. Thus, given a natural language prompt, models are trained to first re-express the instruction in pseudo-code and then attempt to answer the question (see sample pseudo-code program in Figure 1). From an end-user’s perspective, interacting with such an LLM is identical to regular natural-language prompting, offering a seamless and easy drop-in alternative to existing NL-prompted LMs. We note that training models to re-express instructions in pseudo-code before answering a user’s request, is similar in spirit, to training models to generate hidden chain-of-thought (CoT) or ‘thinking’ steps [Hayati et al., 2025]. However, in this work, we do not enrich pseudo-code programs with such ‘thinking’ steps, and only re-express instructions to be able to isolate the impact of pseudo-code instruction training on LLM performance. Experimenting with publicly available benchmarks, we find that not only do models follow instruc- tions better when trained with pseudo-code, they also retain and sometimes improve their capabilities on other tasks related to mathematical and common-sense reasoning. Specifically, we observe a relative gain of 3–19% on Instruction following benchmark, and an average gain of upto 14% across all tasks. Contributions: In summary, our contributions in this work are as follows: (1) We develop a novel multi-stage data generation pipeline that augments instruction tuning datasets with instructions re-expressed in pseudo-code, (2) We use 0.25M samples from the TuluV2 data mixture [Ivison et al., 2023] to train 5different base models. We present comprehensive experiments on 11publicly available benchmarks spanning a range of tasks including instruction-following, common-sense reasoning tasks, and mathematical tasks, (3) We demonstrate that models trained with pseudo-code instructions are better instruction-followers than their counterparts trained exclusively on natural- language instructions, and retain their capabilities on other tasks. 2 Table 1: Statistics of our Training Mixture from the TuluV2 dataset [Ivison et al., 2023]. Dataset Source Total #Samples Sampling Proportion Code Alpaca Chaudhary [2023] 20016 0.15 Tulu v2 Science Rafailov et al. [2024] 7468 0.0375 SNI v2 Wang et al. [2022] 85187 0.325 WizardLM Evol Instruct Xu et al. [2024] 26180 0.15 GPT4-Alpaca Peng et al. [2023] 46149 0.15 CoT Longpre et al. [2023a] 98315 0.1875 2 Data Construction Data Generation Pipeline: As
https://arxiv.org/abs/2505.18011v1
mentioned previously, we enhance existing instruction-tuning datasets by re-expressing instructions in pseudo-code as part of the output. Since manually crafting pseudo- code programs for instruction-tuning is prohibitively expensive, we develop a multi-step data conver- sion pipeline to create pseudo-code instruction tuning data. Our pipeline consists of three stages: (i) Generate : Use a powerful LLM with a 1-shot prompt to generate pseudo-code programs for a given instruction. We manually write 5 pseudo-code instructions for each of the dataset. We randomly select 1of them for the 1-shot prompt. (ii) Evaluate : Similar to Mishra et al. [2023], use only the generated pseudo-code program to run inference and get the output of the instruction. We compare and score the generated output against the ground-truth output, (iii) Repair : For the cases when theinference using the pseudo-code program does not result in the correct ground-truth instruction output, but it does when inference is run using the original natural language instruction, we prompt an LLM-judge to give feedback to improve the pseudo-code program. We summarize the pipeline in Figure 1. In order to generate pseudo-code programs, the pipeline uses hand-crafted demonstrations of pseudo- code instructions from Mishra et al. [2023] with Mixtral-8x7B-Instruct-v0.1*as the teacher model. Prompts for different stages of this pipeline are available in Appendix Figures 7 and 19. All our pseudo code programs also include a statement mimicking the “invocation” of the program (See Line 14in the Pseudo-code program in Figure 1). An instruction tuning instance consists of the original NL-instruction as the input along with the generated pseudo-code program and the original NL-instruction output text (separated by custom [PSEUDOCODE] markers ), as the output of the instruction instance (see example instances in Appendix Section B). Training Mixture: For our experiments, we use datasets from the Tulu v2 mixture [Ivison et al., 2023]. We obtain the full set of instances for Code Alpaca [Chaudhary, 2023] and GPT4-Alpaca [Peng et al., 2023] instead of the subset of instances present in Tuluv2 mixture. We choose all instances of Science subset [Rafailov et al., 2024] and WizardLM Evol-Instruct [Xu et al., 2024] from the Tuluv2 mixture. Additionally we use all tasks from SuperNaturalInstructions [Wang et al., 2022] and sample a maximum of 100instances from each task. Lastly, we use all the instances from the CoT subset of the FLAN collection Longpre et al. [2023a]. Table 1 lists the datasets used in our experiments along with their statistics. 2.1 Pseudo-Code Generation Super-Natural Instructions (SNI v2) dataset: Wang et al. [2022]: This is a collection of around 1600 traditional NLP tasks formatted as input and output with prompts for instruction-tuning LLMs. We select 5pseudo-code demonstrations randomly from Mishra et al. [2023]’s collection of 132tasks for the Generate step. As mentioned in Section 2 for the Evaluate andRepair steps, we prompt the Mixtral-8x7B-Instruct-v0.1 Jiang et al. [2023] model with both pseudo-code instructions and natural languages instructions to generate the answers. We apply additional postprocessing such as splitting based on new line characters etc, to obtain the answer. We then use the exact-match metric to score outputs of the pseudo-code and
https://arxiv.org/abs/2505.18011v1
natural language instructions and trigger the Repair step as needed. We run the Repair twice with different temperature parameters and select the best program among the two runs. Table 8 in the Appendix reports the performance of the generated pseudo-code on the *https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 3 SNI v2 dataset after a Repair step where the ‘combined’ row represents the resulting performance. An example of a corrected pseudo-code program is presented in Appendix Figure 6. GPT4-Alpaca, Code Alpaca WizardLM Evol-Instruct: For these datasets, we run the pseudo- code generation pipeline without Repair . These datasets usually have open-ended outputs and it is difficult to verify the correctness of the generated response unless we use a robust LLM-as-a-judge. Further, for these instances, forcing updates to the pseudo-code program using the Evaluate and Repair stages runs the risk of leaking output information into the pseudo-code program that can result in “CoT” like steps being included in the program.*While this isn’t detrimental and in fact likely to be helpful, we wanted to be careful and ensure we are able to study the impact of training with NL-equivalent pseudo-code instructions without any enrichment. Therefore, we only use the Generate part of our pipeline for these datasets. We select 5instances from each dataset and write the pseudo-code demonstrations ourselves. The instances are chosen to be diverse; for the the 1-shot prompt, we randomly select one of these pseudo-code demonstrations. Filtering: We discard instances or tasks where the Mixtral model fails to generate valid pseudo-code instructions. This may occur if the pseudo-code is not enclosed within the ‘ [PSEUDOCODE] ’ and ‘[/PSEUDOCODE] ’ tags, or if the generated pseudo-code is incomplete or malformed. Note that we also drop the corresponding instances from the natural language instruction set to ensure we have parallel training mixtures. TuluV2 Science and CoT Data: We manually write pseudocode instructions for each of the 9 datasets present in the CoT collection Longpre et al. [2023a] and the Tulu V2 Science subsets Ivison et al. [2023]. The pseudo-code programs for each of these datasets are provided in the Appendix. 3 Experiments Through our experiments we seek to answer the following research questions: (i) Does instruction- tuning with pseudo-code help improve the performance of models on instruction-following tasks?, (ii) Does the introduction of pseudo-code during instruction-tuning affect the ability to acquire skills imparted during traditional instruction-tuning, (iii) The nature of our pseudo-code is very python-like and we also do not use a lot of code data in our training mixture. To what extent does this affect the performance on code-tasks? 3.1 Models Since we study the effect of instruction-tuning with and without pseudo-code instructions, we require unaligned base-models. After a careful review of available choices, we settled on the7−8B parameter range models. Additionally, prior work by Mishra et al. [2023] sug- gests that medium parameter-size models (eg: 6B-7B) were able to exploit pseudo-code bet- ter than smaller models. We experiment with the following 5base models which includes 1 code base model: (i) meta-llama/Llama-3.1-8B ,*(ii)mistralai/Mistral-7B-v0.3 ,*(iii) Qwen/Qwen2.5-7B [Qwen et al., 2025], (iv) ibm-granite/granite-3.1-8b-base*and (v) ibm-granite/granite-8b-code-base-128k [Mishra et al., 2024]. Training:
https://arxiv.org/abs/2505.18011v1
All models were SFT trained on a full NVIDIA A100 80GB node with Flash Attention 2 [Dao, 2024] and bf16 precision using the Dolomite Engine Framework.*All models were trained for2epochs using the PyTorch’s implementation of AdamW [Loshchilov and Hutter, 2019] as the optimizer, with a starting learning rate of 1e-5 and linear decay. Models were trained with two parallel training mixtures - one with NL instructions and the other augmented with pseudo-code programs. *This isn’t a problem with SNI v2 because each task has many instances, and instance specific Repair s don’t generalize on other instances. *https://ai.meta.com/blog/meta-llama-3-1/ *https://huggingface.co/mistralai/Mistral-7B-v0.3 *https://huggingface.co/ibm-granite/granite-3.1-8b-base *https://github.com/IBM/dolomite-engine 4 3.2 Evaluation Tasks We conduct experiments on three main types of tasks (i) Instruction-Following, (ii) Common sense and Reasoning, (iii) Mathematical Tasks. Instruction Following: Since our primary goal was to investigate if instruction-tuning with pseudo- code helps instruction-following, we conduct extensive experiments on instruction-following bench- marks including IFEval [Zhou et al., 2023], KCIF [Murthy et al., 2025], Complex Bench [Wen et al., 2024] and Follow Bench [Jiang et al., 2024b]. IFEval and KCIF both use non-LLM based scoring functions to evaluate performance on instruction- following; ComplexBench [Wen et al., 2024] and FollowBench [Jiang et al., 2024b] use a powerful LLM-as-a-judge to assess instruction-following. While Complex bench helps study different instruc- tion compositions (eg: conjunctions, chains, conditionals, etc), FollowBench reports performance aggregated by task categories (eg: instruction constraints applied to response styles, situations, etc). Further, IFEval only assesses whether the lexical and formatting instructions are followed in responses with no consideration of task completion. All other datasets condition instruction following on core-task performance - eg: KCIF [Murthy et al., 2025] has answer modifying instructions from knowledge tasks, where models are penalized if instructions are applied on the wrong answer to a question. Common Sense & Reasoning Tasks: We also study the performance of models on commonly used benchmarks to investigate whether our method interferes with any skills acquired by vanilla NL-SFT. We present experiments on Reasoning tasks (Hellaswag [Zellers et al., 2019], PiQA [Bisk et al., 2020], Winogrande [Sakaguchi et al., 2021] and ARC [Clark et al., 2018]), and Mathematical Tasks (GSM8K [Cobbe et al., 2021], as well as a revised version of GSM8K called GSM8K Platinum [Vendrow et al., 2025], and MathQA [Amini et al., 2019]). Metrics: We post-process outputs and then use the official scoring scripts from the benchmarks where available. IFEval has custom-scorers for each instruction and it aggregates results at the prompt as well as instruction level. ComplexBench and FollowBench aggregate LLM-judge generated scores grouped by different task categories. KCIF as well as, the Common Sense and Reasoning tasks are measured in terms of exact-match scores. 3.2.1 Evaluation Setup and Metrics Evaluation Setup: All tasks are evaluated under generative settings; this means that in the case of multiple choice questions models are required to verbalize the answer option [Mishra et al., 2023, Li et al., 2024b]. Further, all tasks are evaluated in zero-shot settings with greedy decoding; prompts for all mathematics, reasoning and code tasks encourage thinking step by step. Prompts for the instruction-following tasks are used as
https://arxiv.org/abs/2505.18011v1
included in the original datasets. We use vLLM [Kwon et al., 2023] to speed up inference on the Mathematics, Reasoning and Code Tasks. Output Post Processing: We develop robust task-specific output-processing pipelines for both NL and pseudo-code models. (See Appendix Section F for more details). All tasks are evaluated after output post-processing. 3.3 Results Overall Results: As can be seen from Table 2 training models with pseudo-code, helps improve performance across all tasks. Depending on the model, this gain can be upto 14% (relative). All models consistently benefit from pseudo-code based training when evaluated for instruction-following. In fact, models based on Llama 3.1 8B Base and Mistral 7B v0.3 also benefit on mathematical and reasoning tasks when trained with pseudo-code instructions. However, for the rest of the models there isn’t a consistent winner between training with pseudo-code and natural-language, when evaluated on common-sense, reasoning tasks and mathematical tasks. We include a few example outputs for all tasks in Appendix Section C. 3.3.1 Instruction-Following We now present a deeper study on the following aspects of instruction-following: (i) effect of distractor instructions (ii) performance on answer-modifying instructions, (iii) nested instructions, 5 Table 2: Comparison of NL-tuned vs pseudo-code tuned models on Instruction-Following, Mathe- matics, Commonsense & Reasoning and Knowledge Tasks. Instruction-Following MathematicsCommon Sense &Reasoning Model IFEval KCIFComp. BenchFollow BenchGSM8KGSM8K Plat.Math QAARC Hella. PiQA Wino. Llama 3.1 8B-NL 0.39 0.08 0.42 0.40 0.33 0.33 0.21 0.82 0.71 0.77 0.66 Llama 3.1 8B-PC 0.46 0.13 0.46 0.47 0.44 0.45 0.28 0.89 0.74 0.79 0.59 Mistral 7B v0.3-NL 0.35 0.08 0.35 0.36 0.12 0.13 0.21 0.74 0.53 0.65 0.40 Mistral 7B v0.3-PC 0.37 0.14 0.40 0.45 0.29 0.30 0.21 0.54 0.60 0.68 0.53 Qwen2.5 7B-NL 0.44 0.16 0.45 0.40 0.52 0.54 0.36 0.95 0.81 0.84 0.62 Qwen2.5 7B-PC 0.45 0.16 0.50 0.47 0.63 0.65 0.42 0.94 0.85 0.85 0.58 Granite 8B Code-NL 0.38 0.07 0.37 0.38 0.09 0.09 0.28 0.71 0.47 0.63 0.54 Granite 8B Code-PC 0.46 0.15 0.41 0.31 0.43 0.45 0.25 0.57 0.41 0.61 0.49 Granite 3.1 8B-NL 0.44 0.11 0.45 0.54 0.02 0.02 0.19 0.16 0.14 0.40 0.38 Granite 3.1 8B-PC 0.47 0.13 0.50 0.49 0.14 0.14 0.11 0.07 0.16 0.24 0.33 Figure 2: Examples of different instructions types: (a) a distractor instruction from KCIF [Murthy et al., 2025] (b) nested instruction from ComplexBench [Wen et al., 2024] (c) simple sequential chain from ComplexBench [Wen et al., 2024] (d) two ‘levels’ of fine-grained constraints from FollowBench [Jiang et al., 2024b] (iv) formatting instructions, (v) fine-grained instruction-following. See Figure 2 for examples of these instruction types. The KCIF benchmark [Murthy et al., 2025] studies the performance of models on answer-modifying instructions when compared against regular question-answering (printing the correct-answer label/- text). We use this dataset to study the performance of models on ‘distractors’ (instructions that should have no effect) as well as instructions that are conditional on a model’s ability to answer a QA task correctly. Instructions with Distractors: Table 3 presents two sets of results: On the left hand side of ‘||’ separator, the table compares the exact-match performance of models (after output post-processing)
https://arxiv.org/abs/2505.18011v1
on instructions that do not have distractor instructions*(EM) vs. ones that do (EM-D). As can be seen, all models (with the exception of Qwen2.5 on distractors) universally perform better on both categories of instructions when trained with pseudo-code instructions. Answer-Modifying Instructions: On the right hand side of Table 3, we compare how instruction- compositions applied on the answering task affect performance. Specifically, given a traditional multi-choice knowledge or reasoning task, models first need to generate the correct answer and then apply the instruction that suitable modifies that answer. *Eg.: an instruction that asks to increment any numerical values when the answers are non-numeric text. 6 Table 3: KCIF: Performance comparison on non-distractor (EM) and distractor instructions (EM- D). On the right hand side of ‘||’ separator, Performance drop as compared to printing the correct answer label (PCA Label) or text (PCA) on answer-modifying instructions (String/Numeric/Format manipulation, Operations on List (OOL) – conditioned on the answering the question correctly as well as independent operations on list). Barring the PCA Label column, for all columns to its right - lower scores are better. Instruction Category Model EM EM-D PCA label PCA String Numeric FormatOOL Cond.OOL Avg LLama 3.1 8B-NL 0.04 0.09 0.34 -0.29 -0.34 -0.34 -0.34 -0.28 -0.34 -0.33 / -96.2% LLama 3.1 8B-PC 0.09 0.20 0.38 -0.19 -0.36 -0.38 -0.25 -0.25 -0.38 -0.32 / -85.3% Mistral 7B v0.3-NL 0.04 0.15 0.32 -0.16 -0.3 -0.31 -0.32 -0.26 -0.32 -0.3 / -95.0% Mistral-7B v0.3-PC 0.08 0.27 0.37 -0.15 -0.37 -0.37 -0.31 -0.23 -0.37 -0.33 / -88.4% Qwen2.5 7B-NL 0.09 0.30 0.41 -0.12 -0.4 -0.4 -0.31 -0.28 -0.41 -0.36 / -87.5% Qwen2.5 7B-PC 0.11 0.26 0.46 -0.3 -0.44 -0.46 -0.31 -0.29 -0.44 -0.39 / -83.7% Granite 8B Code-NL 0.04 0.12 0.29 -0.19 -0.28 -0.29 -0.27 -0.29 -0.29 -0.28 / -97.2% Granite 8B Code-PC 0.10 0.26 0.33 -0.14 -0.31 -0.33 -0.19 -0.19 -0.32 -0.27 / -81.3% Granite 3.1 8B-NL 0.07 0.18 0.39 -0.29 -0.38 -0.39 -0.37 -0.31 -0.39 -0.37 / -94.6% Granite 3.1 8B-PC 0.07 0.25 0.43 -0.27 -0.43 -0.43 -0.39 -0.32 -0.43 -0.4 / -92.9% Table 4: ComplexBench: Performance of models on different instruction compositions (And, Chain, Selection) along with levels of nesting ( 1,2,≥3) where applicable. And Chain Selection Selection and Chain 1 2 1 2 ≥3 2 ≥3 Llama 3.1 8B-NL 0.61 0.37 0.30 0.44 0.33 0.25 0.29 0.24 Llama 3.1 8B-PC 0.63 0.39 0.37 0.40 0.35 0.31 0.44 0.31 Mistral 7B v0.3-NL 0.51 0.29 0.25 0.35 0.25 0.23 0.25 0.20 Mistral 7B v0.3-PC 0.56 0.35 0.32 0.39 0.31 0.27 0.24 0.22 Qwen2.5 7B-NL 0.61 0.40 0.37 0.47 0.37 0.24 0.37 0.24 Qwen2.5 7B-PC 0.64 0.48 0.41 0.47 0.45 0.38 0.42 0.31 Granite 8B Code-NL 0.53 0.30 0.28 0.44 0.29 0.18 0.19 0.22 Granite 8B Code-PC 0.58 0.36 0.34 0.38 0.32 0.26 0.27 0.19 Granite 3.1 8B-NL 0.62 0.41 0.36 0.46 0.35 0.24 0.39 0.26 Granite 3.1 8B-PC 0.65 0.44 0.42 0.47 0.42 0.41 0.33 0.30 As compared to the baseline task of question-answering by printing the correct answer label (PCA Label), asking models to print the correct answer text (PCA) or applying answer-modifying
https://arxiv.org/abs/2505.18011v1
instruc- tions (string, numeric or format manipulations, operations on lists regardless of whether they are conditional on the answering task) result in a steep drop in performance. It is noteworthy, that while all models (independent of the training paradigm) have steep deterioration in performance, training with pseudo-code instructions results in a lesser drop in performance. Chained and Nested Instructions: We use ComplexBench [Wen et al., 2024] to study how models perform in the presence of conjunctions (And), sequential instructions that are simple chains (Chain), conditional sequential operations (Sequential) at varying depths of nesting and, combinations of such instructions (eg. selections and chain). As can be seen in Table 4, models trained with pseudo-code instructions consistently perform better than those trained with natural language instructions. This holds true for almost all instruction-categories as well as nesting-depths. It is also interesting to note that doing well on a shallow nesting-depth does not automatically mean that the performance trends follow on deeper nesting-depths (eg: LLama 3.1 8B and Granite 8B Code NL models on Selection tasks at nesting levels 1vs the rest). Formatting Instructions: Using the data in ComplexBench, we study the performance of models on formatting instructions - as an be seen in Table 5 all models trained with pseudo-code instructions outperform their natural-language counterparts except on instructions that require length control. We think that since we require LLMs to also generate pseudo-code models before the final answer it might require specialized training to recalibrate length controls. Performance gains on JSON and Bullet formatting are especially notable ( 11−27% relative gains). 7 Table 5: Performance of models on formatting instructions from ComplexBench. Model JSON FormatMarkdown FormatBullet FormatPunctuation Length Start with End with Template Llama 3.1 8B-NL 0.13 0.66 0.45 0.43 0.27 0.17 0.34 0.39 Llama 3.1 8B-PC 0.18 0.81 0.57 0.49 0.26 0.18 0.27 0.40 Mistral 7B v0.3-NL 0.11 0.58 0.45 0.23 0.24 0.15 0.25 0.23 Mistral 7B v0.3-PC 0.16 0.68 0.52 0.29 0.27 0.19 0.27 0.34 Qwen2.5 7B-NL 0.18 0.66 0.49 0.38 0.30 0.15 0.25 0.48 Qwen2.5 7B-PC 0.18 0.74 0.58 0.40 0.28 0.19 0.28 0.47 Granite 8B Code-NL 0.13 0.68 0.42 0.35 0.28 0.14 0.30 0.28 Granite 8B Code-PC 0.18 0.70 0.53 0.42 0.29 0.21 0.31 0.33 Granite 3.1 8B-NL 0.18 0.64 0.53 0.32 0.29 0.18 0.34 0.40 Granite 3.1 8B-PC 0.22 0.77 0.59 0.46 0.27 0.18 0.28 0.48 Table 6: Performance of models on fine-grained constraints from FollowBench Hard Satisfaction Rate (HSR) % Soft Satisfaction Rate (SSR) % Model L1 L2 L3 L4 L5 Avg. L1 L2 L3 L4 L5 Avg. Llama 3.1 8B -NL 51.46 40.31 34.00 19.96 8.28 30.80 51.46 53.55 56.94 45.35 38.99 49.26 Llama 3.1 8B -PC 58.14 45.48 36.22 28.60 23.33 38.35 58.14 52.52 54.23 49.26 49.70 52.77 Mistral 7B v0.3 -NL 47.06 29.79 29.84 16.79 14.25 27.55 47.06 44.78 48.26 41.73 40.31 44.43 Mistral 7B v0.3 -PC 62.70 46.90 37.82 14.78 23.46 37.13 62.70 60.16 56.50 46.92 50.33 55.32 Qwen2.5 7B -NL 50.21 34.80 32.75 25.84 14.92 31.71 50.21 52.02 53.04 46.98 42.67 48.98 Qwen2.5 7B -PC 61.3250.72 35.31 27.45 17.81 38.52 61.32 60.62 56.42
https://arxiv.org/abs/2505.18011v1
52.33 50.31 56.20 Granite 8B Code -NL 44.92 36.70 33.13 21.01 13.61 29.88 44.92 50.35 50.36 40.70 39.44 45.15 Granite 8B Code-PC 43.86 30.42 25.18 12.68 10.28 24.48 43.86 41.33 40.84 31.12 31.25 37.68 Granite 3.1 8B -NL 67.38 55.59 49.77 38.04 20.69 46.29 67.38 66.51 66.17 60.26 52.68 62.60 Granite 3.1 8B -PC 61.01 51.46 40.38 38.65 19.05 42.11 61.01 61.01 55.78 56.56 49.06 56.68 Fine-grained Instructions: FollowBench [Jiang et al., 2024b] includes instruction tasks where fine-grained constraints are incrementally introduced at each ‘level’. The dataset consists of 5levels of constraints applied to different task types - ‘style’, ‘situations’, ‘content’, ‘format’ and ‘examples’. Models are scored by assessing how well all constraints are satisfied for each instruction (hard- constraint satisfaction rate) and by assessing whether a particular constraint is satisfied, regardless of whether co-occurring constraints are satisfied are not (soft-constraint satisfaction rate). We report the results of this experiment in Table 6. We find that with the exception of Granite 8B Code and Granite 3.1 8B models,*all other models benefit from training with pseudo-code instructions. Further, performance gains when pseudo-code training is used are especially significant at higher (more complex) levels of constraints. 3.3.2 Code Tasks Though most modern LLMs are pre-trained on a substantial amount of code data, our training mixture included only a small proportion of code-related tasks ( 15% of the training data). Further, these tasks were treated no differently than other NL tasks. For example, for a code-synthesis task, before attempting the code-task models first need to generate a pseudo-code of the instruction and then create the code requested. Also, given that our SFT training uses a python-like representation for pseudo-code, we investigate how the performance is affected by the inclusion of such pseudo-code using the HumanEvalPack code benchmark [Muennighoff et al., 2024]. The HumanEvalPack benchmark consists of 3main categories of tasks including – Code Repair, Code Explanation, Code Synthesis across 6programming languages – Python, JavaScript, Java, Go, C++ and Rust. Following prior work [Chen et al., 2021, Muennighoff et al., 2024], we evaluate the performance of all code tasks using the Pass@1 metric, which measures the rate at which the first generated solution passes all test cases upon execution. Based on Table 7, we find that pseudo-code (PC) instruction-training using our mixture consistently reduces performance across all models and tasks compared to natural language (NL) tuning. The drop is especially sharp for Python tasks; supporting the hypothesis that our Python-like pseudo-code *We inspected the results from the Granite models but could not find a pattern that explains this behavior. 8 Table 7: Model performance on low-score HumanEval benchmarks. Metrics: FixD = FixDocs, FixT = FixTests, Synth = Synthesize, ExplSyn = ExplainSynthesize, Py = Python, Oth = Other. Model FixD-Py FixD-Oth FixT-Py FixT-Oth Synth-Py Synth-Oth ExplSyn-Py ExplSyn-Oth Avg Llama 3.1 8B -NL 0.439 0.111 0.22 0.066 0.402 0.104 0.268 0.087 0.212 Llama 3.1 8B -PC 0.189 0.039 0.122 0.024 0.262 0.121 0.299 0.109 0.146 Mistral 7B v0.3 -NL 0.177 0.043 0.079 0.029 0.250 0.071 0.220 0.051 0.115 Mistral 7B v0.3 -PC 0.085 0.009 0.030 0.002 0.195 0.080
https://arxiv.org/abs/2505.18011v1
0.122 0.040 0.071 Qwen2.5 7B -NL 0.720 0.205 0.500 0.160 0.604 0.184 0.543 0.146 0.383 Qwen2.5 7B -PC 0.268 0.154 0.457 0.12 0.055 0.184 0.226 0.095 0.195 Granite 8B Code -NL 0.634 0.200 0.396 0.129 0.537 0.155 0.427 0.133 0.326 Granite 8B Code -PC 0.512 0.145 0.299 0.111 0.372 0.156 0.360 0.132 0.261 Granite 3.1 8B -NL 0.488 0.139 0.256 0.094 0.476 0.144 0.311 0.100 0.251 Granite 3.1 8B -PC 0.110 0.032 0.030 0.013 0.390 0.129 0.140 0.052 0.112 affects real Python code understanding. Code synthesis is the most affected task, with models like Qwen2.5 7B dropping from 0.604 to 0.055 (Pass@1). Code repair and explanation also degrade, though less severely. This suggests that training with pseudo-code instructions requires careful consideration of the nature and amount of code tasks provided during SFT. 4 Related Work Our work is most closely related to the creation of intermediate thinking/reasoning representations LLMs. Most often, these are applied at inference time - styles include simple prompts to elicit thinking [Kojima et al., 2022], instructions that guide LLMs to breakdown tasks and plan [Xu et al., 2023], structured representation and meta chain-of-thought (eg: Tree Of Thoughts [Yao et al., 2023a], Graph of Thoughts [Besta et al., 2024], Table of Thoughts [Sun et al., 2025], Buffer of Thoughts [Yang et al., 2024b]), specialized representations for tasks (eg: Tabular Reasoning [Wang et al., 2024], Numerical Reasoning [Chen et al., 2023], Code Generation [Li et al., 2025], Conditional QA [Lin et al., 2024]), etc. Additionally, prompting strategies that use code-like representation to aid thinking have also been developed [Mishra et al., 2023, Li et al., 2024a, Puerto et al., 2024]. Our work extends this body of work on the use of code-like representations - specifically, instead of relying on tedious few-shot demonstrations to elicit code-generation during inference, we fine-tune models to do this as part of a ‘thinking’ step before returning the answer. We mirror the approach taken by Mishra et al. [2023] and Puerto et al. [2024] of using pseudo-code as ‘instructions’ during inference and do not rely on the use of a code-emulator for line-by-line attempted execution used in Chain-of-Code [Li et al., 2024a]. Thus, our models do not require external code-execution environments or emulators can be used as drop-in replacements for their corresponding NL-instruction-tuned model counterparts. 5 Conclusion and Future Work In this paper experimented with an instruction-tuning strategy that trains models to first re-express the instruction in pseudo-code before attempting to answer the task. We modified a portion of the publicly available TuluV2 dataset [Ivison et al., 2023] and trained 5publicly available base models. We conducted rigorous experiments on a variety of instruction-following tasks and demonstrated the effectiveness of fine-tuning with pseudo-code as compared to only using natural language instructions. Further, we demonstrated that the performance on mathematical as well as common-sense reasoning is, on average, comparable or better than when models are trained only using NL instructions. Our work opens up interesting directions of future work. Since we wanted to study the effect of using pseudo-code we made sure they do not contain any chain-of-thought
https://arxiv.org/abs/2505.18011v1
or contain other forms of instruction enrichment [Qi et al., 2024, Lou et al., 2024]. Further, our results from Table 2 suggests that for some model-task combinations involving common-sense and reasoning, training with vanilla NL instructions may be better. This suggests it may be useful to explore techniques to combine capabilities such as using mixture-of-experts architectures [Jiang et al., 2024a], model merging techniques [Yang et al., 2024a], joint NL and NL-PC training mixtures, experimentation with pretraining and more. Further, our instruction-mixture did not use a lot of code-tasks; experimenting with alternative strategies of mixing code-tasks such as including code-tasks without pseudo-code generation and increasing the proportion of such tasks in the mixture could help mitigate some of 9 the issues related to code-task deterioration. Lastly, we believe training models to ‘think’ and ‘plan’ in ‘code’ could enable even more powerful applications where models could exploit frameworks such as LMQL [Beurer-Kellner et al., 2023] for control-flow to support mixed LLM-executable and code-executable programs. References Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) , pages 2357–2367, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1245. URL https://aclanthology.org/N19-1245/ . Debangshu Banerjee, Tarun Suresh, Shubham Ugare, Sasa Misailovic, and Gagandeep Singh. Crane: Reasoning with constrained llm generation, 2025. URL https://arxiv.org/abs/2502.09061 . Hritik Bansal, Arian Hosseini, Rishabh Agarwal, Vinh Q. Tran, and Mehran Kazemi. Smaller, weaker, yet better: Training LLM reasoners via compute-optimal sampling. In The Thirteenth International Conference on Learning Representations , 2025. URL https://openreview.net/forum?id= 3OyaXFQuDl . Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 38, pages 17682–17690, 2024. Luca Beurer-Kellner, Marc Fischer, and Martin Vechev. Prompting is programming: A query language for large language models. Proc. ACM Program. Lang. , 7(PLDI), June 2023. doi: 10.1145/3591300. URL https://doi.org/10.1145/3591300 . Yonatan Bisk, Rowan Zellers, Ronan Le bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. Proceedings of the AAAI Conference on Artificial Intelligence , 34(05):7432–7439, Apr. 2020. doi: 10.1609/aaai.v34i05.6239. URL https://ojs.aaai.org/index.php/AAAI/article/view/6239 . Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation. https: //github.com/sahil280114/codealpaca , 2023. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv e-prints , pages arXiv–2107, 2021. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine Learning Research , 2023. ISSN 2835-8856. URL https://openreview.net/forum?id= YfZ4ZPt8zd . Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you
https://arxiv.org/abs/2505.18011v1
have solved question answering? try arc, the ai2 reasoning challenge. arXiv:1803.05457v1 , 2018. K. Cobbe, V . Kosaraju, J. Tworek, H. Jun, M. Plappert, D. Hendrycks, S. Goel, M. Fath, S. Jha, and P. Ghasemi. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 , 2021. Tri Dao. Flashattention-2: Faster attention with better parallelism and work partitioning. In The Twelfth International Conference on Learning Representations , 2024. URL https:// openreview.net/forum?id=mZn2Xyh9Ec . 10 DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, S. S. Li, Shuang Zhou, Shaoqing Wu, Shengfeng Ye, Tao Yun, Tian Pei, Tianyu Sun, T. Wang, Wangding Zeng, Wanjia Zhao, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, W. L. Xiao, Wei An, Xiaodong Liu, Xiaohan Wang, Xiaokang Chen, Xiaotao Nie, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, X. Q. Li, Xiangyue Jin, Xiaojin Shen, Xiaosha Chen, Xiaowen Sun, Xiaoxiang Wang, Xinnan Song, Xinyi Zhou, Xianzu Wang, Xinxia Shan, Y . K. Li, Y . Q. Wang, Y . X. Wei, Yang Zhang, Yanhong Xu, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Wang, Yi Yu, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yuan Ou, Yuduan Wang, Yue Gong, Yuheng Zou, Yujia He, Yunfan Xiong, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Y . X. Zhu, Yanhong Xu, Yanping Huang, Yaohui Li, Yi Zheng, Yuchen Zhu, Yunxian Ma, Ying Tang, Yukun Zha, Yuting Yan, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhicheng Ma, Zhigang Yan, Zhiyu Wu, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Zizheng Pan, Zhen Huang, Zhipeng Xu, Zhongyu Zhang, and Zhen Zhang. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948 . Guanting Dong, Keming Lu, Chengpeng Li, Tingyu Xia, Bowen Yu,
https://arxiv.org/abs/2505.18011v1
Chang Zhou, and Jingren Zhou. Self-play with execution feedback: Improving instruction-following capabilities of large language models. In The Thirteenth International Conference on Learning Representations , 2025. URL https://openreview.net/forum?id=cRR0oDFEBC . Shirley Anugrah Hayati, Taehee Jung, Tristan Bodding-Long, Sudipta Kar, Abhinav Sethy, Joo- Kyung Kim, and Dongyeop Kang. Chain-of-instructions: Compositional instruction tuning on large language models. Proceedings of the AAAI Conference on Artificial Intelligence , 39(22): 24005–24013, Apr. 2025. doi: 10.1609/aaai.v39i22.34574. URL https://ojs.aaai.org/ index.php/AAAI/article/view/34574 . Qianyu He, Jie Zeng, Qianxi He, Jiaqing Liang, and Yanghua Xiao. From complex to simple: Enhancing multi-constraint complex instruction following ability of large language models. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 10864–10882, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-emnlp.637. URL https://aclanthology.org/2024.findings-emnlp.637/ . Yerin Hwang, Yongil Kim, Jahyun Koo, Taegwan Kang, Hyunkyung Bae, and Kyomin Jung. Llms can be easily confused by instructional distractions, 2025. URL https://arxiv.org/abs/2502. 04362 . Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. Camels in a changing climate: Enhancing lm adaptation with tulu 2, 2023. URL https://arxiv.org/abs/ 2311.10702 . Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. URL https://arxiv.org/ abs/2310.06825 . 11 Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie- Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mixtral of experts, 2024a. URL https://arxiv.org/abs/2401.04088 . Yuxin Jiang, Yufei Wang, Xingshan Zeng, Wanjun Zhong, Liangyou Li, Fei Mi, Lifeng Shang, Xin Jiang, Qun Liu, and Wei Wang. FollowBench: A multi-level fine-grained constraints following benchmark for large language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 4667–4688, Bangkok, Thailand, August 2024b. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.257. URL https://aclanthology. org/2024.acl-long.257/ . Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In Proceedings of the 36th International Conference on Neural Information Processing Systems , NIPS ’22, Red Hook, NY , USA, 2022. Curran Associates Inc. ISBN 9781713871088. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles , 2023. Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, and Brian Ichter. Chain of code: reasoning with a language model-
https://arxiv.org/abs/2505.18011v1
augmented code emulator. In Proceedings of the 41st International Conference on Machine Learning , ICML’24. JMLR.org, 2024a. Jia Li, Ge Li, Yongmin Li, and Zhi Jin. Structured chain-of-thought prompting for code generation. ACM Transactions on Software Engineering and Methodology , 34(2):1–23, 2025. Shiyang Li, Jun Yan, Hai Wang, Zheng Tang, Xiang Ren, Vijay Srinivasan, and Hongxia Jin. Instruction-following evaluation through verbalizer manipulation. In Kevin Duh, Helena Gomez, and Steven Bethard, editors, Findings of the Association for Computational Linguistics: NAACL 2024 , pages 3678–3692, Mexico City, Mexico, June 2024b. Association for Computational Lin- guistics. doi: 10.18653/v1/2024.findings-naacl.233. URL https://aclanthology.org/2024. findings-naacl.233/ . Jiuheng Lin, Yuxuan Lai, and Yansong Feng. Chain of condition: Construct, verify and solve conditions for conditional question answering. In Yaser Al-Onaizan, Mohit Bansal, and Yun- Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 16596–16611, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-emnlp.968. URL https://aclanthology.org/ 2024.findings-emnlp.968/ . Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V . Le, Barret Zoph, Jason Wei, and Adam Roberts. The flan collection: designing data and methods for effective instruction tuning. In Proceedings of the 40th International Conference on Machine Learning , ICML’23. JMLR.org, 2023a. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688 , 2023b. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations , 2019. URL https://openreview.net/forum?id= Bkg6RiCqY7 . Renze Lou, Kai Zhang, Jian Xie, Yuxuan Sun, Janice Ahn, Hanzi Xu, Yu Su, and Wenpeng Yin. MUFFIN: Curating multi-faceted instructions for improving instruction following. In The Twelfth International Conference on Learning Representations , 2024. URL https://openreview.net/ forum?id=1vrS1zwekw . 12 Mayank Mishra, Prince Kumar, Riyaz Bhat, Rudra Murthy, Danish Contractor, and Srikanth Tamil- selvam. Prompting with pseudo-code instructions. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing , pages 15178–15197, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.939. URL https://aclanthology.org/2023. emnlp-main.939/ . Mayank Mishra, Matt Stallone, Gaoyuan Zhang, Yikang Shen, Aditya Prasad, Adriana Meza Soria, Michele Merler, Parameswaran Selvam, Saptha Surendran, Shivdeep Singh, Manish Sethi, Xuan- Hong Dang, Pengyuan Li, Kun-Lung Wu, Syed Zawad, Andrew Coleman, Matthew White, Mark Lewis, Raju Pavuluri, Yan Koyfman, Boris Lublinsky, Maximilien de Bayser, Ibrahim Abdelaziz, Kinjal Basu, Mayank Agarwal, Yi Zhou, Chris Johnson, Aanchal Goyal, Hima Patel, Yousaf Shah, Petros Zerfos, Heiko Ludwig, Asim Munawar, Maxwell Crouse, Pavan Kapanipathi, Shweta Salaria, Bob Calio, Sophia Wen, Seetharami Seelam, Brian Belgodere, Carlos Fonseca, Amith Singhee, Nirmit Desai, David D. Cox, Ruchir Puri, and Rameswar Panda. Granite code models: A family of open foundation models for code intelligence, 2024. URL https://arxiv.org/abs/ 2405.04324 . Arindam Mitra, Luciano Del Corro, Guoqing Zheng, Shweti Mahajan, Dany Rouhana, Andres Codas, Yadong Lu, Wei ge Chen, Olga Vrousgos, Corby Rosset, Fillipe Silva, Hamed Khanpour, Yash Lara, and Ahmed Awadallah. Agentinstruct: Toward generative teaching with
https://arxiv.org/abs/2505.18011v1
agentic flows, 2024. URL https://arxiv.org/abs/2407.03502 . Niklas Muennighoff, Qian Liu, Armel Randy Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro V on Werra, and Shayne Longpre. Octopack: Instruction tuning code large language models. In The Twelfth International Conference on Learning Representations , 2024. URL https://openreview.net/forum?id=mw1PWNSWZP . Rudra Murthy, Praveen Venkateswaran, Prince Kumar, and Danish Contractor. Evaluating the instruction-following abilities of language models using knowledge tasks, 2025. URL https: //arxiv.org/abs/2410.12972 . Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277 , 2023. Chau Minh Pham, Simeng Sun, and Mohit Iyyer. Suri: Multi-constraint instruction following in long- form text generation. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 1722–1753, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/ 2024.findings-emnlp.94. URL https://aclanthology.org/2024.findings-emnlp.94/ . Haritz Puerto, Martin Tutek, Somak Aditya, Xiaodan Zhu, and Iryna Gurevych. Code prompt- ing elicits conditional reasoning abilities in Text+Code LLMs. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Meth- ods in Natural Language Processing , pages 11234–11258, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.629. URL https://aclanthology.org/2024.emnlp-main.629/ . Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, and Juanzi Li. Constraint back-translation improves complex instruction following of large language models. CoRR , abs/2410.24175, 2024. URL https://doi.org/10.48550/arXiv.2410.24175 . Qwen, :, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115 . Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model, 2024. URL https://arxiv.org/abs/2305.18290 . 13 Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: an adver- sarial winograd schema challenge at scale. Commun. ACM , 64(9):99–106, August 2021. ISSN 0001-0782. doi: 10.1145/3474381. URL https://doi.org/10.1145/3474381 . Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y . K. Li, Y . Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathemat- ical reasoning in open language models, 2024. URL https://arxiv.org/abs/2402.03300 . Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters, 2024. URL https://arxiv.org/abs/ 2408.03314 . Haoran Sun, Lixin Liu, Junjie Li, Fengyu Wang, Baohua Dong, Ran Lin, and Ruohui Huang. Conifer: Improving complex constrained instruction-following ability of large language models. arxiv preprint arXiv:2404.02823 , 2024. URL https://arxiv.org/abs/2404.02823 . Zhenjie Sun, Naihao Deng, Haofei Yu, and Jiaxuan You. Table as thought: Exploring structured thoughts in
https://arxiv.org/abs/2505.18011v1
llm reasoning, 2025. URL https://arxiv.org/abs/2501.02152 . Joshua Vendrow, Edward Vendrow, Sara Beery, and Aleksander Madry. Do large language model benchmarks test reliability?, 2025. URL https://arxiv.org/abs/2502.03461 . Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing , pages 5085–5109, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.340. URL https://aclanthology.org/2022. emnlp-main.340/ . Zilong Wang, Hao Zhang, Chun-Liang Li, Julian Martin Eisenschlos, Vincent Perot, Zifeng Wang, Lesly Miculicich, Yasuhisa Fujii, Jingbo Shang, Chen-Yu Lee, and Tomas Pfister. Chain-of-table: Evolving tables in the reasoning chain for table understanding. In The Twelfth International Conference on Learning Representations , 2024. URL https://openreview.net/forum?id= 4L0xnS4GQM . Bosi Wen, Pei Ke, Xiaotao Gu, Lindong Wu, Hao Huang, Jinfeng Zhou, Wenchuang Li, Binxin Hu, Wendy Gao, Jiaxing Xu, Yiming Liu, Jie Tang, Hongning Wang, and Minlie Huang. Bench- marking complex instruction-following with multiple constraints composition. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track , 2024. URL https://openreview.net/forum?id=U2aVNDrZGx . Congying Xia, Chen Xing, Jiangshu Du, Xinyi Yang, Yihao Feng, Ran Xu, Wenpeng Yin, and Caiming Xiong. FOFO: A benchmark to evaluate LLMs’ format-following capability. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 680–699, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024. acl-long.40. URL https://aclanthology.org/2024.acl-long.40/ . Yu Xia, Rui Wang, Xu Liu, Mingyan Li, Tong Yu, Xiang Chen, Julian McAuley, and Shuai Li. Beyond chain-of-thought: A survey of chain-of-X paradigms for LLMs. In Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, and Steven Schockaert, editors, Proceedings of the 31st International Conference on Computational Linguistics , pages 10795–10809, Abu Dhabi, UAE, January 2025. Association for Computational Linguistics. URL https://aclanthology.org/2025.coling-main.719/ . 14 Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. Rewoo: Decoupling reasoning from observations for efficient augmented language models, 2023. URL https://arxiv.org/abs/2305.18323 . Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Qingwei Lin, and Daxin Jiang. WizardLM: Empowering large pre-trained language models to follow complex instructions. In The Twelfth International Conference on Learning Representations , 2024. URL https://openreview.net/forum?id=CfXh93NDgH . Enneng Yang, Li Shen, Guibing Guo, Xingwei Wang, Xiaochun Cao, Jie Zhang, and Dacheng Tao. Model merging in llms, mllms, and beyond: Methods, theories, applications and opportunities, 2024a. URL https://arxiv.org/abs/2408.07666 . Ling Yang, Zhaochen Yu, Tianjun Zhang, Shiyi Cao, Minkai Xu, Wentao Zhang, Joseph E. Gonzalez, and Bin CUI. Buffer of thoughts: Thought-augmented
https://arxiv.org/abs/2505.18011v1
reasoning with large language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems , 2024b. URL https://openreview.net/forum?id=ANO1i9JPtb . Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems , volume 36, pages 11809–11822. Curran Associates, Inc., 2023a. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/ 271db9922b8d1f4dd7aaef84ed5ac703-Paper-Conference.pdf . Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR) , 2023b. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Anna Korhonen, David Traum, and Lluís Màrquez, editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 4791–4800, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1472. URL https://aclanthology.org/P19-1472/ . Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. Instruction-following evaluation for large language models, 2023. URL https://arxiv.org/abs/2311.07911 . A Overview of Appendix Our appendix is organized as follows. We begin by including PC and NL training samples in Appendix Section B. Then we include sample outputs from trained models in Appendix Section C. We then include the quantitative and qualitative results of running the Repair stage in Appendix Section D. We include data specific instances pseudo-code programs used in the creation of training data in Appendix Section E and then provide details on how model outputs are processed for evaluation in Appendix Section F. Finally, we include some details on statistical significance (Appendix Section G) and conclude the Appendix by discussing limitations of our work in Appendix Section H. B Training Sample Figure 3 and 4 shows training samples for PC and NL respectively. C Qualitative Samples C.1 IFEval C.1.1 Example 1 Input Instruction: 15 Figure 3: PC Training Sample Write a short story in third person narration about a protagonist who has to make an important career decision. Generate pseudocode chain-of-thought reasoning first and then generate the solution.Input [PSEUDOCODE] def code\_write\_short\_story\_career\_decision(input: str) -> str: \"\"\" Write a short story in third person narration about a protagonist who has to make an important career decision. Parameters: str: input Returns: str: code \"\"\" # this function takes the input string as an input protagonist\_name = extract\_name(input) # this function generates a code to write a short story in third person narration # about a protagonist who has to make an important career decision code = generate\_code(input={\"protagonist\_name\": protagonist\_name, \"task\": \"write short story about career decision\"}, story\_type=\"third person narration\") return code >>> code\_write\_short\_story\_career\_decision(input) [/PSEUDOCODE] Sophie sat at her desk, staring blankly at the computer screen. Her mind was racing as she weighed the options in front of her. She had been offered a promotion to a senior management position within her company, but it meant moving across the country and leaving her friends and family behind. At the same
https://arxiv.org/abs/2505.18011v1
time, her dream job at a rival company had finally opened up, and she had been invited for an interview. The thought of leaving her comfortable life behind, to take a chance on something new and uncertain, made Sophie feel anxious, but also excited. She had always been one to take risks, and it had paid off in the past. But now, with such a big decision to make, Sophie couldn’t help but feel overwhelmed. After much deliberation, Sophie made her choice. She packed her bags, said goodbye to her loved ones, and moved across the country for the new job. It was a difficult first few months, adjusting to a new city and a new company culture. But as time went by, Sophie realized that she had made the best decision for her career. The job was challenging, but also rewarding, and she was surrounded by passionate, driven colleagues who inspired her to grow and develop. In the end, Sophie’s bold move paid off. She was thriving in her new role, and had made new friends and connections in her new city. Looking back, Sophie was grateful for the opportunity to take a risk and make a change, and was confident that she had made the right decision for her career.Output Figure 4: NL Training Sample Write a short story in third person narration about a protagonist who has to make an important career decision.Input Sophie sat at her desk, staring blankly at the computer screen. Her mind was racing as she weighed the options in front of her. She had been offered a promotion to a senior management position within her company, but it meant moving across the country and leaving her friends and family behind. At the same time, her dream job at a rival company had finally opened up, and she had been invited for an interview. The thought of leaving her comfortable life behind, to take a chance on something new and uncertain, made Sophie feel anxious, but also excited. She had always been one to take risks, and it had paid off in the past. But now, with such a big decision to make, Sophie couldn’t help but feel overwhelmed. After much deliberation, Sophie made her choice. She packed her bags, said goodbye to her loved ones, and moved across the country for the new job. It was a difficult first few months, adjusting to a new city and a new company culture. But as time went by, Sophie realized that she had made the best decision for her career. The job was challenging, but also rewarding, and she was surrounded by passionate, driven colleagues who inspired her to grow and develop. In the end, Sophie’s bold move paid off. She was thriving in her new role, and had made new friends and connections in her new city. Looking back, Sophie was grateful for the opportunity to take a risk and make a change, and was confident that she had made the right decision for her career.Output Write a 300+ word summary of t h e
https://arxiv.org/abs/2505.18011v1
w i k i p e d i a page " h t t p s : / / en . w i k i p e d i a . org / w iki / Raymond_III , _ C o u n t _ o f _ T r i p o l i " . Do n o t use any commas and h i g h l i g h t a t l e a s t 3 s e c t i o n s t h a t has t i t l e s i n 16 markdown format , f o r example *h i g h l i g h t e d s e c t i o n p a r t 1 *,*h i g h l i g h t e d s e c t i o n p a r t 2 *,* h i g h l i g h t e d s e c t i o n p a r t 3 *. Output from LLama 3.1 8B-PC [PSEUDOCODE] d e f generate_summary ( u r l : s t r ) −> s t r : " " " G e n e r a t e a 300+ word summary of t h e w i k i p e d i a page . P a r a m e t e r s : s t r : u r l R e t u r n s : s t r : summary " " " # t h i s f u n c t i o n t a k e s t h e u r l as i n p u t and g e n e r a t e s a summary summary = g e n e r a t e _ w i k i p e d i a _ s u m m a r y ( u r l = u r l , word_count =300) # t h i s f u n c t i o n h i g h l i g h t s t h e s e c t i o n s wit h t i t l e s i n markdown f o r m a t summary = h i g h l i g h t _ s e c t i o n s ( summary ) r e t u r n summary >>> generate_summary ( u r l ) [ / PSEUDOCODE] Raymond I I I , Count of T r i p o l i ( c . 1140 − 1187) , a l s o known as Raymond of Antioch , was a p r i n c e of t h e County of T r i p o l i and t h e Kingdom of J e r u s a l e m . He was
https://arxiv.org/abs/2505.18011v1
t h e son of Count Pons of T r i p o l i and h i s wife , Melisende of J e r u s a l e m . Raymond was a member of t h e House of T r i p o l i , a b r a n c h of t h e House of Anjou . Raymond I I I i s b e s t known f o r h i s r o l e i n t h e T h i r d Crusade , where he was a v a s s a l of King R i c h a r d t h e L i o n h e a r t of England . He was one of t h e l e a d e r s of t h e C r u s a d e r army t h a t d e f e a t e d t h e Ayyubid f o r c e s a t t h e B a t t l e of Arsuf i n 1 1 9 1 . He a l s o p a r t i c i p a t e d i n t h e S i e g e of Acre , which l a s t e d f o r two y e a r s and i s c o n s i d e r e d one of t h e b l o o d i e s t b a t t l e s of t h e C r u s a d e s . In a d d i t i o n t o h i s m i l i t a r y e x p l o i t s , Raymond was a l s o i n v o l v e d i n diplomacy . He was one of t h e s i g n a t o r i e s of t h e T r e a t y of Ramla , which was s i g n e d i n 1157 between t h e Kingdom of J e r u s a l e m and t h e F a t i m i d C a l i p h a t e . He a l s o s e r v e d as a m e d i a t o r between t h e C r u s a d e r s t a t e s and t h e B y z a n t i n e Empire . Raymond I I I was m a r r i e d t w i c e . His f i r s t w if e was E s c h i v a of
https://arxiv.org/abs/2505.18011v1
Armenia , wi th whom he had two sons , Bohemond and Baldwin . A f t e r h e r death , he m a r r i e d A l i c e of Champagne , w it h whom he had two d a u g h t e r s , Melisende and P h i l i p p a . Raymond I I I d i e d i n 1187 d u r i n g t h e B a t t l e of Cresson . His d e a t h was a major blow t o t h e C r u s a d e r s t a t e s , as he was one of t h e i r most c a p a b l e l e a d e r s . He was s u c c e e d e d by h i s son Bohemond , who was onl y a c h i l d a t t h e time , and t h e County of T r i p o l i was r u l e d by r e g e n t s u n t i l Bohemond came of age . Output from LLama 3.1 8B-NL Raymond I I I , Count of T r i p o l i ( c . 1135 − 1187) was t h e r u l e r of t h e County of T r i p o l i from 1182 t o 1 1 8 7 . He was t h e son of Raymond I I of T r i p o l i and h i s wi fe Hodierna of Toron . Raymond I I I was i n v o l v e d i n t h e C r u s a d e s and was c a p t u r e d by S a l a d i n i n 1187 d u r i n g t h e B a t t l e of H a t t i n . He was r e l e a s e d a f t e r t h e payment of a ransom . Raymond I I I was a l s o i n v o l v e d i n t h e n e g o t i a t i o n s f o r t h e T h i r d Crusade . Raymond I I I was m a r r i e d t o E s c h i v a of Bures , w it h whom he had two c h i l d r e n , Raymond and A l i c e . Raymond I I I was i n v o l v e d i n
https://arxiv.org/abs/2505.18011v1
t h e C r u s a d e s and was c a p t u r e d by S a l a d i n i n 1187 d u r i n g t h e B a t t l e of H a t t i n . He was r e l e a s e d a f t e r t h e payment of a ransom . Raymond I I I was a l s o i n v o l v e d i n t h e n e g o t i a t i o n s f o r t h e T h i r d Crusade . Raymond I I I , Count of T r i p o l i was t h e son of Raymond I I of T r i p o l i and h i s w if e Hodierna of Toron . Raymond I I I was i n v o l v e d i n t h e C r u s a d e s and was c a p t u r e d by S a l a d i n i n 1187 d u r i n g t h e B a t t l e of H a t t i n . He was r e l e a s e d a f t e r t h e payment of a ransom . Raymond I I I was a l s o i n v o l v e d i n t h e n e g o t i a t i o n s f o r t h e T h i r d Crusade . Raymond I I I was m a r r i e d t o E s c h i v a of Bures , w it h whom he had two c h i l d r e n , Raymond and A l i c e . Comment: As can be seen, Llama 3.1 8B-PC model first generated pseudo-code and then the final response. Not all aspects of the instruction are included in the generated pseudo-code in this example. Further, both models do not get the requirement of using section highlights correctly. C.1.2 Example 2 Input Instruction: Write a d e t a i l e d r e v i e w of t h e movie " The S o c i a l Network " . Your e n t i r e r e s p o n s e s h o u l d be i n E n g l i s h and a l l lower c a s e ( no c a p i t a l l e t t e r s w h a t s o e v e r ) . Output from Qwen2.5 7B-PC d e f
https://arxiv.org/abs/2505.18011v1
r e v i e w _ m o v i e _ s o c i a l _ n e t w o r k ( ) −> s t r : " " " Write a d e t a i l e d r e v i e w of t h e movie " The S o c i a l Network " . The e n t i r e r e s p o n s e s h o u l d be i n E n g l i s h and a l l lower c a s e ( no c a p i t a l l e t t e r s w h a t s o e v e r ) . 17 R e t u r n s : s t r : r e s p o n s e " " " # t h i s f u n c t i o n g e n e r a t e s a d e t a i l e d r e v i e w of t h e movie " The S o c i a l Network " r e v i e w = g e n e r a t e _ m o v i e _ r e v i e w ( movie_name=" The S o c i a l Network " , l a n g u a g e =" E n g l i s h " , c a s e =" lower " ) r e t u r n r e v i e w >>> r e v i e w _ m o v i e _ s o c i a l _ n e t w o r k ( ) [ / PSEUDOCODE] t h e s o c i a l network i s a c a p t i v a t i n g and e n g a g i n g f i l m t h a t t e l l s t h e s t o r y of mark z u c k e r b e r g and t h e c r e a t i o n of f a c e b o o k . t h e f i l m i s well − c r a f t e d and t h e a c t i n g i s top − notch , w it h each c h a r a c t e r p o r t r a y e d i n a nuanced and r e a l i s t i c manner . t h e s t o r y i s e n g a g i n g and t h e p a c i n g
https://arxiv.org/abs/2505.18011v1
i s well − ba la n ce d , making f o r a smooth and e n j o y a b l e viewing e x p e r i e n c e . t h e f i l m a l s o does a g r e a t j o b of e x p l o r i n g t h e e t h i c a l and s o c i a l i m p l i c a t i o n s of facebook , making i t a t h o u g h t − p r o v o k i n g and r e l e v a n t a d d i t i o n t o t h e f i l m canon . o v e r a l l , t h e s o c i a l network i s a must − s e e f o r anyone i n t e r e s t e d i n t e c h n o l o g y , e n t r e p r e n e u r s h i p , or s i m p l y a well − t o l d s t o r y . Output from Qwen2.5 7B-PC The S o c i a l Network i s a 2010 b i o g r a p h i c a l drama f i l m d i r e c t e d by David F i n c h e r , based on t h e book " The A c c i d e n t a l B i l l i o n a i r e s : The Founding of Facebook , t h e E a r l y Years of Mark Z u c k e r b e r g " by Ben Mezrich . The f i l m s t a r s J e s s e E i s e n b e r g as Mark Zuckerberg , Andrew G a r f i e l d as Eduardo S a v e r i n , and J u s t i n T i m b e r l a k e as Sean P a r k e r . The movie t e l l s t h e s t o r y of t h e c r e a t i o n of Facebook and t h e l e g a l b a t t l e t h a t ensued between Z u c k e r b e r g and h i s f o r m e r b u s i n e s s p a r t n e r s
https://arxiv.org/abs/2505.18011v1
. The f i l m i s well − c r a f t e d and engaging , w it h a s t r o n g p e r f o r m a n c e from J e s s e E i s e n b e r g as Z u c k e r b e r g . E i s e n b e r g p e r f e c t l y c a p t u r e s t h e c h a r a c t e r ’ s mix of i n t e l l i g e n c e , charm , and a r r o g a n c e , making him b ot h l i k a b l e and r e l a t a b l e . The s u p p o r t i n g c a s t i s a l s o s t r o n g , wi th Andrew G a r f i e l d d e l i v e r i n g a nuanced p e r f o r m a n c e as Eduardo S a v e r i n , and J u s t i n T i m b e r l a k e as t h e c h a r i s m a t i c Sean P a r k e r . The S o c i a l Network i s a well − paced f i l m t h a t keeps t h e a u d i e n c e engaged t h r o u g h o u t . The s t o r y i s t o l d i n a non − l i n e a r f a s h i o n , jumping back and f o r t h between d i f f e r e n t tim e p e r i o d s , which adds t o t h e t e n s i o n and keeps t h e a u d i e n c e g u e s s i n g . The f i l m a l s o does a good j o b of b a l a n c i n g t h e drama and t h e humor , making i t a well − rounded and e n j o y a b l e e x p e r i e n c e . One of t h e h i g h l i g h t s of t h e f i l m i s t h e p o r t r a y a l
https://arxiv.org/abs/2505.18011v1
of t h e l e g a l b a t t l e between Z u c k e r b e r g and h i s f o r m e r b u s i n e s s p a r t n e r s . The t e n s i o n and drama of t h e courtroom s c e n e s a r e p a l p a b l e , and t h e f i l m does a good j o b of showing t h e i m p a c t t h a t t h e l a w s u i t had on Z u c k e r b e r g ’ s l i f e and c a r e e r . O v e r a l l , The S o c i a l Network i s a well − c r a f t e d and e n g a g i n g f i l m t h a t t e l l s a c o m p e l l i n g s t o r y . The p e r f o r m a n c e s a r e s t r o n g , t h e p a c i n g i s good , and t h e f i l m does a good j o b of b a l a n c i n g t h e drama and t h e humor . Highly recommended . Comment: As can be seen, Qwen2.5 7B-PC model first generated pseudo-code and then the final response. It follows the instruction correctly, while in this particular example Qwen2.5 7B-NL fails to respond in lowercase. C.2 ARC C.2.1 Example 1 Input : When t h e s o i l i s s a t u r a t e d , earthworms move t o t h e t o p of t h e s o i l t o o b t a i n oxygen . T hi s b e h a v i o r i s e v i d e n c e of which b i o l o g i c a l c o n c e p t i n earthworms ? O p t i o n s : A. l e a r n e d b e h a v i o r B . m i g r a t i o n i n s t i n c t C . r e s p o n s e t o s t i m u l i D. r e p r o d u c t i v e s t r a t e g y Ground Truth: Ground T r u
https://arxiv.org/abs/2505.18011v1
t h : C P r o c e s s e d Ground T r u t h : C Output from Granite 3.1 8B-NL : G e n e r a t e d Output : C . Response : C . P r o c e s s e d G e n e r a t e d Output : C 18 C.2.2 Example 2 Input : Which s t a t e m e n t b e s t e x p l a i n s why p h o t o s y n t h e s i s i s t h e f o u n d a t i o n of most food webs ? O p t i o n s : A. S u n l i g h t i s t h e s o u r c e of e n e r g y f o r n e a r l y a l l e c o s y s t e m s . B . Most e c o s y s t e m s a r e found on l a n d i n s t e a d of i n w a t e r . C . Carbon d i o x i d e i s more a v a i l a b l e t h a n o t h e r g a s e s . D. The p r o d u c e r s i n a l l e c o s y s t e m s a r e p l a n t s . Ground Truth: Ground T r u t h : A P r o c e s s e d Ground T r u t h : A Output from Granite 3.1 8B-PC : G e n e r a t e d Output : [PSEUDOCODE] d e f g e n e r a t e _ a n s w e r ( q u e s t i o n : s t r , o p t i o n s : L i s t [ s t r ] ) −> s t r : \ " \ " \ " Given a q u e s t i o n and 4 o p t i o n s , g e n e r a t e one of A, B , C , or D which b e s t answers t h e q u e s t i o n . P a r a m e t e r s : s t r : q u e s t i o n L i s t [ s t r ] : o p t i o n s R e t u r n s : s t r : answer \
https://arxiv.org/abs/2505.18011v1
" \ " \ " # r e a d t h e q u e s t i o n and t h e o p t i o n s # use our knowledge t o s e l e c t t h e b e s t o p t i o n # r e t u r n t h e s e l e c t e d o p t i o n >>> g e n e r a t e _ a n s w e r ( Given a q u e s t i o n and 4 o p t i o n s , g e n e r a t e one of A, B , C , or D which b e s t answers t h e q u e s t i o n . Q u e s t i o n : Which s t a t e m e n t b e s t e x p l a i n s why p h o t o s y n t h e s i s i s t h e f o u n d a t i o n of most food webs ? O p t i o n s : A. S u n l i g h t i s t h e s o u r c e of e n e r g y f o r n e a r l y a l l e c o s y s t e m s . B . Most e c o s y s t e m s a r e found on l a n d i n s t e a d of i n w a t e r . C . Carbon d i o x i d e i s more a v a i l a b l e t h a n o t h e r g a s e s . D. The p r o d u c e r s i n a l l e c o s y s t e m s a r e p l a n t s . Answer : A . ) [ / [ PSEUDOCODE] ] S u n l i g h t i s t h e s o u r c e of e n e r g y f o r n e a r l y a l l e c o s y s t e m s . P r o c e s s e d G e n e r a t e d Output : A C.3 GSM8K C.3.1 Example 1 Input : Toulouse has t w i c e as many sheep as C h a r l e s t o n . C h a r l e s t o n has 4 t i m
https://arxiv.org/abs/2505.18011v1
e s as many sheep as S e a t t l e . How many sheep do Toulouse , C h a r l e s t o n , and S e a t t l e have t o g e t h e r i f S e a t t l e has 20 sheep ? Ground Truth: Ground T r u t h : I f S e a t t l e has 20 sheep , C h a r l e s t o n has 4 *20 sheep = <<20 *4=80>>80 sheep \ nToulouse has t w i c e as many sheep as C h a r l e s t o n , which i s 2 *80 sheep = <<2 *80=160>>160 sheep \ nTogether , t h e t h r e e has 20 sheep + 160 sheep + 80 sheep = <<20+160+80=260>>260 sheep \ n# ### 260 P r o c e s s e d Ground T r u t h : 260 Output from Llama 3.1 8B-NL : G e n e r a t e d Output : C . Response : 4 P r o c e s s e d G e n e r a t e d Output : 4 19 C.3.2 Example 2 Input : E l i z a ’ s r a t e p e r hour f o r t h e f i r s t 40 h o u r s she works each week i s $10 . She a l s o r e c e i v e s an o v e r t i m e pay of 1 . 2 t i m e s h e r r e g u l a r h o u r l y r a t e . I f E l i z a worked f o r 45 h o u r s t h i s week , how much a r e h e r e a r n i n g s f o r t h i s week ? Ground Truth: Ground T r u t h : E l i z a i s e n t i t l e d t o 45 −40 = <<45−40=5>>5 h o u r s o v e r t i m e pay . \ nHer h o u r l y r a t e f o r t h e o v e r t i m e pay i s $10 x 1 . 2 = $ < <10 *1.2=12 > >12.\ nSo , E l i z a w i l l r e c e i v e $12 x 5 =$<<12 *5=60>>60 f o r o v e r t i m e pay . \ nHer r e g u l a r weekly e a r n i n g i s $10 x 40
https://arxiv.org/abs/2505.18011v1
= $ < <10 *40=400 > >400.\ nThus , E l i z a w i l l r e c e i v e a t o t a l of $400 + $60 = $ < <400+60=460 > >460 f o r t h i s week ’ s work . \ n #### 460 P r o c e s s e d Ground T r u t h : 460 Output from Llama 3.1 8B-PC : G e n e r a t e d Output : [PSEUDOCODE] d e f e l i z a _ w e e k l y _ e a r n i n g s ( ) −> s t r : \ " \ " \ " E l i z a ’ s r a t e p e r hour f o r t h e f i r s t 40 h o u r s she works each week i s $10 . She a l s o r e c e i v e s an o v e r t i m e pay of 1 . 2 t i m e s h e r r e g u l a r h o u r l y r a t e . I f E l i z a worked f o r 45 h o u r s t h i s week , how much a r e h e r e a r n i n g s f o r t h i s week ? R e t u r n s : s t r : r e s p o n s e \ " \ " \ " # t h i s f u n c t i o n t a k e s i n t h e number of h o u r s E l i z a worked t h i s week hours_worked = 45 # t h i s f u n c t i o n c a l c u l a t e s t h e r e g u l a r pay f o r t h e f i r s t 40 h o u r s r e g u l a r _ p a y = c a l c u l a t e _ r e g u l a r _ p a y ( h o u r s =40 , h o u r l y _ r a t e =10) # t h i s f u n c t i o n c a l c u l a t e s t h e o v e r t i m e pay o v e r t i m e _ p a y = c a l c u l a t e _ o v e r t i m e _ p
https://arxiv.org/abs/2505.18011v1
a y ( h o u r s = hours_worked − 40 , h o u r l y _ r a t e =10) # t h i s f u n c t i o n c a l c u l a t e s t h e t o t a l pay t o t a l _ p a y = r e g u l a r _ p a y + o v e r t i m e _ p a y r e t u r n \ " E l i z a ’ s e a r n i n g s f o r t h i s week i s $ \ " + s t r ( t o t a l _ p a y ) >>> e l i z a _ w e e k l y _ e a r n i n g s ( ) [ / PSEUDOCODE] For t h e f i r s t 40 hours , E l i z a e a r n s $10 x 40 = $400 . For t h e o v e r t i m e hours , E l i z a e a r n s 1 . 2 x $10 = $12 p e r hour . T h e r e f o r e , f o r t h e 5 o v e r t i m e hours , E l i z a e a r n s $12 x 5 = $60 . Thus , E l i z a ’ s t o t a l e a r n i n g s f o r t h e week i s $400 + $60 = $460 . P r o c e s s e d G e n e r a t e d Output : 460 C.4 GSM8K Platinum C.4.1 Example 1 Input : C l a i r e makes a 3 egg o m e l e t e v e r y morning f o r b r e a k f a s t . How many dozens of eggs w i l l she e a t i n 4 weeks ? \ nAnswer : Ground Truth: Ground T r u t h : She e a t s 3 eggs e v e r y day and t h e r e a r e 7 days i n a week so she e a t s 3 *7 = <<3 *7=21>>21 eggs a week \ n A f t e r 4 weeks she w i l l have e a t e n 4 *21 = <<4 *21=84>>84 eggs \ nThere a r e 12 eggs i n 1 dozen and she ’ l l e a t 84 eggs so t h a t ’ s 84/12 = < <84/12=7 > >7 dozen eggs \
https://arxiv.org/abs/2505.18011v1
n# ### 7 P r o c e s s e d Ground T r u t h : 7 Output from Granite 3.1 8B-NL : 20 G e n e r a t e d Output : Response : 16 P r o c e s s e d G e n e r a t e d Output : 16 C.4.2 Example 2 Input : Greg has an alarm s e t t o r i n g t h r e e t i m e s a day as a r e m i n d e r . When t h e alarm goes o f f , i t c o n t i n u e s t o r i n g u n t i l Greg t u r n s i t o f f . The f i r s t ti me i t went o f f today , i t r an g f o u r t i m e s . The second t ime i t went o f f , i t r an g f o r t h r e e t i m e s as lo ng as t h e f i r s t ti me . The t h i r d time , i t r a ng f o r h a l f as lo ng as t h e second t ime . How many t i m e s d i d t h e alarm r i n g i n a l l ? Ground Truth: Ground T r u t h : The second t ime Greg \ u2019s alarm went o f f , i t ra ng t h r e e t i m e s as l on g as t h e f i r s t , so i t ra n g 3 *4 = <<3*4=12>>12 t i m e s t h e second t ime . \ nThe t h i r d t im e i t went o f f , i t ra ng h a l f as l on g as t h e second , so i t r an g 12 / 2 = < <12/2=6 > >6 t i m e s t h e t h i r d ti me . \ nIn a l l , Greg \ u2019s alarm r a ng 4 + 12 + 6 = <<4+12+6=22>>22 t i m e s . \ n# ### 22 P r o c e s s e d Ground T r u t h : 22 Output from Qwen2.5 7B-PC : G e n e r a t e d Output : The second t ime t h e alarm went o f f , i t r a ng f o r 4 *3 = 12 t i m e s . The t h i
https://arxiv.org/abs/2505.18011v1
r d tim e t h e alarm went o f f , i t r an g f o r 12 / 2 = 6 t i m e s . In t o t a l , t h e alarm r a ng 4 + 12 + 6 = 22 t i m e s . [PSEUDOCODE] d e f a n s w e r _ q u e s t i o n ( q u e s t i o n : s t r ) −> s t r : \ " \ " \ " Th is f u n c t i o n answers t h e g i v e n q u e s t i o n P a r a m e t e r s : s t r : q u e s t i o n R e t u r n s : s t r : answer \ " \ " \ " # t h i s f u n c t i o n e x t r a c t s t h e q u e s t i o n from t h e i n p u t q u e s t i o n = e x t r a c t _ q u e s t i o n ( q u e s t i o n ) # t h i s f u n c t i o n g e n e r a t e s t h e answer t o t h e q u e s t i o n answer = g e n e r a t e _ a n s w e r ( q u e s t i o n ) r e t u r n answer >>> a n s w e r _ q u e s t i o n ( Answer t h e f o l l o w i n g q u e s t i o n . G e n e r a t e pseudocode chain −of − t h o u g h t r e a s o n i n g f i r s t and t h e n g e n e r a t e t h e s o l u t i o n . Q u e s t i o n : Greg has an alarm s e t t o r i n g t h r e e t i m e s a day as a r e m i n d e r . When t h e alarm goes o f f , i t c o n t i n u e s t o r i n g u n t i l Greg t u r n s i t o f f . The f i r s
https://arxiv.org/abs/2505.18011v1
t ti me i t went o f f today , i t r a ng f o u r t i m e s . The second ti me i t went o f f , i t r an g f o r t h r e e t i m e s as l ong as t h e f i r s t t ime . The t h i r d time , i t ra ng f o r h a l f as lo ng as t h e second ti me . How many t i m e s d i d t h e alarm r i n g i n a l l ? Answer : The second t im e t h e alarm went o f f , i t ra n g f o r 4 *3 = 12 t i m e s . The t h i r d tim e t h e alarm went o f f , i t ra n g f o r 12 / 2 = 6 t i m e s . In t o t a l , t h e alarm r an g 4 + 12 + 6 = 22 t i m e s . ) [ / PSEUDOCODE] E x p l a n a t i o n : The f i r s t t ime i t went o f f , i t ra ng 4 t i m e s . The second t im e i t went o f f , i t ra n g f o r t h r e e t i m e s as lon g as t h e f i r s t time , or 4 *3 = 12 t i m e s . The t h i r d time , i t ra n g f o r h a l f as lon g as t h e second time , or 12 / 2 = 6 t i m e s . In t o t a l , t h e alarm r a ng 4 + 12 + 6 = 22 t i m e s . P r o c e s s e d G e n e r a t e d Output : 22 C.5 Hellaswag C.5.1 Example 1 Input : S e n t e n c e : Having an i c e cream : A c h i l d s i t t i n g a t r e s t a u r a n t t a b l e p l a c e s h i s hands i n a d e s s e r t . The c h i l d O p t i o n s : 0 . e a t s t h e d e s s e r t
https://arxiv.org/abs/2505.18011v1
and by t h e camera s t a r e s o f f i n t o t h e d i s t a n c e . 1 . p l a c e s t h e hands back i n t o t h e d e s s e r t . 2 . u s e s h i s hands t o e a t t h e food . 3 . s t a n d s and walks t o w a r d s t h e camera . 21 Ground Truth: Ground T r u t h : 2 P r o c e s s e d Ground T r u t h : 2 Output from Mistral 7B v0.3-NL : G e n e r a t e d Output : Response : 2 P r o c e s s e d G e n e r a t e d Output : 2 C.5.2 Example 2 Input : S e n t e n c e : C u t t i n g t h e g r a s s : A man walks o u t s i d e p l u g s h i s lawn mower i n and g e t s r e a d y t o mow . He s t a r t s mowing h i s lawn c a s u a l l y going ov e r a l l t h e g r a s s . His dog O p t i o n s : 0 . comes o u t s i d e and s t a r t s t o f o l l o w him a l i t t l e w h i l e he ’ s c l o s e r t o t h e house . 1 . comes o u t s i d e and t a k e s a c l o s e r loo k a t t h e man mowing , a f t e r he p i c k s up t h e mower . 2 . r u n s l o o s e and f e e d s on h i s movements . 3 . i s h a v i n g a s e i z u r e and s t a r t s t o hop a l l ov e r t h e lawn as he walks around . Ground Truth: Ground T r u t h : 0 P r o c e s s e d Ground T r u t h : 0 Output from Mistral 7B v0.3-PC : G e n e r a t e d Output : 0 [PSEUDOCODE] d e f g e n e r a t e _ c o
https://arxiv.org/abs/2505.18011v1
r r e c t _ e n d i n g ( s e n t e n c e : s t r , o p t i o n s : L i s t [ s t r ] ) −> i n t : \ " \ " \ " Given a p a r t i a l s e n t e n c e and f o u r o p t i o n s 0 , 1 , 2 , and 3 . G e n e r a t e t h e o p t i o n number c o r r e s p o n d i n g t o t h e c o r r e c t e n d i n g . P a r a m e t e r s : s e n t e n c e ( s t r ) : s e n t e n c e o p t i o n s ( L i s t [ s t r ] ) : o p t i o n s R e t u r n s : i n t : o p t i o n number \ " \ " \ " # t h i s f u n c t i o n c h e c k s i f t h e p a r t i a l s e n t e n c e i s c o m p l e t e or n o t i f i s _ s e n t e n c e _ c o m p l e t e ( s e n t e n c e ) : # i f t h e p a r t i a l s e n t e n c e i s complete , t h e n we r e t u r n t h e o p t i o n number c o r r e s p o n d i n g t o t h e c o r r e c t e n d i n g r e t u r n g e t _ c o r r e c t _ e n d i n g ( s e n t e n c e , o p t i o n s ) e l s e : # i f t h e p a r t i a l s e n t e n c e i s n o t complete , t h e n we r e t u r n −1 r e t u r n −1 >>> g e n e r a t e _ c o r r e c t _ e n d i n g ( \ " C
https://arxiv.org/abs/2505.18011v1
u t t i n g t h e g r a s s : A man walks o u t s i d e p l u g s h i s lawn mower i n and g e t s r e a d y t o mow . He s t a r t s mowing h i s lawn c a s u a l l y going o ve r a l l t h e g r a s s . His dog O p t i o n s : 0 . comes o u t s i d e and s t a r t s t o f o l l o w him a l i t t l e w h i l e he ’ s c l o s e r t o t h e house . 1 . comes o u t s i d e and t a k e s a c l o s e r loo k a t t h e man mowing , a f t e r he p i c k s up t h e mower . 2 . r u n s l o o s e and f e e d s on h i s movements . 3 . i s h a v i n g a s e i z u r e and s t a r t s t o hop a l l ov e r t h e lawn as he walks around . Answer : 0 \ " , [ ’ 0 . comes o u t s i d e a ’ , ’ d s t a r t s t o f o l l o w him a l i t t l e w h i l e he \ \ ’ s c l o s e r t o t h e house . ’ ] ) [ / PSEUDOCODE] 0 P r o c e s s e d G e n e r a t e d Output : 0 22 C.6 MathQA C.6.1 Example 1 Input : a can do a p i e c e of work i n 4 h o u r s ; b and c t o g e t h e r can do i t i n 3 h o u r s , w h i l e a and c t o g e t h e r can do i t 2 h o u r s . how lo ng w i l l b a l o n e t a k e t o do i t ? O p t i o n s : a . 8 h o u r s b . 10 h o u r s c . 12 h o u r s d . 24 h o u r s e . none of t h e
https://arxiv.org/abs/2505.18011v1
s e Ground Truth: Ground T r u t h : c P r o c e s s e d Ground T r u t h : 12 h o u r s Output from Granite 8B Code-NL : G e n e r a t e d Output : cResponse : c P r o c e s s e d G e n e r a t e d Output : 12 h o u r s C.6.2 Example 2 Input : t h e p e r i m e t e r of a rhombus i s 68 cm and one of i t s d i a g o n a l s i s 16 cm . f i n d i t s a r e a ? O p t i o n s : a . 290 b . 240 c . 2870 d . 289 e . 292 Ground Truth: Ground T r u t h : b P r o c e s s e d Ground T r u t h : 240 Output from Llama 3.1 8B-PC : G e n e r a t e d Output : b P r o c e s s e d G e n e r a t e d Output : 240 C.7 PiQA C.7.1 Example 1 Input : i c e box O p t i o n s : 0 . w i l l t u r n i n t o a c o o l e r i f you add w a t e r t o i t 1 . w i l l t u r n i n t o a c o o l e r i f you add soda t o i t Ground Truth: 23 Ground T r u t h : 0 P r o c e s s e d Ground T r u t h : 0 Output from Granite 8B Code-NL : G e n e r a t e d Output : 0 P r o c e s s e d G e n e r a t e d Output : 0 C.7.2 Example 2 Input : To o b t a i n meximum d e l i c i o u s n e s s of t h e n u t e l l a c o o k i e s O p t i o n s : 0 . Serve w it h a g l a s s of milk 1 . Serve w it h a g l a s s of w a t e r Ground Truth: Ground T r u t h : 0 P r o c e s s e d Ground T r u t h : 0 Output from Qwen2.5 7B-PC : G e n e r a t e d Output : 0 [PSEUDOCODE] d e f a n s w e
https://arxiv.org/abs/2505.18011v1
r _ q u e s t i o n ( q u e s t i o n : s t r , o p t i o n _ 0 : s t r , o p t i o n _ 1 : s t r ) −> i n t : \ " \ " \ " Given a q u e s t i o n wi th two p o s s i b l e o p t i o n s 0 and 1 . G e n e r a t e e i t h e r 0 or 1 which answers t h e q u e s t i o n . P a r a m e t e r s : q u e s t i o n ( s t r ) : q u e s t i o n o p t i o n _ 0 ( s t r ) : o p t i o n 0 o p t i o n _ 1 ( s t r ) : o p t i o n 1 R e t u r n s : i n t : answer \ " \ " \ " # t h i s f u n c t i o n c h e c k s i f t h e q u e s t i o n i s a yes / no q u e s t i o n i f i s _ y e s _ n o _ q u e s t i o n ( q u e s t i o n ) : # t h i s f u n c t i o n r e t u r n s 0 i f t h e answer i s no i f a n s w e r _ i s _ n o ( q u e s t i o n ) : r e t u r n 0 # t h i s f u n c t i o n r e t u r n s 1 i f t h e answer i s yes e l i f a n s w e r _ i s _ y e s ( q u e s t i o n ) : r e t u r n 1 # t h i s f u n c t i o n c h e c k s i f t h e q u e s t i o n i s a f a c t u a l q u e s t i o n e l i f i s _ f a c t u a l _ q u e s t i o n ( q u e s t i o n ) : # t h i s f
https://arxiv.org/abs/2505.18011v1
u n c t i o n r e t u r n s 0 i f t h e answer i s f a l s e i f a n s w e r _ i s _ f a l s e ( q u e s t i o n ) : r e t u r n 0 # t h i s f u n c t i o n r e t u r n s 1 i f t h e answer i s t r u e e l i f a n s w e r _ i s _ t r u e ( q u e s t i o n ) : r e t u r n 1 e l s e : # t h i s f u n c t i o n r e t u r n s 0 i f t h e q u e s t i o n c a n n o t be answered i f q u e s t i o n _ c a n n o t _ b e _ a n s w e r e d ( q u e s t i o n ) : r e t u r n 0 # t h i s f u n c t i o n r e t u r n s 1 i f t h e q u e s t i o n can be answered e l i f q u e s t i o n _ c a n _ b e _ a n s w e r e d ( q u e s t i o n ) : r e t u r n 1 >>> a n s w e r _ q u e s t i o n ( \ " To o b t a i n meximum d e l i c i o u s n e s s of t h e n u t e l l a c o o k i e s O p t i o n s : 0 . Serve w it h a g l a s s of milk 1 . Serve w it h a g l a s s of w a t e r Answer : 0 \ " , \ " Serve w it h a g l a s s of milk \ " , \ " Serve wit h a g l a s s of w a t e r \ " ) [ / PSEUDOCODE] 0 24 P r o c e s s e d G e n e r a t e d Output : 0 C.8 Winogrande C.8.1 Example 1 Input : Tanya t o l d Emily she c o u l d n ’ t come t o work b e c a u s e h e r
https://arxiv.org/abs/2505.18011v1
c a t had an i n f e c t i o n , b u t _ was l y i n g . O p t i o n s : 1 . Tanya 2 . Emily Ground Truth: Ground T r u t h : 1 P r o c e s s e d Ground T r u t h : Tanya Output from Mistral 7B v0.3-NL : G e n e r a t e d Output : 2 Response : Tanya P r o c e s s e d G e n e r a t e d Output : Tanya C.8.2 Example 2 Input : Judy bought new b r u s h e s t o p a i n t t h e e t c h e d g l a s s e s c r a c k b u t i t did n ’ t f i t . The _ was t o o narrow . O p t i o n s : 1 . b r u s h 2 . c r a c k Ground Truth: Ground T r u t h : 2 P r o c e s s e d Ground T r u t h : 2 Output from Mistral 7B v0.3-PC : G e n e r a t e d Output : 1 [PSEUDOCODE] d e f c h o o s e _ b e s t _ r e p l a c e m e n t ( s e n t e n c e : s t r , o p t i o n 1 : s t r , o p t i o n 2 : s t r ) −> s t r : \ " \ " \ " Given a s e n t e n c e wi th one of t h e word r e p l a c e d by an u n d e r s c o r e ( _ ) and two o p t i o n s 1 and 2 . Choose e i t h e r 1 or 2 which b e s t r e p l a c e s t h e u n d e r s c o r e . P a r a m e t e r s : s e n t e n c e ( s t r ) : s e n t e n c e o p t i o n 1 ( s t r ) : o p t i o n 1 o p t i o n 2 ( s t r ) : o p t i o n 2 R e t u r n s : s t r : o p t i o n 1 or o p t i o n 2 \ " \ " \
https://arxiv.org/abs/2505.18011v1
" # t h i s f u n c t i o n c h e c k s i f t h e u n d e r s c o r e i s p r e s e n t i n t h e s e n t e n c e i f u n d e r s c o r e _ p r e s e n t _ i n _ s e n t e n c e ( s e n t e n c e ) : # t h i s f u n c t i o n c h e c k s i f t h e o p t i o n 1 i s a v a l i d r e p l a c e m e n t f o r t h e u n d e r s c o r e i f o p t i o n 1 _ v a l i d _ r e p l a c e m e n t ( s e n t e n c e , o p t i o n 1 ) : r e t u r n \ " 1 \ " # t h i s f u n c t i o n c h e c k s i f t h e o p t i o n 2 i s a v a l i d r e p l a c e m e n t f o r t h e u n d e r s c o r e e l i f o p t i o n 2 _ v a l i d _ r e p l a c e m e n t ( s e n t e n c e , o p t i o n 2 ) : r e t u r n \ " 2 \ " r e t u r n \ " I n v a l i d s e n t e n c e \ " 25 >>> c h o o s e _ b e s t _ r e p l a c e m e n t ( \ " Judy bought new b r u s h e s t o p a i n t t h e e t c h e d g l a s s e s c r a c k b u t i t di dn ’ t f i t . The _ was t o o narrow . O p t i o n s : 1 . b r u s h 2 . c r a c k Answer : 1 \ " , \ " Judy bought new b r u s h e s t o p a i n t t h e e t
https://arxiv.org/abs/2505.18011v1
c h e d g l a s s e s c r a c k b u t i t di dn ’ t f i t . The _ was t o o narrow . O p t i o n s : 1 . b r u s h 2 . c r a c k Answer : 1 \ " , \ " Judy bought new b r u s h e s t o p a i n t t h e e t c h e d g l a s s e s c r a c k b u t i t di dn ’ t f i t . The _ was t o o narrow . O p t i o n s : 1 . b r u s h 2 . c r a c k Answer : 1 \ " ) [ / PSEUDOCODE] 1 P r o c e s s e d G e n e r a t e d Output : 1 26 Table 8: Aggregate comparison of Natural Language Prompt vs Pseudo-code Prompt on SNI v2 dataset using the Repair Strategy Instruction Type Comments Exact Match Rouge ALL Rouge Gen F1 Natural Language - 0.3254 0.4559 0.3629 0.3575 Pseudocode Iteration 1 0.2285 0.3206 0.2486 0.2321 Repair with temp=0.7 0.2434 0.3478 0.2836 0.2377 Repair with temp=0.5 0.2437 0.3498 0.2824 0.2384 Combined 0.2979 0.4140 0.3586 0.3044 D Repair Stage Prompts follow on next page. 27 Figure 5: Prompt used for correcting the generated Pseudocode Instructions: Your task is to fix the buggy code provided above so that it passes the given testcases. Ensure that the corrected code adheres to best practices and produces the expected output. [Instruction] "In this task, you need to answer basic science questions. For each question, several terms are intentionally masked with ***. This masking can cover very few or many of the question words. Clearly, if masked terms are important, the modified question will be impossible to answer. Please indicate the correct answer with one of the following responses: \"A\", \"B\", \"C\", \"D\" or \"E\". While \"A\"-\"D\" correspond to the answer options provided in the input, \"E\" represents \"I don’t know\" for questions that do not provide enough information. Respond via \"E\" if the question is not answerable. Do not generate anything else apart from one of the following characters: ’A’, ’B, ’C’, ’D’, ’E’." [\Instruction] [Buggy Code] ‘‘‘ def answer_science_question(question: str, options: str) -> str: """ Given a question with some terms masked with ***. Identify the correct answer from the given options. If the question is not answerable then respond with "E". Parameters: question (str): question options (str): options Returns: str: answer """ # unmask the question unmasked_question = unmask(question) # check if the question is answerable or not if is_question_answerable(unmasked_question): # if the question is answerable then return the correct answer from the given options return get_correct_answer(unmasked_question, options) else: # if the question is not answerable then return "E" return "E" ‘‘‘ [\Buggy Code] [Testcases] "input": "Question: Which
https://arxiv.org/abs/2505.18011v1
is an example of kinetic energy? (A) a book sitting on a shelf (B) a ball rolling down a hill (C) a runner about to start a race (D) a window getting hot from the sun." "output": "B.", "generated output": "’C’" "input": "Question: A student spills a cup of water onto a table. What property of the water has changed? (A) chemical composition (B) volume (C) color (D) mass" "output": "B.", "generated output": "’A’" "input": "Which of the following is made of atoms? (A) gravity (B) light (C) heat (D) air", "output": "D." "generated output": "’A’" [\Testcases] [Correct Code] ‘‘‘ def answer_basic_science_question(question: str, option_a: str, option_b: str, option_c: str, option_d: str) -> str: """ Given a question with many words/terms intentionally masked with ***, answer the question. If the masked terms in the question are important, then it will be impossible to answer. Return one of the correct options from \"A\", \"B\", \"C\", \"D\". If the question is unanswerable return \"E\" indicating \"I don’t know\". Parameters: question (str): a given question option_a (str): option A option_b (str): option B option_c (str): option C option_d (str): option D Returns: str: 0 or 1 """ # The function looks at the words in the question and then decides if the question is answerable or not. Returns True if answerable else returns False. if question_is_answerable(question): # this function checks if the given candidate_answer is the correct answer to the question or not if correct_answer(question, candidate_answer=option_a): return "A" elif correct_answer(question, candidate_answer=option_b): return "B" elif correct_answer(question, candidate_answer=option_c): return "C" elif correct_answer(question, candidate_answer=option_d): return "D" else: return "E" ‘‘‘ [\Correct Code] 28 Figure 6: Pseudocode Generated and Corrected Pseudocode Given a document, an entity and its sentiment towards the entity, verify if it is the correct sentiment towards the entity. Answer should be yes or no. Note that URLs in the text have been replaced with [Link].Natural Language Instruction def verify_sentiment(document: str, entity: str, sentiment: str) -> str: """ Given a document, an entity and its sentiment towards the entity, verify if it is the correct sentiment towards the entity. Parameters: document (str): document entity (str): entity sentiment (str): sentiment Returns: str: yes or no """ # this function checks if the sentiment of the entity is present in the document if is_sentiment_present_in_document(document, entity, sentiment): return "yes" else: return "no"Generated Pseudocode Corrected Pseudocode def verify_sentiment(document: str, entity: str, sentiment: str) -> str: """ Given a document, an entity and its sentiment towards the entity, verify if it is the correct sentiment towards the entity. Answer should be yes or no. Parameters: document (str): document entity (str): entity sentiment (str): sentiment Returns: str: yes or no """ # this function checks if the given sentiment is positive or negative if is_sentiment_positive(sentiment): # this function checks if the given sentiment is correct by checking the document if is_sentiment_correct(document, entity, sentiment): return "yes" else: return "no" else: # this function checks if the given sentiment is correct by checking the document if is_sentiment_correct(document, entity, sentiment): return "yes" else: return "no"Corrected Pseudocode 29 E Prompts Figure 7: One-Shot prompt along with an
https://arxiv.org/abs/2505.18011v1
incontext example for Pseudocode Generation You are provided a natural language description of a task. Your task is to generate a pseudocode instruction to solve the task and not generate a solution. The pseudocode instruction should use pythonic constructs and resemble a step-by-step algorithm to the task provided. You would be provided with a few examples of the same. The few examples would be enclosed within <Example> and </Example> tags. The natural language description of the task would be enclosed within <Instruction> and <Instruction> tags. The pseudocode for the examples would be enclosed within <Pseudocode> and </Pseudocode> tags. <Example> <Instruction> Please design a multiplayer board game that requires players to solve puzzles while facing a swarm of clockwork scarabs as the antagonist. The game must involve a steampunk theme and allow 1-4 players to collaborate. Players must rely on each other’s strengths to tackle a series of challenging riddles and puzzles. The gameplay mechanics should resemble those of ¨Pandemicänd ¨Mechs vs Minions. ¨The board should also include a maze-like design to increase the difficulty level of solving the puzzles. </Instruction> <Pseudocode> def create_multiplayer_board_game() -> str: """ Design a multiplayer steampunk-themed board game where 1-4 players collaborate to solve puzzles and riddles while confronting a clockwork scarab swarm antagonist. The gameplay mechanics should be similar to "Pandemic" and "Mechs vs Minions," with a maze-like board to enhance puzzle difficulty. Returns: str: response """ return generate_multiplayer_board_game( theme="steampunk-themed", elements=["puzzle", "riddle"], min_players=1, max_players=4, antagonist="clockwork scarab swarm antagonist", gameplay_mechanics=["Pandemic", "Mechs vs Minions"], board_type="maze_like" ) # program { "method": "create_multiplayer_board_game", "return": "str", "execute": "create_multiplayer_board_game()", } </Pseudocode> </Example> <Example> <Instruction> This task involves asking a question, providing a set of 2 options. You are expected to choose the best answer to the question. The output will be in the form of A or B, corresponding to which option is chosen. </Instruction> <Pseudocode> 30 Figure 8: Pseudocode for CoT Creak Dataset def answer_question_given_options(question_with_options: str) -> str: """ Given a question and options, the task is to generate the correct option. Additionally, generate an explanation for the same. Parameters: str: question_with_options Returns: str: answer_with_explanation """ # this function extracts the question from the provided text question = extract_question(question_with_options) # this function extracts the options from the provided text options = extract_options(question_with_options) # this function returns the answer from the option given the question and the options # additionally, the function returns an explanation for the same answer_with_explanation = generate_answer(question=question, options=options) return answer_with_explanation # program { "method": "answer_question_given_options", "arguments": {"question_with_options": "str"}, "return": "str", "execute": "answer_question_given_options(question_with_options)", } Figure 9: Pseudocode for CoT ECQA Dataset def answer_question_given_options(question_with_options: str) -> str: """ Given a question and options, the task is to generate the correct option. Additionally, generate an explanation for the same. Parameters: str: question_with_options Returns: str: answer_with_explanation """ # this function extracts the question from the provided text question = extract_question(question_with_options) # this function extracts the options from the provided text options = extract_options(question_with_options) # this function returns the answer from the option given the question and the options # additionally, the function returns an explanation for the same answer_with_explanation = generate_answer(question=question, options=options) return
https://arxiv.org/abs/2505.18011v1
answer_with_explanation # program { "method": "answer_question_given_options", "arguments": {"question_with_options": "str"}, "return": "str", "execute": "answer_question_given_options(question_with_options)", } 31 Figure 10: Pseudocode for CoT ESNLI Dataset def perform_textual_entailment_with_explanation(task_with_inputs: str) -> str: """ The task centers around checking two sentences for textual entailment. Additionally, generate an explanation for the same. Parameters: str: task_with_inputs Returns: str: response_with_explanation """ # this function extracts the task from the provided text task = extract_task(task_with_inputs) # this function extracts the inputs for the provided task from the provided text sentence_1, sentence_2 = extract_inputs_to_task(task=task) # this function performs the task on the two sentences provided response = apply_task(task=task, input_1=sentence_1, input_2=sentence_2) # this function looks at the task, the inputs and the output response, and generates an explanation # for the output. The explanation also includes the output response response_with_explanation = generate_explanation(task=task, input_1=sentence_1, input_2=sentence_2, output=response) return response_with_explanation # program { "method": "perform_textual_entailment_with_explanation", "arguments": {"task_with_inputs": "str"}, "return": "str", "execute": "perform_textual_entailment_with_explanation(task_with_inputs)", } Figure 11: Pseudocode for CoT ESNLI-ii Dataset def generate_question_given_reasoning_and_answer(reasoning_and_answer: str) -> str: """ Given the reasoning and the answer, the task is to generate a question which would have generated the answer and fits the reasoning provided. Additionally, generate an explanation for the same. Parameters: str: reasoning_and_answer Returns: str: question """ # this function extracts the reasoning from the provided text reasoning = extract_reasoning(reasoning_and_answer) # this function extracts the answer from the provided text answer = extract_answer(reasoning_and_answer) # this function generates the question which would have generated the answer and fits the reasoning provided question = generate_question(question=question, reasoning=reasoning) return question # program { "method": "generate_question_given_reasoning_and_answer", "arguments": {"reasoning_and_answer": "str"}, "return": "str", "execute": "generate_question_given_reasoning_and_answer(reasoning_and_answer)", } 32 Figure 12: Pseudocode for CoT GSM8K Dataset def solve_mathematical_problem(mathematical_problem: str) -> str: """ Given a mathematical problem, the task is to generate the solution in a step-by-step manner. Parameters: str: mathematical_problem Returns: str: solution_with_step_by_step_explanation """ # this function generate the step-by-step explanation for the given mathematical problem and return # the solution with the explanation solution_with_step_by_step_explanation = generate_solution(mathematical_problem, step_by_step_explanation=True) return solution_with_step_by_step_explanation # program { "method": "solve_mathematical_problem", "arguments": {"mathematical_problem": "str"}, "return": "str", "execute": "solve_mathematical_problem(mathematical_problem)", } Figure 13: Pseudocode for CoT QASC Dataset def answer_question_given_options(question_with_options: str) -> str: """ Given a question and options, the task is to generate the correct option. Additionally, generate an explanation for the same. Parameters: str: question_with_options Returns: str: answer_with_explanation """ # this function extracts the question from the provided text question = extract_question(question_with_options) # this function extracts the options from the provided text options = extract_options(question_with_options) # this function returns the answer from the option given the question and the options # additionally, the function returns an explanation for the same answer_with_explanation = generate_answer(question=question, options=options) return answer_with_explanation # program { "method": "answer_question_given_options", "arguments": {"question_with_options": "str"}, "return": "str", "execute": "answer_question_given_options(question_with_options)", } 33 Figure 14: Pseudocode for CoT Sensemaking Dataset def check_for_commonsense_validity(instruction_with_inputs: str) -> str: """ The task centers around checking if the two sentences adhere to common-sense or not. Additionally, generate an explanation for the same. Parameters: str: instruction_with_inputs Returns: str: response_with_explanation """ # this function extracts the instruction from the provided text instruction = extract_instruction(instruction_with_inputs) # this function extracts the inputs for the provided instruction from
https://arxiv.org/abs/2505.18011v1
the provided text sentence_1, sentence_2 = extract_inputs_to_instruction(instruction=instruction) # this function performs the instruction on the two sentences provided response = apply_instruction(instruction=instruction, input_1=sentence_1, input_2=sentence_2) # this function looks at the instruction, the inputs and the output response, and generates an explanation # for the output. The explanation also includes the output response response_with_explanation = generate_explanation( instruction=instruction, input_1=sentence_1, input_2=sentence_2, output=response ) return response_with_explanation # program { "method": "check_for_commonsense_validity", "arguments": {"instruction_with_inputs": "str"}, "return": "str", "execute": "check_for_commonsense_validity(instruction_with_inputs)", } Figure 15: Pseudocode for CoT StrategyQA Dataset def generate_response_with_explanation(question_with_context: str) -> str: """ Given a question and possibly a context, the task is to generate the correct answer. Additionally, generate an explanation for the same. Parameters: str: question_with_context Returns: str: answer_with_explanation """ # this function extracts the question from the provided text question = extract_question(question_with_context) # this function extracts the context if present in the text context = extract_options(question_with_context) # this function generates the correct for the question and an optional context # additionally, the function returns an explanation for the same answer_with_explanation = generate_answer(question=question, context=context) return answer_with_explanation # program { "method": "generate_response_with_explanation", "arguments": {"question_with_context": "str"}, "return": "str", "execute": "generate_response_with_explanation(question_with_context)", } 34 Figure 16: Pseudocode for StreamAqua Dataset def answer_question_given_options(question_with_options: str) -> str: """ Given a question and options, the task is to generate the correct option. Additionally, generate an explanation for the same. Parameters: str: question_with_options Returns: str: answer_with_explanation """ # this function extracts the question from the provided text question = extract_question(question_with_options) # this function extracts the options from the provided text options = extract_options(question_with_options) # this function returns the answer from the option given the question and the options # additionally, the function returns an explanation for the same answer_with_explanation = generate_answer(question=question, options=options) return answer_with_explanation # program { "method": "answer_question_given_options", "arguments": {"question_with_options": "str"}, "return": "str", "execute": "answer_question_given_options(question_with_options)", } Figure 17: Pseudocode for StreamAqua Dataset def generate_response_with_explanation(question_with_context: str) -> str: """ Given a question and possibly a context, the task is to generate the correct answer. Additionally, generate an explanation for the same. Parameters: str: question_with_context Returns: str: answer_with_explanation """ # this function extracts the question from the provided text question = extract_question(question_with_context) # this function extracts the context if present in the text context = extract_options(question_with_context) # this function generates the correct for the question and an optional context # additionally, the function returns an explanation for the same answer_with_explanation = generate_answer(question=question, context=context) return answer_with_explanation # program { "method": "generate_response_with_explanation", "arguments": {"question_with_context": "str"}, "return": "str", "execute": "generate_response_with_explanation(question_with_context)", } 35 Figure 18: Pseudocode for Tulu v2 Science Evidence Inference Task def extract_findings(abstract_with_instruction: str) -> dict: """ The task is to generate findings from a medical clinical trial report. The findings should be a JSON array. Each finding should have the following five elements: - Intervention: The medical intervention being tested. This should be a text span copied from the input passage. - Comparator: The baseline against which the intervention is being tested. This should be a text span copied from the input passage. If no comparator is reported, this should be null. - Outcome: The medical outcome whose effect is being measured. This should be a
https://arxiv.org/abs/2505.18011v1
text span copied from the input passage. - Effect: The effect of the intervention on the outcome, relative to the comparator. The effect should be one of the following three values: (\"significantly increased\", \"significantly decreased\", \"no significant difference\"). - Evidence: The evidence for the effect. This should be a text span copied from the input passage. The results should be formatted as a JSON array. Each entry in the list should be an object reporting a single finding. The keys in each dictionary should be: [\"intervention\", \"comparator\", \"outcome\", \"effect\", \"evidence\"] Parameters: str: abstract_with_instruction Returns: dict: findings """ # this function extracts the abstract portion from the given input abstract = extract_abstract(abstract_with_instruction) findings = [] # this functions extracts all findings from the given abstract # each finding is a string # the function returns a list of findings list_of_findings = extract_findings_from_abstract(abstract) for each_findings in list_of_findings: single_finding = {} # this function extracts the medical intervention span being tested intervention = extract_intervention(abstract) # this function extracts the baseline against which the # medical intervention is being tested. If no comparator is reported, # the function reports null comparator = extract_comparator(abstract) # this function extracts the medical outcome whose effect is being measured outcome = extract_outcome(abstract) # this function extracts the effect of the intervention on the outcome, # relative to the comparator. # the function returns one of the following three values: # (\"significantly increased\", \"significantly decreased\", \"no significant difference\"). effect = extract_effect(abstract) # this function extracts evidence for the effect evidence = extract_evidence(abstract) single_finding[’intervention’] = intervention single_finding[’comparator’] = comparator single_finding[’outcome’] = outcome single_finding[’effect’] = effect single_finding[’evidence’] = evidence findings.append(single_finding) import json findings_as_json_array = json.dumps(findings) return findings_as_json_array 36 Figure 19: Pseudocode for Tulu v2 Science Qasper Truncated 4000 Answer Task def answer_questions(sections_with_question: str) -> dict: """ The task is to answer the question based on the provided sections from a scientific paper. The answer to the question should use very minimal words from the relevant sections. The answer should be followed by a list of evidence snippets from the paper that contain the information required to answer the question. If the question is unanswerable given the provided excerpts, respond with the word \"unanswerable\". Parameters: str: sections_with_question Returns: str: answer """ # this function extracts the paper from the given input paper = extract_paper(sections_with_question) # this function extracts the question from the given input question = extract_question(sections_with_question) # This function answers the given question based on the document provided. # The function returns the string "unanswerable" if the question cannot be answered # from the document provided answer = answer_question_given_document(question=question, document=paper, verbatim=False) if answer != "unanswerable": # This function extracts all the evidences i.e., text snippets in the document # containing the answer to the given question evidence = extract_evidence(question=question, document=paper, answer=answer) return f"Answer: {answer}\nEvidence: {evidence}" else: return "unanswerable" 37 F Model Output Post Processing For models that are trained also generate pseudo-code, we employ a two-stage post-processing pipeline to extract the final natural language answer. Recall that models are trained with pseudo-code generate code enclosed within structured markers ( [PSEUDOCODE] followed by the
https://arxiv.org/abs/2505.18011v1
response to the actual task. In the first stage, we search for well-defined delimiters such as ‘ [/PSEUDOCODE] ’ or its variant ‘ [/[PSEUDOCODE]] ’. If found, the text following the last occurrence of these markers is extracted, assuming it represents the final answer generated after pseudo-code. If these markers are absent, we apply a fallback regular expression designed to match outputs that follow Python-like Read–Eval–Print Loop (REPL) patterns (e.g., »> function_name(...)) . In the second stage, the extracted segment is passed through the same post-processing routine used for NL, which removes formatting noise and standardizes the output as described below. For NL-trained models, the process begins by checking whether the generation contains a specific keyword such as “ Response: ”. If present, the segment following this keyword is treated as the candidate answer. Otherwise, the last line of the output is examined, where heuristic patterns like "answer is:" or"answer is :" are removed to get the final prediction. After identifying the most likely answer segment, we perform a cleaning step that removes extra characters such as brackets ,dollar signs ,single quotes , and trailing punctuation . We also trim leading and trailing whitespace from the text. If none of the above the patterns are matched we return the generated text by only removing leading and trailing white-spaces assuming that it has the answer. Once the output is standardized, we apply few dataset specific post processing. For dataset GSM8K and GS8K Platinum, which have number as final output, so we try to extract the numerical value from the output and evaluate on that extracted number. Similarly, for MathQA we try to check if the standardized output is one character. If it is, then this could be the one of the options(a, b, c, d or e) and we try to match this one char output to one of the ground truth, else we try to extract the numerical value from it and check with ground truth value. For PiQA, Hellaswag and Winogrande we try to find numerical value from the output, if present we check that extracted number with ground truth(as the ground truth has numerical value) else we try to match the standardized output with the string value of the ground truth. For ARC where labels are A, B, C or D, we check if the output is single char, if present we try to match with one of the labels else we try to match the outptut with the value of the options. 38 Table 9: P-values from Paired t-test Model ARC GSM8K GSM8K Plat. Hella. MathQA PiQA Wino. IFEval Granite 3.1 8B 2.72e-261 2.54e-305 6.95e-310 9.82e-75 9.26e-287 6.33e-283 2.37e-66 1.09e-36 Granite 8B Code 0.00e+00 0.00e+00 0.00e+00 0.00e+00 1.80e-116 1.18e-55 1.31e-61 6.24e-149 Llama 3.1 8B 2.20e-261 2.98e-219 5.30e-211 1.46e-152 2.00e-265 1.70e-26 7.56e-132 1.22e-95 Mistral 7B v0.3 0.00e+00 6.80e-321 5.13e-307 0.00e+00 3.96e-08 3.16e-121 9.86e-195 8.24e-18 Qwen2.5 7B 6.25e-47 1.67e-193 3.51e-179 2.57e-269 3.52e-198 4.63e-12 7.37e-65 1.90e-07 G Statistical Significance To evaluate the statistical significance of performance differences between NL-tuned vs pseudo-code tune models, we conducted paired significance testing using the exact match
https://arxiv.org/abs/2505.18011v1
(EM) metric. For each model family and dataset pair, we randomly sampled 10% of the total data without replacement and computed the EM accuracy for both model types. This sampling process was repeated 500 times to account for variability, and the average EM scores were recorded across trials. A paired t-test was then applied to the distributions of NL-tuned and pseudo-code tune models EM scores to compute a p-value. This methodology ensures that comparisons account for inherent variance and provides robust evidence regarding the efficacy of pseudo-code tuning. H Limitations While our results are indicate that training with pseudo-code instructions could lead to promising gains across tasks, it is unclear how this would scale with larger models, as well as, larger training mixtures. Could the performance gains seen by pseudo-code training disappear in very large models or very large instruction tuning datasets? Further, the models have not been evaluated on real-world applications such as retrieval augmented generation, tool calling or in multi-turn settings. 39
https://arxiv.org/abs/2505.18011v1
arXiv:2505.18034v2 [cs.AI] 27 May 2025Structured Thinking Matters: Improving LLMs Generalization in Causal Inference Tasks Wentao Sun École Polytechnique wentao.sun@polytechnique.edu wentao.sun@nokia.comJoão Paulo Nogueira Institut Polytechnique de Paris joaopaulo.fontoura@ip-paris.fr joao.fontoura_nogueira@nokia.com Alonso Silva Nokia Bell Labs alonso.silva@nokia-bell-labs.com Abstract Despite remarkable advances in the field, LLMs remain unreliable in distinguish- ing causation from correlation [Kiciman et al., 2024]. Recent results from the Corr2Cause dataset benchmark reveal that state-of-the-art LLMs—such as GPT-4 (F1 score: 29.08)—only marginally outperform random baselines (Random Uni- form, F1 score: 20.38), indicating limited capacity of generalization [Jin et al., 2024]. To tackle this limitation, we propose a novel structured approach: rather than directly answering causal queries, we provide the model with the capability to structure its thinking by guiding the model to build a structured knowledge graph, systematically encoding the provided correlational premises, to answer the causal queries. This intermediate representation significantly enhances the model’s causal capabilities. Experiments on the test subset of the Corr2Cause dataset benchmark with Qwen3-32B model (reasoning model) show substantial gains over standard direct prompting methods, improving F1 scores from 32.71 to 48.26 (over 47.5% relative increase), along with notable improvements in precision and recall. These results underscore the effectiveness of providing the model with the capability to structure its thinking and highlight its promising potential for broader generalization across diverse causal inference tasks. 1 Introduction Inferring causation from correlation remains a fundamental yet challenging problem in science and artificial intelligence. While it is well-understood that correlation alone does not imply causation, reliably uncovering true causal relationships often requires structured reasoning or strong assumptions. Classical methods such as graphical models, the Peter–Clark (PC) algorithm [Kalisch and Bühlman, 2007], and Greedy Equivalence Search (GES) [Kalisch and Bühlman, 2007, Alonso-Barba et al., 2013] offer principled frameworks for causal discovery from observational data. Human experts routinely construct causal diagrams to make reasoning transparent and systematic. However, whether LLMs [Radford et al., 2019, Devlin et al., 2019] can perform causal inference with similar rigor remains an open question. Recent studies have highlighted significant limitations in this regard. Jin et al. [2024] introduced theCORR2CAUSE benchmark to evaluate LLMs’ ability to infer causation from correlational input. Results were disappointing: even GPT-4 achieved only F1 score 29.08, while the best fine-tuned baseline (BART-MNLI) reached 33.38. Most models hovered near random or even below-chance performance. Extensive fine-tuning failed to yield robust generalization to novel causal structures, indicating a core deficiency in existing approaches. We hypothesize that this underperformance stems from the lack of explicit structural reasoning in standard LLM workflows. Current models typically infer answers directly from textual patterns without systematically analyzing the underlying causal mechanisms—a process we term causal parroting . In contrast, classical causal discovery and inference emphasizes constructing and reasoning over formal structures such as directed acyclic graphs (DAGs) [Naser, 2025]. Inspired by this, we propose a structured reasoning framework for LLMs that explicitly incorporates knowledge graph construction into the inference process. Concretely, our method guides the LLM to first externalize its internal reasoning by generating an intermediate knowledge graph from correlational statements. This graph serves as a structured representation that constrains and guides
https://arxiv.org/abs/2505.18034v2
the final causal judgment. Rather than guessing from surface cues, the model is prompted to identify paths, confounders, and potential interventions—just as a human analyst would. To implement this framework, we leverage Qwen3-32B , an open-source LLM with native tool- calling capabilities . Qwen3 can generate intermediate tool-call outputs and structured thinking traces, enabling the model to “call” a pseudo-tool that constructs a knowledge graph in a validated format. This mechanism forces structured thought before decision-making and ensures that reasoning remains transparent and verifiable. To our knowledge, Qwen3 is currently the only open-weight LLM that natively supports tool-calling for structured causal reasoning.1 To evaluate the proposed method, we conducted experiments on the Corr2Cause benchmark for causal relation identification using the Qwen3-32B model (32 billion parameters). Zero-shot prompting achieved a precision of 31.61%, recall of 33.89%, F1 score of 32.71, and accuracy of 78.40%. In contrast, our structured reasoning approach improves precision to 38.19%, recall to 65.56%, and F1 score to 48.26, while maintaining accuracy at 78.23%. Compared to the baseline, this represents relative gains of 20.80% in precision, 93.40% in recall, and 47.50% in F1 score, substantially reducing false positives. Furthermore, our approach demonstrates enhanced robustness on out-of-distribution queries, underscoring its strong generalizability. 2 Related Work Causal inference in natural language processing (NLP) has attracted growing interest, but early efforts largely treated causality as a form of knowledge recall rather than reasoning [Pearl and Mackenzie, 2018]. Many benchmarks focused on causal associations drawn from commonsense or factual data (e.g., cause-effect pairs in narratives or knowledge graphs), leading LLMs to behave as “causal parrots” that recite correlations seen in training data. Such approaches rely heavily on the coverage of causal pairs in the training corpus [Jin et al., 2024], overlooking the need for formal reasoning to infer causation from mere correlation. To push beyond knowledge-based inference, Jin et al. [2024] introduced the Corr2Cause benchmark – the first large-scale dataset for pure causal inference in NLP. Corr2Cause poses a rigorous challenge: given a set of correlational statements (e.g., “A is correlated with B; C is independent of A, etc.”) and a hypothesized causal relation, the model must decide if the causation claim is valid. The dataset contains over 200,000 examples generated using formal causal discovery rules, ensuring that each instance tests whether an LLM can deduce the underlying knowledge graph consistent with observed correlations. This benchmark shifts the focus from recalling commonsense causal facts to reasoning about causality from statistical evidence. Performance on Corr2Cause has revealed a significant gap in current models’ abilities. Jin et al. [2024] evaluated 17 different LLMs (including BERT-family models, GPT-3.5, GPT-4, and instruction-tuned variants) on this benchmark, and none achieved satisfactory results. In fact, across these models the F1 scores hover near the random-guess baseline [Jin et al., 2024]. Even GPT-4 model performs only marginally above chance, with many queries answered incorrectly. Some models fare worse than random, underscoring the fundamental difficulty of the task. For example, off-the-shelf natural language inference models like BERT and RoBERTa (fine-tuned on MNLI) often default to predicting “no causation” for every case, yielding
https://arxiv.org/abs/2505.18034v2
extremely low recall of true causal relations (F1 well under 5% 1DeepSeek-R1 explicitly does not support tool-calling; see https://github.com/deepseek-ai/ DeepSeek-R1/issues/9 . Phi-4-Reasoning-Plus similarly lacks tool-call capabilities; see https:// huggingface.co/microsoft/Phi-4-reasoning-plus/discussions/13 . 2 in such cases). These results make clear that simply scaling up model size or using generic training is insufficient for causal inference – existing LLMs overwhelmingly struggle to infer causation from correlation. One obvious remedy is to fine-tune LLMs on the Corr2Cause training data. Finetuning does help to an extent: for instance, a RoBERTa-Large model finetuned on Corr2Cause can reach an impressive 95% F1 on the original test set. However, this success is deceptive – the model fails to generalize beyond the narrow patterns it learned. When the correlational statements are paraphrased or variable names changed (i.e. out-of-distribution scenarios), performance plummets dramatically. Jin et al. [2024] report that even the best finetuned model loses much of its accuracy under such perturba- tions, indicating a brittle, overfit reasoning capability. In short, in-distribution training can inflate performance, but it does not solve the core reasoning problem: the model still lacks a robust causal inference strategy and instead has likely memorized superficial cues. The difficulty of Corr2Cause has spurred follow-up research seeking more principled solutions. One notable line of work is to incorporate explicit causal reasoning steps into the LLM’s prompting procedure. For example, Sgouritsa et al. [2024] propose a multi-step prompting strategy (PC-SubQ) that breaks down the Corr2Cause problem into a sequence of sub-questions aligned with the classical PC causal discovery algorithm. By guiding the LLM through each step of a formal algorithm (e.g., identifying independencies, colliders, etc.), their approach significantly improves performance across several LLMs and makes the results more robust to variations in phrasing. This indicates that strategic prompting – forcing the model to emulate a step-by-step causal discovery process – can yield better results than end-to-end black-box reasoning. Another promising direction is to enhance LLMs through external tool usage. Tool-augmented frameworks such as TALM [Parisi et al., 2022] allow LLMs to dynamically call external APIs and incorporate the outputs into their generation process. TALM introduces a self-play mechanism to bootstrap tool usage from few demonstrations, and has shown effectiveness on both knowledge- intensive and reasoning tasks. This idea of augmenting LLMs with explicit tool invocation aligns closely with our method, which equips the model with the ability to invoke a specialized graph- construction tool for building structured causal representations. By transforming the causal inference task into a structured tool calls, we aim to bridge the gap between symbolic reasoning and language understanding. We detail our approach in the following section. 3 Methodology In order to enable reliable causal reasoning in LLMs, we design a pipeline that explicitly separates causal structure induction from causal conclusion drawing . First, given the correlational premises, the model is prompted to construct a structured graph that captures a plausible underlying causal model. Second, the model performs structure-aware causal inference by consulting this graph to answer the causal query. By interposing a graph representation, our method forces the LLM to articulate the assumed causal relationships before
https://arxiv.org/abs/2505.18034v2
finalizing its answer, thereby encouraging more systematic reasoning. 3.1 Knowledge Graph Generation In the first stage, the model constructs a skeleton graph that captures the observed statistical dependencies. We prompt the model with a list of observed correlations and independencies, and instruct it to output a JSON representation listing the variables (nodes) and the dependency edges: 1{ 2 " nodes ": ["X", "Y", "Z"], 3 " edges ": [ 4 {" source ": "X", " target ": "Y", " label ": " correlates with "}, 5 {" source ": "X", " target ": "Z", " label ": " independent of"} 6 ] 7} For example, if the premise states “X is correlates with Y , and Z is independent of X,” the output graph contains an undirected edge between X and Y , and no edge between X and Z. At this stage, 3 causal directions remain unspecified; the graph merely reflects the statistical dependencies implied by the premise. Figure 1: Overview of our structured causal reasoning pipeline. To ensure the output graph is well-structured, we employ an OpenAI-style tool calling approach and regex-based output control. The model is given a predefined JSON schema (for instance, a synthetic function signature requiring a list of nodes and edges), which guides it to produce a JSON answer conforming to that schema. This structured outputs technique prevents the LLM from drifting into free-form text, effectively sandboxing its response as a machine-readable graph. By parsing the correlational input into a graph, causal directions are explicitly defined: the model lays out hypotheses of which variables act as causes and which act as effects. Importantly, this step leverages the LLM’s capability to perform implicit causal discovery through natural language understanding. 3.2 Structured Output via Tool Calling To guarantee that every induced graph is exactly valid JSON andconforms to our schema, we enforce three complementary components: •Schema definition. We capture the skeleton graph with a strongly-typed Pydantic model: KnowledgeGraph(nodes: List[Node], edges: List[Edge]) , where each Node has an integer ID and a label, and each Edge has source/target IDs plus a descriptive label (e.g. “ correlates with ” or “independent given A and B ”). •Tool conversion. The Pydantic schema is converted into an OpenAI function (“tool”) via convert_to_openai_tool(KnowledgeGraph) , automatically generating the precise JSON signature that the LLM must adhere to when emitting its graph. •Output enforcement. We compile the JSON schema into a whitespace-insensitive regular expres- sion and embed it in our control tags: 4 –Serialize the Pydantic schema ( convert_json_schema_to_str ). –Build a regex from it ( build_regex_from_schema ). –Enclose the pattern in <think>...</think> and<tool_call>...</tool_call> . –Inject the resulting regex into the model’s logits processor ( RegexLogitsProcessor ). This pipeline prevents any free-form drift: every model output is guaranteed to match our schema exactly. 3.3 Structure-Aware Causal Inference In the second stage, the LLM uses the generated graph to inform its final answer to the causal question or hypothesis. We provide the model with the previously constructed graph (for example, by including the JSON graph in the prompt or via a tool interface) alongside the original
https://arxiv.org/abs/2505.18034v2
causal query. The model’s task is now framed as: given this candidate knowledge graph, determine whether the stated hypothesis holds. Because the model has an explicit graph to consult, its reasoning can be grounded in the graph’s structure rather than relying on superficial cues from the textual premise. Concretely, the model checks the hypothesis against the graph. For instance, if the hypothesis is “X directly causes Z,” the model will verify whether there is a direct path from X to Z in the graph and no confounding paths that violate the claim). The structured graph provides a clear criterion: the hypothesis is true if and only if it is consistent with the graph’s causal relationships. The LLM, guided by the graph, can thus make a determination with greater confidence. In our prompt design, we may ask the model to explain its answer using the graph – for example, “According to the graph, Xcauses Ywhich in turn causes Z, soXindirectly affects Z(not a direct cause),” thereby ensuring the model’s reasoning remains faithful to the structure. By making the model structure-aware, this step markedly improves reliability. The LLM is no longer attempting to implicitly juggle correlation and causation in a single pass; instead, it has a concrete graph to reference. This separation of concerns reduces reasoning complexity: the first stage distills the raw text into an abstract causal form, and the second stage is a straightforward evaluation of the hypothesis against that form. Our approach thus injects a form of algorithmic transparency into the LLM’s reasoning process, as the intermediate graph can be inspected and even evaluated by humans or other systems. In summary, the methodology ensures that any causal conclusion the model draws is backed by an explicit structural justification, mitigating the risk of spurious correlations leading the model astray. 4 Experimental Setup We conduct our experiments on the Corr2Cause benchmark, a recent dataset designed to test the pure causal-inference capabilities of LLMs. Each instance includes a set of correlational statements (the premise ) and a causal hypothesis. The task is to determine whether the hypothesis causally follows from the given premise. The dataset is heavily imbalanced, containing approximately 80% negative examples (“No”) and 20% positive examples (“Yes”). Due to this skew, accuracy becomes a less informative metric—a trivial classifier that always predicts “No” would achieve approximately 80% accuracy. Therefore, we emphasize the F 1score, particularly for the minority “Yes” class, to more fairly assess model performance under class imbalance. As a baseline, we employ a direct zero-shot prompting approach similar to Jin et al. The model is given a natural-language question of the form: Question: [Premise]. Can we deduce the following: [Hypothesis]? Just answer "Yes" or "No". The model then responds with either “Yes” or “No”. This setup evaluates the model’s raw ability to distinguish causation from correlation without any structured reasoning or intermediate representa- tions. Our proposed method enhances reasoning through a structured tool-calling framework. Instead of directly answering the yes/no question, the model is first prompted to generate a structured knowledge 5 graph representing the relationships between variables described in
https://arxiv.org/abs/2505.18034v2
the premise. This graph is constructed using a predefined tool-calling schema, capturing correlations, independencies, and other potential links. Subsequently, the model reasons over this structured graph to assess whether the hypothesized causal relation follows. This two-step process—(1) parsing the premise into a formal graph and (2) applying reasoning over the graph—this can be seen as a kind of chain-of-thought , improving transparency and separation of comprehension and inference. Algorithm 1 Causal Inference with KG-Augmented LLM Require: Dataset D={(xi, yi)}, where xi=“Premise + Hypothesis” Ensure: Predicted labels ˆyi∈ {Yes,No} 1:Load pretrained LLM with tool-calling capability 2:Convert KG schema into Regex-constrained JSON pattern 3:for all example xiin filtered subset of Ddo 4: SplitxiintoPremise andHypothesis 5: Build prompt pito instruct LLM to generate KG before answering 6: Use Regex-constrained decoding to use tool-calling to generate valid KG JSON from pi 7: Parse JSON output KG ias(nodes ,edges ) 8: Convert KG iinto DOT format string DOT i 9: Give the result of the tool call to the LLM: qi=pi+KG i+DOT i 10: Query LLM with qito obtain final prediction ˆyi 11:end for 12:return Allˆyi We evaluate two LLMs from the Qwen family, both with 32 billion parameters. Qwen2.5-32B is a previous-generation model without specialized reasoning capabilities, while Qwen3-32B is a newer model with improved structured reasoning abilities. Both models are evaluated in a zero-shot setting, with no fine-tuning on Corr2Cause. Qwen3 is designed to handle tool-use and chain-of-thought prompting more effectively, enabling it to follow our structured reasoning pipeline [QwenLM, 2025]. In contrast, Qwen2.5 is included as a reference model and is expected to struggle with such complex multi-step prompts[Yang et al., 2024]. To assess the impact of KG serialization syntax on LLM comprehension, we introduce an auxiliary experiment using a smaller variant, Qwen3-4B, while holding the rest of the pipeline constant. We generate KGs in three different edge-notation styles: Style 1:a -> b [dir=none] Style 2:a -> b + b -> a Style 3:a – b Style 1 uses a directed arrow with an explicit ‘dir=none’ attribute to signal an undirected relation; style 2 encodes an undirected link via two opposing directed edges; and style 3 employs a simple undirected edge notation. By comparing F 1, precision, and recall on the positive class across these styles, we identify which syntax the model parses most reliably. Results in Table 1 show that Style 2 yields the best F 1for the “Yes” class, and we adopt this notation in our main KG generation. We report F 1score (main metric), recall, precision, and accuracy on the Corr2Cause test set. Our key interest lies in the model’s ability to correctly identify positive (“Yes”) cases of true causation, without being overwhelmed by the dominant “No” class. All evaluations are conducted in the zero-shot setting: the models are presented with each instance for the first time and must rely entirely on their pre-trained knowledge and reasoning capabilities. 5 Results and Analysis 5.1 Graph Style Evaluation (Qwen3-4B) Before analyzing the full performance of different models and prompting strategies, we first investi- gate how the serialization format of the generated knowledge graph
https://arxiv.org/abs/2505.18034v2
affects model comprehension. We conduct a controlled experiment using a smaller model variant (Qwen3-4B), holding all other components fixed while varying the KG output style. 6 Table 1 reports the results of three KG notations: Among these, Style 2 consistently yields the highest F 1score (0.3617) and recall (0.5075), sug- gesting that the model parses this format more reliably. We therefore adopt this style across all subsequent experiments and generate visual KGs using Graphviz via the code call dot.edge(src, tgt, label=lbl) . Table 1: Performance of Qwen3-4B under different KG styles. Style Accuracy Precision Recall F1 Style 1 (a -> b [dir=none] ) 0.7447 0.2632 0.4478 0.3315 Style 2 (a -> b + b -> a ) 0.7468 0.2810 0.5075 0.3617 Style 3 (a – b ) 0.7321 0.2500 0.4478 0.3209 5.2 Main Performance Comparison We now turn to the evaluation of our full structured reasoning framework on the Corr2Cause benchmark. Before comparing different prompting strategies on Qwen variants, we benchmark our best- performing model against two strong off-the-shelf baselines: GPT-4 and BART MNLI. Table 2 summarizes their performance alongside Qwen3-32B. Both GPT-4 and BART MNLI achieve only moderate F 1scores (29.08 and 33.38), with GPT-4 suffering from low precision and BART MNLI trading precision for recall. In contrast, Qwen3-32B under our structured framework achieves a much higher F 1of 48.26, driven by strong recall (65.56) and improved precision (38.19), while maintaining competitive accuracy. Table 2: Performance of off-the-shelf baselines vs. Qwen3-32B (Structured Reasoning) on Corr2Cause. Model F1 Precision Recall Accuracy GPT-4 29.08 20.92 47.66 64.60 BART MNLI 33.38 31.59 35.38 78.50 Qwen3-32B (Structured) 48.26 38.19 65.56 78.23 We further compare Qwen2.5 and Qwen3 under direct vs. structured approach (Table 3). Structured reasoning notably boosts Qwen3’s F 1score from 32.71 to 48.26, nearly doubling recall and improving precision. Qwen2.5 also benefits modestly, but its lack of structural understanding limits gains. Table 3: Evaluation results on Corr2Cause using two prompting strategies: direct vs. structured reasoning. Model (32B) Approach F1 Recall Precision Acc Qwen2.5-32B Unstructured (Baseline) 19.51 17.27 22.43 81.60 Qwen2.5-32B Structured 24.71 23.33 26.25 77.97 Qwen3-32B Unstructured (Baseline) 32.71 33.89 31.61 78.40 Qwen3-32B Structured 48.26 65.56 38.19 78.23 Overall, Qwen3 with intermediate KG reasoning achieves superior precision and recall, suggesting it better distinguishes genuine causal relationships. Its use of DAG structures during inference leads to clearer and more interpretable answers, often avoiding logical contradictions seen in baseline responses. 7 6 Conclusions and Discussion We have presented a novel structured reasoning framework for causal inference that guides LLMs to first construct explicit knowledge-graph representations of correlational premises before making causal judgments. Our approach leverages the Qwen2.5-32B model and the recent Qwen3-32B model equipped with tool-calling, allowing the LLM to output intermediate, machine-validated knowledge graphs. This structured pipeline transforms the Corr2Cause task into a graph reasoning problem, effectively decoupling correlation analysis from causal judgment. Empirically, this process significantly improves performance: on the Corr2Cause benchmark, our method achieves substantially higher F1 and recall scores than baseline models, and it is markedly more robust to out-of-distribution query perturbations. Compared to conventional end-to-end prompt- ing [Xu et al.,
https://arxiv.org/abs/2505.18034v2
2023] or chain-of-thought [Wei et al., 2022] strategies, the graph-based pipeline has several key advantages. The intermediate graph representation makes the model’s reasoning more transparent and allows explicit consistency checks, reducing reliance on spurious linguistic patterns in the input. Grounding the model’s decisions in this explicit structure leads to more reliable outputs and higher generalization, especially in challenging, distribution-shifted settings. We observe that the graph-guided method maintains its accuracy and recall even when variable names or textual descriptions are paraphrased or replaced, demonstrating strong robustness to adversarial query variations. Despite these benefits, our approach has limitations. Generating detailed knowledge graphs for each query is computationally intensive and requires careful design of the graph schema. Errors in graph generation or tool invocation can cascade into incorrect causal judgments. The framework’s effectiveness depends on the fidelity of the knowledge schema and the model’s ability to correctly populate it, which may degrade on very complex or ambiguous inputs. We also observe that adversarial or unusually phrased inputs can sometimes confound the graph construction process, indicating potential brittleness in those edge cases. Looking forward, there are several promising directions for future work. One avenue is to develop automated verification or validation mechanisms for the generated graphs, for example by integrating symbolic or logical consistency checks to catch erroneous edges or missing relations. Extending the structured reasoning approach to other complex reasoning tasks (such as multi-hop question answering, scientific reasoning, or richer causal discovery scenarios) could further demonstrate its generality. Integrating the LLM-based graph reasoning with external symbolic inference engines or knowledge bases is another promising direction, which may improve factual grounding and inference efficiency. Additionally, optimizing the tool-calling pipeline for efficiency (e.g., by streamlining the graph construction or pruning irrelevant details) or exploring joint training of graph schemas and generation could address current scalability and robustness challenges. In summary, our work underscores the importance of structured, modular reasoning in improving LLM performance on causally challenging tasks. By explicitly bridging correlational input and causal inference through an interpretable graph-based intermediate, we enable better generalization and robustness. Our results confirm that encouraging structured thinking in LLMs significantly enhances their reasoning capabilities. We hope this study inspires further exploration into hybrid neuro-symbolic approaches that leverage the strengths of LLMs while ensuring more faithful and verifiable reasoning [Amizadeh et al., 2020]. 8 References Juan I Alonso-Barba, Jose A Gámez, Jose M Puerta, et al. Scaling up the greedy equivalence search algorithm by constraining the search space of equivalence classes. International journal of approximate reasoning , 54 (4):429–451, 2013. Saeed Amizadeh, Hamid Palangi, Alex Polozov, Yichen Huang, and Kazuhito Koishida. Neuro-symbolic visual reasoning: Disentangling. In International Conference on Machine Learning , pages 279–290. Pmlr, 2020. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers) , pages 4171–4186, 2019. Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona T. Diab, and Bernhard Schölkopf.
https://arxiv.org/abs/2505.18034v2
Can large language models infer causation from correlation? In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net, 2024. URL https://openreview.net/forum?id=vqIH0ObdqL . Markus Kalisch and Peter Bühlman. Estimating high-dimensional directed acyclic graphs with the pc-algorithm. Journal of Machine Learning Research , 8(3), 2007. Emre Kiciman, Robert Ness, Amit Sharma, and Chenhao Tan. Causal reasoning and large language models: Opening a new frontier for causality. Transactions on Machine Learning Research , 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=mqoxLkX210 . Featured Certification. MZ Naser. Discovering causal models for structural, construction and defense-related engineering phenomena. Defence Technology , 43:60–79, 2025. Aaron Parisi, Yao Zhao, and Noah Fiedel. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255 , 2022. Judea Pearl and Dana Mackenzie. The book of why: the new science of cause and effect . Basic books, 2018. QwenLM. Qwen3 Technical Report. https://github.com/QwenLM/Qwen3/blob/main/Qwen3_ Technical_Report.pdf , 2025. Accessed: 2025-05-16. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog , 1(8):9, 2019. Eleni Sgouritsa, Virginia Aglietti, Yee Whye Teh, Arnaud Doucet, Arthur Gretton, and Silvia Chiappa. Prompt- ing strategies for enabling large language models to infer causation from correlation. arXiv preprint arXiv:2412.13952 , 2024. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824–24837, 2022. Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, and Jun Huang. Making pre-trained language models end-to-end few-shot learners with contrastive prompt tuning. In Proceedings of the sixteenth ACM international conference on web search and data mining , pages 438–446, 2023. An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2. 5 technical report. arXiv preprint arXiv:2412.15115 , 2024. 9
https://arxiv.org/abs/2505.18034v2
arXiv:2505.18040v1 [cs.CL] 23 May 2025Contrastive Distillation of Emotion Knowledge from LLMs for Zero-Shot Emotion Recognition Minxue Niu Emily Mower Provost University of Michigan, Ann Arbor, Michigan, USA {sandymn, emilykmp}@umich.edu Abstract The ability to handle various emotion labels without dedicated training is crucial for build- ing adaptable Emotion Recognition (ER) sys- tems. Conventional ER models rely on training using fixed label sets and struggle to general- ize beyond them. On the other hand, Large Language Models (LLMs) have shown strong zero-shot ER performance across diverse label spaces, but their scale limits their use on edge devices. In this work, we propose a contrastive distillation framework that transfers rich emo- tional knowledge from LLMs into a compact model without the use of human annotations. We use GPT-4 to generate descriptive emo- tion annotations, offering rich supervision be- yond fixed label sets. By aligning text samples with emotion descriptors in a shared embedding space, our method enables zero-shot prediction on different emotion classes, granularity, and label schema. The distilled model is effective across multiple datasets and label spaces, out- performing strong baselines of similar size and approaching GPT-4’s zero-shot performance, while being over 10,000 times smaller. 1 Introduction Emotion Recognition systems are increasingly in- tegral to applications such as mental health mon- itoring (Trotzek et al., 2018; Liu et al., 2024a), customer service (Płaza et al., 2022), and human- computer interaction (Cowie et al., 2001). There is a growing need for models that can flexibly adapt to application-specific requirements. Zero-shot ER, the ability of models to generalize to new text do- mains and label spaces without extra annotation or retraining, has attracted growing research inter- est (Zhan et al., 2019; Olah et al., 2021; Lian et al., 2024). While significant progress has been made on generalizing across text domains (Chochlakis et al., 2023; Feng and Chaspari, 2020), zero-shot in- ference on new emotion label spaces (which may in- clude unseen labels) remains an arguably more un- After years of hard work, seeing my book on the shelf felt unreal, like a dream I was afraid to touch. GPT -4: Pride, Disbelief, Overwhelmed joy, Fear of disappointment Training: Inference: (ISEAR ) Joy, Fear (SemEval ) Joy, Fear (GoEmotions ) Fear, Pride, Gratitude (EmoBank -Valence ) 4(High) … Emotion ModelFigure 1: Our model is trained with rich emotion de- scriptors generated by GPT-4. During inference, this much smaller model can flexibly perform classification or regression on new label spaces. derexplored and challenging problem (Chochlakis et al., 2023). In this work, we propose a compact, generalizable framework that learns emotion rep- resentations from LLM supervision and supports flexible zero-shot prediction across diverse emotion taxonomies. Different applications often involve different emotional expressions and recognition needs. For instance, customer service systems often focus on identifying coarse-grained emotions such as pos- itive, neutral, or negative emotions to understand customer experiences (Lee and Narayanan, 2005; Yurtay et al., 2024), while mental health monitoring can benefit from more fine-grained emotional sig- nals to capture subtle psychological states (Trotzek et al., 2018; Rasool et al., 2025). Consequently, ER datasets have been developed to meet
https://arxiv.org/abs/2505.18040v1
differ- ent needs. Common design choices include la- bel schema (e.g., whether to use categorical or dimensional labels), label granularity (whether to adopt an extensive set of fine-grained emotions or a smaller set of distinct categories), and label selection (which specific emotions or dimensions to include in the label set) (Demszky et al., 2020; Rashkin et al., 2019; Wallbott and Scherer, 1986; Buechel and Hahn, 2017). While models can be trained to support multiple label spaces in multitask setups (Vu et al., 2021), they are not designed to handle novel labels at inference. As a result, ded- icated data collection and retraining is needed for each new application, limiting these models’ use across various downstream tasks. Recent advances in LLMs have opened new pos- sibilities for ER. Through text interactions, LLMs can reason about emotions and their causes (Wang et al., 2023; Zhang et al., 2024). They achieve strong zero-shot performance on various emotion spaces, especially on large, fine-grained ones (Liu et al., 2024b; Niu et al., 2024). Moreover, they can generate rich descriptions of emotion states, offer- ing a level of expressiveness beyond the constraints of traditional fixed label spaces (Niu et al., 2024; Bhaumik and Strzalkowski, 2024a). However, several factors limit the direct use of LLMs in applications. First, leading LLMs can contain trillions of parameters, making it infeasi- ble to deploy them on personal devices (Zheng et al., 2025). Yet, many good or even state-of-the- art ER models rely on much smaller architectures (Al Maruf et al., 2024; Ameer et al., 2023), sug- gesting that extensive computational overhead may not always be necessary. Additionally, the pow- erful language representation capability of LLMs enables them to retain rich linguistic content, which can raise privacy concerns in sensitive settings (Yao et al., 2024). Finally, LLMs typically generate free-form text rather than structured emotion repre- sentations, requiring additional post-processing to extract usable emotion labels (Xia et al., 2024). This work proposes a method to build compact models that enable zero-shot ER across various label spaces without additional human supervi- sion and without additional training. As illus- trated in Figure 1, we propose a novel approach that distills emotion knowledge from GPT-4 into a small, BERT-based model. Instead of relying on fixed categorical labels, we prompt GPT-4, as a rep- resentative leading LLM, to generate nuanced, free- form emotion descriptors that offer rich supervision beyond predefined label sets. We train our model using a contrastive learning framework inspired by CLIP (Radford et al., 2021), to align text samples and emotion terms in a shared latent emotion em- bedding space. Our model shows good zero-shot performance across multiple datasets with diverse label spaces. It outperforms strong baseline modelsof comparable size and approaches GPT-4 perfor- mance on multi-label emotion classification while being much smaller. Finally, we perform a nearest- neighbor analysis to show that our model learns meaningful clustering of semantically related emo- tions, enhancing the interpretability of our results. In summary, we make the following contri- butions: 1) We introduce a method for generat- ing rich, descriptive emotion annotations using GPT-4. 2) We propose
https://arxiv.org/abs/2505.18040v1
a contrastive framework that extends CLIP to multi-label tasks, enabling zero-shot ER across variable label spaces with- out additional human annotation. 3) We evaluate our model against multiple supervised and zero- shot baselines, showing strong performance with a compact model size. Our data and code are re- leased at https://github.com/chailab-umich/ uni_emotion_release . 2 Related Work 2.1 Zero-shot ER Models Zero-shot ER has attracted broad research inter- est. Most existing work focuses on generalizing to new text domains, where common strategies include using encoders trained on more diverse texts (Chochlakis et al., 2023), domain adaptation and adversarial learning (Feng and Chaspari, 2020; Gao et al., 2023), and data augmentation (Bo-Hao et al., 2025). However, these approaches assume a fixed label space between training and testing. An arguably more challenging setup is general- ization to unseen label spaces. Earlier approaches have used soft labels on a set of seen emotions as features to predict unseen ones, effectively learn- ing correlations between emotion labels (Tesfager- gish et al., 2022; Buechel et al., 2021). These ap- proaches are typically applied to coarse-grained emotion classes and often benefit from few-shot tuning. Similarity-based methods are also popu- lar. For example, Olah et al. (2021) proposed to obtain semantic embeddings of the emotion label and text input and use cosine similarity to mea- sure their alignment. Stanley et al. (2023) applies a similar approach to speech ER. Another line of work is entailment-based ER, which frames the task as predicting sentence entailment using crafted in- put like “this text expresses [emotion]” (Yin et al., 2019; Del Arco et al., 2022; Bareiß et al., 2024). In general, zero-shot ER remains highly challenging: many studies report a gap of 0.3-0.5 in F1 between finetuned and zero-shot setups (Olah et al., 2021; Del Arco et al., 2022; Chochlakis et al., 2023). 2.2 ER with LLMs With their growing Natural Language Understand- ing (NLU) capabilities, LLMs have been evaluated as an alternative for zero-shot ER. Several studies have shown that LLMs are capable of interpreting emotions and reasoning about their causes (Wang et al., 2023; Zhao et al., 2023; Tak and Gratch, 2023). LLMs also demonstrate strong zero-shot performance across various label spaces, approach- ing finetuned smaller models or even human per- formance (Liu et al., 2024b; Tak and Gratch, 2024; Niu et al., 2024). Despite their promise, LLMs produce free-text outputs rather than structured emotion predictions. As a result, existing meth- ods rely on prompt engineering to elicit responses in a specific format, requiring extra processing to extract usable emotion representations. Moreover, due to their extreme scale, they are not practical for deployment on edge devices or in resource- constrained environments (Zheng et al., 2025). 2.3 Descriptive Emotion Labels Categorical emotion label spaces face critiques for not being flexible enough to capture the full nuance of human emotions (Cowie and Cornelius, 2003). In addition to expanding label granularity in prede- fined taxonomies (Demszky et al., 2020; Rashkin et al., 2019), recent studies have proposed using un- restricted generative labels as an alternative, both in human annotation and in ER model outputs (Hoe-
https://arxiv.org/abs/2505.18040v1
mann et al., 2024; Stanley et al., 2023; Bhaumik and Strzalkowski, 2024b; Niu et al., 2024). Gen- erative descriptions enhance label richness and ac- curacy, but there is no standardized way to map open-ended textual descriptions to existing label sets or apply them in downstream applications. 3 Methods 3.1 Obtaining Rich Emotion Annotations We adopt a zero-shot prompting strategy following previous work (Niu et al., 2024) to encourage GPT- 4 to generate multiple descriptive terms (words or phrases) that capture the nuanced emotion states conveyed in the text. We use the following system prompt to obtain annotations for the GoEmotions dataset (see Section 4.1): You are an emotionally- intelligent and empathetic agent. You will be given a piece of text, and your task is to identify the emo- tions expressed by the writer of the text. Reply withonly the emotion descriptors (words or phrases), separated by commas. If no emotion is clearly expressed, reply with “neutral”. We use the “gpt-4-1106-preview” version of GPT-4 deployment through the Azure API. Across 43.4k samples in the GoEmotions training split, GPT-4 generated 2,173unique emotion terms, in contrast to 27labels in human annotations. Each sample has 1.91(standard deviation 0.65) terms on average and 13.0(standard deviation 5.9) char- acters per term. We show the human classifica- tion and GPT-4 generated labels on 10 randomly selected samples in Appendix A. Upon manual inspection, we notice that most terms in human la- bels also occur in GPT-4 generations, while GPT-4 tends to produce more complex or less common terms, such as “reminiscence” and “foreboding”. It also uses extent descriptors like “faint optimism” and “a little bummed”, or specifies emotional nu- ances or causes, e.g., “fear of abandonment”, “fear of embarrassment”, and “fear of failure”. Finally, the format of the generated terms is not normal- ized; e.g., it includes a mixture of nouns (“fear”) and adjectives (“fearful”) for the same emotion. We intentionally retain this variability, hypothesizing that exposure to diverse expressions may improve the generalizability of the model. 3.2 Contrastive Distillation Our model structure is illustrated in Figure 2. Dur- ing training, a batch of text samples is encoded into hidden representations using a text encoder. We collect all the emotion labels associated with the texts in the batch and embed them through a separate label encoder. We adopt pretrained BERT models for both the text and label encoders, keep- ing the label encoder frozen during training. BERT is chosen due to its strong text representation capa- bilities among models of similar sizes (around 100 million parameters). Also, BERT-based models are frequently used in zero-shot ER baselines (Olah et al. (2021); Del Arco et al. (2022); Bareiß et al. (2024), making them a strong and representative base model for our experiments. Then, two separate projection modules transform the text and label embeddings into the same space, which we consider as our emotion representation space. On the text side, we adopt a Q-Former (Li et al., 2023) to capture salient emotional informa- tion from the contextualized word representations. On the label side, since labels are relatively short, we apply a simple
https://arxiv.org/abs/2505.18040v1
linear projection to their [CLS] “I finally got the promotion!” Slight frustrationApprovalA little bummed… UneaseText Encoder Label EncoderT0 T1 T2 T3… TM L0 L1 L2 L3 … LNL4… L0·T0L0·T1L0·T2L0·T3 L0·TM … L1·T0L1·T1L1·T2L1·T3 L1·TM … L2·T0L2·T1L2·T2L2·T3 L2·TM … L3·T0L3·T1L3·T2L3·T3 L3·TM … L4·T0L4·T1L4·T2L4·T3 L4·TM ……………… … LN·T0LN·T1LN·T2LN·T3 LN·TM Positive LabelsTraining Inference “Ah, the movie was fine.”Text Encoder Cls. JoyFearAnger… ShameReg. High ActivationNegativityPositivity Low ActivationLabel EncoderL0 L1 L2 L3 … LN L4T0L0·T0L1·T0L2·T0L3·T0L4·T0… LN·T0Prediction Project (Linear)Project (Q-former) Project ProjectFigure 2: Overview of our contrastive distillation model structure. token embedding (Devlin et al., 2019). We denote T∈RB×das the text embeddings after normaliza- tion, and L∈RN×das the label embeddings. B is batch size, N is the number of labels in that batch (N varies across batches). The dimension of the learned emotion space dcan be tuned as a hyper- parameter. The model outputs an alignment matrix M=TL⊤/τ. Since TandLare both normal- ized, the dot product returns the cosine similarity for each text-label pair. τis a temperature scalar. We set d= 768 for fair comparison with baseline models (Section 4.2), and we set τ= 0.1. We modify the loss function in CLIP to enable multi-label training. We first apply a sigmoid func- tion to the alignment matrix to obtain the predicted probabilities. Then, using the ground-truth binary label alignment matrix Y∈ {0,1}B×N, we aim to maximize the log probability for matched (positive) text-label pairs, while minimizing the log probabil- ity for unmatched (negative) pairs. The resulting contrastive sigmoid loss is defined as: L=−1PYBX i=1NX j=1Yijlog(σ(Mij))− 1P(1−Y)BX i=1NX j=1(1−Yij) log(1 −σ(Mij)) where σ(·)denotes the sigmoid function. The align- ment matrices are generally very sparse, so we av- erage the log probability for positive and negative pairs separately before adding them together to bal- ance the precision and recall. 3.3 Zero-shot Inference We consider three common formulations in text- based ER: single-label classification, multi-label classification, and regression.As discussed in Section 3.2, the model is trained to predict the GPT-4 generated emotion labels. During inference, our trained model can flexibly adapter to different emotion label spaces by replac- ing the GPT-4 labels with the fixed set of human labels that are specific to each test dataset, as shown in Figure 2 (Inference). For single-label classifica- tion, we output the label from the fixed set that has the highest probability. For multi-label classifica- tion, we apply label-specific thresholds, calibrated on a small validation set (see Section 4.3) and out- put all labels that are above this threshold. For re- gression tasks we convert the original valence and activation labels to four discrete categories: “Posi- tivity”, “Negativity”, “High Activation”, and “Low Activation”. The predicted valence score is then computed as the difference M[text,Positivity ]− M[text,Negativity ], and the predicted activation score is computed as M[text,High Activation ]− M[text,Low Activation ]. With contrastive distillation, our model learns a compact and reusable emotion space from GPT- 4’s descriptive supervision, and it can generalize seamlessly to various label spaces. 4 Experiments 4.1 Data We use multiple text-based ER datasets for train- ing and evaluation, covering diverse emotion label spaces. The training set is constructed using text samples
https://arxiv.org/abs/2505.18040v1