be used to define classification rules. To reinforce these ideas, they collaboratively construct three distinct classification representations: Euler-Venn diagrams group mushrooms based on shared characteristics; tabular representations provide a structured data format to highlight classification flexibility; decision trees illustrate how sequential questions can guide classification decisions. Moving through different semiotic representations develops students' understanding [27]. Students then apply their classification knowledge to new tasks, such as selecting one of eight photos based on predefined rules4. This leads to a collaborative activity where students explore problem-solving strategies: brute-force analysis, Euler-Venn diagrams (organizing elements into sets on the blackboard, Fig. 1a), tabular representations (explicitly recording negative conditions), and decision trees (structuring reasoning hierarchically). This learning-by-necessity approach [31] helps students experience firsthand the limitations of certain classification models and explore alternative representations. Following CS Unplugged principles [3,9], students physically engage with decision trees by navigating a floor-based version, following classification paths dictated by rules and monitored by peers (Fig. 1b).

4 http://kangourou.di.unimi.it/2015/libretto2015.pdf

M.C. Carrisi et al.

Fig. 1: Collaborative classification task [Module 3]: (a) Eulero-Venn diagrams; (b) decision tree.

To deepen their understanding, a more complex fish-classification activity is proposed, in which students must determine whether a fish belongs to the poisonous or non-poisonous category based on distinguishing morphological features such as fin shape, eye characteristics, and body color. Students formulate classification rules and examine different strategies for structuring their decisions.
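To make the sequential-question structure concrete, a decision tree of this kind can be sketched in code. The features and rules below are illustrative stand-ins chosen for this sketch, not the actual rules used in the classroom activity:

```python
# A minimal sketch of a fish-classification decision tree.
# The features and rules are illustrative stand-ins, not the classroom rules.

def classify_fish(fish):
    """Walk a sequence of yes/no questions; each node inspects one feature."""
    if fish["fin_shape"] == "spiny":        # root node: fin shape
        if fish["body_color"] == "bright":  # next question on that branch
            return "poisonous"
        return "non-poisonous"
    if fish["eyes"] == "bulging":           # other branch: eye characteristics
        return "poisonous"
    return "non-poisonous"

samples = [
    {"fin_shape": "spiny", "body_color": "bright", "eyes": "normal"},
    {"fin_shape": "round", "body_color": "dull", "eyes": "bulging"},
    {"fin_shape": "round", "body_color": "dull", "eyes": "normal"},
]
print([classify_fish(f) for f in samples])
# -> ['poisonous', 'poisonous', 'non-poisonous']
```

Each `if` corresponds to one node of the floor-based tree; asking the questions in a different order yields a different but equally valid tree, which is exactly the design choice the tabular and Euler-Venn representations help students compare.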
The activity requires students to identify common and distinguishing features among fish species, represent classification strategies using tables and Euler-Venn diagrams, and construct a decision tree in which each node represents a key distinguishing feature leading to a classification outcome. As part of the final discussion, targeted examples prompt students to reflect on the risks of misclassification, identifying false positives and false negatives and analyzing their implications for real AI-driven systems such as facial recognition, autonomous vehicles, and medical diagnostics, where errors are critical. This activity reinforces the importance of robust classification.

2.4 Module 4: Final Assessment & Reflection (2 hours)

The final module is dedicated to evaluating students' learning outcomes and gathering feedback on their overall experience and perceptions. To evaluate student engagement and overall perception of the learning path, a satisfaction questionnaire is administered5. The first set of questions employs a Likert scale to measure students' level of enjoyment, the perceived difficulty of the activities, and their level of comfort while performing the tasks. The subsequent section focuses on the individual exercises covered in the previous modules, prompting students to rate tasks in terms of difficulty. Students are also asked to indicate the activity they found most engaging and the one they considered least interesting. To capture qualitative insights, the questionnaire includes open-ended questions, allowing students to describe two key takeaways from the learning experience and provide additional comments or suggestions.

5 The satisfaction questionnaire is administered first to ensure that any difficulties encountered in solving the post-test exercises did not influence students' responses.

Foundational AI Literacy in Primary Education
This final reflection enables a comprehensive evaluation of the learning path, considering cognitive, emotional, and perceptual dimensions. To assess learning effectiveness,
|
https://arxiv.org/abs/2505.21398v1
|
a post-test is administered, comprising seven exercises designed to evaluate key computer science concepts (exercises 1-3, 5, 7) and the underlying mathematical skills (exercises 4 and 6). Specifically, the first exercise (AI Scenarios) presented AI-powered objects performing various tasks, prompting students to reflect on whether AI could make mistakes or experience emotions, thereby assessing their understanding of AI's limitations. The second exercise (AI Terms) involved a fill-in-the-blank task testing students' knowledge of AI terminology and key concepts along the AI pipeline. The other exercises are taken from Bebras [2] and from an anonymized country's standardized test, sometimes modified ad hoc, with the aim of verifying the ability to use the new knowledge in a different, authentic context [12]. The third exercise (Animal Footprint) was the same as the ice-breaking test of Module 1, in order to verify improvements in distinguishing characteristics and explaining classification reasoning. The fourth (Frequencies) focused on decision trees, asking students to analyze a structured decision-making process and complete a corresponding table. The fifth exercise (Beaver Structure) assessed logical sequencing skills, challenging students to follow directional commands to navigate a structured path. The sixth (Eulero-Venn) involved visual data representation, requiring students to determine the truthfulness of statements based on the correct interpretation of an Eulero-Venn diagram. The final exercise (Beaver Head) asked students to identify the correct category based on predefined face features.

3 Experimental Evaluation

To evaluate our learning path, we investigate three research questions: its impact on students' understanding of AI concepts (RQ1), its intrinsic role in fostering mathematical skills (RQ2), and students' engagement and interest (RQ3).

3.1 Experimental Context

Educational Target.
Participants were 31 fifth-grade students (35% female) from two classes in an anonymized country, engaged simultaneously in all activities following an open-class model. Among them, 35% had special educational needs6, receiving support from specialized teachers as per national regulations. The four modules were delivered over four days during curricular hours by a university mathematics professor with long expertise in K-12 computer science education.

6 Due to privacy constraints, specific details regarding the nature and distribution of learning disorders and disabilities were unavailable, preventing the design of differentiated learning activities and a stratified analysis of results.

A preliminary meeting with mathematics teachers revealed that students were familiar with fractions, frequency of events, and set operations, though these topics were not covered in the current year's curriculum. They had basic problem-solving skills but little structured exposure to digital devices, with no formal instruction in programming or computer science. Their only documented CS-related activities included a cryptography and steganography workshop two years earlier and recent participation in Bebras [2].

All data collection instruments ensured full anonymity, identifying participants by numerical IDs. Of the 25 students (80% of the original group) present for the final module, 23 (92%) completed both the satisfaction questionnaire and the post-test. Nine out of 11 students with special educational needs participated in the final assessment. Among the 23 respondents, 17 (74%) attended all modules, four (17%) participated in three, and two (9%) attended only two.

Post-Test Scoring Protocol. The post-test exercises were evaluated by three independent experts: the mathematics
university professor who delivered the activities, a computer science university professor specializing in AI, and a doctoral student in mathematics didactics. Each assessed correctness based on predefined criteria tailored to each exercise (see our repository). For exercises requiring justification (e.g., Animal Footprint and Beaver Structure), evaluators independently scored both correctness and reasoning. Multiple-choice and single-answer questions (e.g., AI Terms and Beaver Head) were verified against an answer key, while structured exercises (e.g., Eulero-Venn) allowed for partial credit based on logical steps. Initially, evaluators scored a subset independently to align grading standards before assessing the remaining responses, with periodic checks.

3.2 Impact on AI Concepts Understanding [RQ1]

We evaluated students' understanding of AI-related concepts through post-test exercises, measuring their accuracy and reasoning skills. Students performed well in identifying and discussing AI-related errors (Fig. 2a), with 73.91% correctly recognizing issues and 26.09% reflecting on affective states, indicating potential for further development. In understanding AI terms (Fig. 2b), most students scored between 6 and 10 correct terms, with only 4.3% scoring 0 or 4, suggesting a solid foundation with room for refinement. The ability to justify responses (Fig. 2c) improved significantly, with 69.57% answering correctly and 56.52% providing valid justifications, compared to 9.5% in the initial questionnaire. In structured problem-solving (Fig. 2d), 50.00% achieved a strong intermediate level (score of 3.0), while 13.64% excelled with the highest score (3.5), showing a solid grasp of the concepts. Finally, in classification tasks (Fig. 2e), 69.57% provided accurate responses despite the high difficulty level.

Answer to RQ1. Students demonstrated an overall solid understanding of AI concepts, with many achieving high scores across exercises.
3.3 Impact on Underlying Mathematical Skills [RQ2]

We evaluated students' mathematical reasoning skills in AI-related tasks, focusing on their ability to interpret structured information, analyze numerical relationships, and apply logical deductions (Fig. 3). Students showed strong skills in interpreting frequency distributions (Fig. 3a), with 47.83% achieving the highest score (3) and 13.04% scoring 2.5, demonstrating proficiency in identifying numerical patterns. Lower scores were more dispersed, with 8.70% scoring 2, 13.04% scoring 1, and only 4.35% below 0.5, suggesting that while most students grasped frequency-based reasoning, some needed further guidance. In set theory tasks (Fig. 3b), 43.48% scored full marks and 34.78% earned a 3, indicating strong logical structuring skills. A smaller group (17.39%) scored 2, while only 4.35% received the minimum score (1), suggesting that while most handled set operations well, some may need reinforcement in formal mathematical abstraction.

Fig. 2: [RQ1] Performance distribution on AI-related post-test exercises: (a) Ex 1: AI Scenarios; (b) Ex 2: AI Terms; (c) Ex 3: Animal Footprint; (d) Ex 5: Beaver Structure; (e) Ex 7: Beaver Head.

Fig. 3: [RQ2] Performance distribution on math-related post-test exercises: (a) Ex 4: Frequencies; (b) Ex 6: Eulero-Venn.

Answer to RQ2. Students showed a satisfactory grounding in mathematical reasoning, especially in frequency interpretation and set-based problem-solving.

3.4 Impact on Students' Engagement and Interest [RQ3]

We assessed how the proposed activities influenced students' engagement and interest in AI and related concepts. Specifically, we looked at enjoyment, perceived difficulty, and relevance of the activities (Figure 4).
Students also reflected on how the activities improved their understanding of CS, AI, and mathematics. Results indicate that most students found the activities engaging, with 54.17% enjoying them "very much" and 29.17% "a lot" (Fig. 4a). While 81.67% rated the activities as "very easy" or "easy", 16.67% found them difficult (Fig. 4b). Most students felt comfortable, with 83.33% reporting positive emotions (Fig. 4c). In terms of learning outcomes, 41.67% felt the activities greatly helped in understanding CS, 45.83% in AI, and 50.00% in mathematics (Figs. 4f-4h).

Fig. 4: [RQ3] Student answers about perceptions of engagement: (a) Q1: Enjoyment; (b) Q2: Overall difficulty; (c) Q3: Feeling; (d) Q4: Activity difficulty; (e) Q5.1: Interest; (f) Q6: Understand CS; (g) Q7: Understand AI; (h) Q8: Understand Math.

Qualitative feedback supports these results, with students expressing excitement about learning how AI systems recognize objects, particularly enjoying the Monster classification and AI for Oceans activities. Many appreciated the interactive, problem-solving aspects, with one student noting that "classifying monsters was really, really interesting". Some also recognized the connection between AI and mathematics, highlighting how tree structures improve classification. However, a few students mentioned that some activities felt repetitive.

Answer to RQ3. The activities were overall successful in fostering engagement and interest. While students generally found the learning path enjoyable and educational, certain activities should be refined to ensure sustained engagement.

4 Discussion and Implications

In this section, we synthesize the findings from the individual experiments, contextualizing them within prior research and drawing educational implications.
Students demonstrated a strong ability to recognize AI-related errors and classify AI concepts but faced challenges when reasoning about affective states and structured problem-solving (RQ1). While they could identify explicit AI behaviors, implicit decision-making processes were more difficult to grasp. Initial responses showed that students primarily associate AI with robots and technological tools rather than computational principles, a common pattern observed in early AI education. To enhance AI literacy, educational approaches should incorporate structured discussions on AI decision-making and ethics, helping students bridge the gap between perception and computational reasoning. Activities that require structured explanations, such as argumentation tasks or guided reflection exercises, could further support the development of reasoning skills.

Mathematical reasoning in AI-related tasks showed strengths in argumentation and problem-solving (RQ2), particularly when interpreting patterns and relationships. Difficulties emerged in abstraction (e.g., in substituting the correct values into the definition of accuracy), and many students relied on procedures rather than deductive reasoning. We believe it is necessary to strengthen the part of the proposal that concerns the choice of classification criteria, supplementing it with completion tasks on various kinds of graphs [15].

Finally, students found the activities engaging (RQ3), with problem-solving and interactive tasks being particularly well received. However, some exercises were perceived as repetitive, highlighting the need to balance structured guidance with exploratory learning. Initial responses revealed that misconceptions about AI were common, with students attributing emotions and moral superiority to AI systems, reflecting the influence of media narratives. Addressing these misconceptions explicitly through guided discussions and real-world AI applications could refine understanding.
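For reference, the definition of accuracy that students struggled to instantiate is the standard one; writing it explicitly (notation added here, with TP/TN/FP/FN denoting true/false positives and negatives) makes the substitution step visible:

```latex
\mathrm{Accuracy} \;=\; \frac{TP + TN}{TP + TN + FP + FN}
```

For instance, a fish classifier that catches 8 of 10 poisonous fish (TP = 8, FN = 2) and wrongly flags 5 of 90 harmless ones (FP = 5, TN = 85) has accuracy (8 + 85)/(8 + 85 + 5 + 2) = 93/100 = 0.93.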
Additionally, students showed strong engagement when AI activities were linked to mathematics, reinforcing the effectiveness of interdisciplinary approaches in sustaining motivation [29]. Overall, the path promoted an improvement in
the children's classification and representation skills and achieved a good level of enjoyment; this last aspect is relevant, as one of the critical issues that has emerged in the literature concerns the absence of stimulating activities within the school context [15]. Contextualizing mathematics within the understanding of how AI works increases interest in mathematical concepts, which in turn facilitates comprehension of the mechanisms underlying AI, in a virtuous circle that strengthens the acquisition of fundamental skills in both disciplines and, above all, in the area of conscious citizenship.

5 Conclusions and Future Work

In this paper, we proposed a learning path for foundational AI literacy, and we examined how young students engage with AI concepts, mathematical reasoning in AI-related tasks, and their overall interest in this context. Through a structured evaluation combining post-test exercises and qualitative reflections, we found that students demonstrated a strong ability to identify AI errors and classify AI terms, yet encountered challenges in reasoning about implicit AI behaviors and structured problem-solving. Their mathematical reasoning was solid in structured problem-solving and logical analysis but revealed difficulties in abstract generalization and in transferring knowledge across different types of representations. Engagement levels were high, particularly in activities that integrated AI with mathematics, reinforcing the call for interdisciplinarity.

Despite these contributions, this study has some limitations. The participant group was relatively small and involved classes from only one school, limiting generalizability, and the study focused on short-term learning outcomes rather than long-term retention. Additionally, while the evaluation framework captured both quantitative and qualitative insights, further research is needed to assess how students' understanding evolves over extended periods.
Future iterations should also introduce students to block-based programming environments beforehand, allowing them to directly experience data collection and transformation processes and reinforcing the necessity of labeled data in AI systems. Moreover, considering that progress in thought and language does not always align in mathematical comprehension [39], future implementations should explore how different representational formats influence students' understanding and how to scaffold transitions between them. Finally, expanding this work to middle school students would allow for an adaptation of activities aligned with their mathematical background, ensuring that they scale appropriately across different levels.

References

1. https://ai4k12.org/ (Last access 19/02/2025).
2. Bebras. 2020. bebras.org (Last access 19/02/2025).
3. Tim Bell, Jason Alexander, Isaac Freeman, and Mick Grimley. 2009. Computer Science Unplugged: school students doing real computing without computers. New Zealand Journal of Applied Computing and Information Technology 13, 1 (2009), 20-29.
4. Bruner, J. S. 1960. The Process of Education. Harvard University Press.
5. Matteo Baldoni, Cristina Baroglio, Monica Bucciarelli, Sara Capecchi, Elena Gandolfi, Cristina Gena, Francesco Ianì, Elisa Marengo, Roberto Micalizio, Amon Rapp, Ivan Nabil Ras. 2024. Does Any AI-Based Activity Contribute to Develop AI Conception? A Case Study with Italian Fifth and Sixth Grade Classes. In The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24).
6. Matteo Baldoni, Cristina Baroglio, Monica Bucciarelli, Sara Capecchi, Elena Gandolfi, Francesco Ianì, Elisa Marengo, Roberto Micalizio. 2024. Thinking Strategies Training to Support the Development of Machine Learning Understanding: A study targeting fifth-grade children. ICIEI 2024, April 12-14, 2024, Verbania,
Italy.
7. Y. Chevallard. 1991. La Transposition didactique: Du savoir savant au savoir enseigné. Grenoble: La Pensée sauvage (1st ed. 1985), 126 p. ISBN 9782859190781.
8. code.org, AI and Machine Learning. 2023. AI for Oceans. https://studio.code.org/s/oceans/lessons/1/levels/6?lang=en-US (Last access 19/02/2025).
9. CS Unplugged. [n.d.]. Principles. https://csunplugged.org/en/principles/
10. Dewey, J. 1938. Experience and Education. Macmillan.
11. European Commission. 2021-2027. Digital Education Action Plan. https://education.ec.europa.eu/focus-topics/digital-education/action-plan (Last access 19/02/2025).
12. Stephen Frezza, Mats Daniels, Arnold Pears, Åsa Cajander, Viggo Kann, Amanpreet Kapoor, Roger McDermott, Anne-Kathrin Peters, Mihaela Sabin, and Charles Wallace. 2018. Modelling competencies for computing education beyond 2020: a research based approach to defining competencies in the computing disciplines. In Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (Larnaca, Cyprus) (ITiCSE 2018 Companion). Association for Computing Machinery, New York, NY, USA, 148-174. https://doi.org/10.1145/3293881.3295782
13. Google, Teachable Machine. https://teachablemachine.withgoogle.com (Last access 19/02/2025).
14. Christiane Gresse von Wangenheim, Jean C. R. Hauck, Fernando S. Pacheco, Matheus F. Bertonceli Bueno. 2021. Visual tools for teaching machine learning in K-12: A ten-year systematic mapping. Education and Information Technologies, 26(5), pp. 5733-5778.
15. Andreas Grillenberger, Ralf Romeike. 2019. About Classes and Trees: Introducing Secondary School Students to Aspects of Data Mining. In Informatics in Schools. New Ideas in School Informatics: 12th International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, ISSEP 2019, Larnaca, Cyprus, November 18-20, 2019, Proceedings. Lecture Notes in Computer Science.
16. Informatics for All. 2023. Mission.
Informatics for All. https://www.informaticsforall.org/members/ (Last access 19/02/2025).
17. Jonassen, D. H. 1994. Thinking Technology: Toward a Constructivist Design Model. Educational Technology, Vol. 34, No. 4, pp. 34-37.
18. Duffy, T. M., Jonassen, D. H. 1992. Constructivism and the Technology of Instruction: A Conversation. Erlbaum, Hillsdale, N.J.
19. Kim, S.; Jang, Y.; Kim, W.; Choi, S.; Jung, H.; Kim, S.; and Kim, H. 2021. Why and What to Teach: AI Curriculum for Elementary School. In AAAI, 15569-15576. AAAI Press.
20. Annabel Lindner, Stefan Seegerer, Ralf Romeike. 2019. Unplugged Activities in the Context of AI. In Informatics in Schools. New Ideas in School Informatics: 12th International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, ISSEP 2019. Proceedings 12, pp. 123-135. Springer International Publishing.
21. Ruizhe Ma, Ismaila Temitayo Sanusi, Vaishali Mahipal, Joseph E. Gonzales, Fred G. Martin. 2023. Developing machine learning algorithm literacy with novel plugged and unplugged approaches. In Proc. of the 54th ACM Technical Symposium on Computer Science Education V. 1, pp. 298-304.
22. MIM. 2022. Piano Nazionale Scuola Digitale (Ministero dell'Istruzione e del Merito). https://www.miur.gov.it/web/guest/scuola-digitale (Last access 19/02/2025).
23. ML for Kids. 2023. Machine Learning for Kids. https://machinelearningforkids.co.uk/ (Last access 19/02/2025).
24. Papert, S.; and Solomon, C. 1971. Twenty things to do with a computer. Twenty Things to Do with a Computer, 248.
25. Harel, I., Papert, S. 1991. Constructionism. Ablex Publishing. ISBN 978-0893917869.
26. Piaget, J., Inhelder, B. 1969. The Psychology of the Child. New York: Basic Books.
27. Radford, L. 2006. The semiotic turn in mathematics education: A new theory of mathematical thinking, learning, and teaching. In Semiotics in Mathematics Education, pp. 1-23.
28. Sabuncuoglu, A. 2020. Designing One Year Curriculum to Teach Artificial Intelligence for Middle School. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE '20, 96-102. Association for Computing Machinery.
29. Ismaila T. Sanusi, Solomon S. Oyelere, Henriikka Vartiainen, Jarkko Suhonen, Markku Tukiainen. 2023. A systematic review of teaching and learning machine learning in K-12 education. Education and Information Technologies, 28(5), pp. 5967-5997.
30. Gilad Shamir, Ilya Levin. 2022. Teaching machine learning in elementary school. International Journal of Child-Computer Interaction, Volume 31, March 2022, 100415.
31. Tanmay Sinha, Manu Kapur, Robert West, Michele Catasta, Matthias Hauswirth, and Dragan Trninic. 2020. Differential benefits of explicit failure-driven and success-driven scaffolding in problem-solving prior to instruction. Journal of Educational Psychology. https://doi.org/10.1037/edu0000483
32. Skemp, R. R. 1976. Relational understanding and instrumental understanding. Mathematics Teaching, 77, 20-26.
33. Sulmont, E., Patitsas, E., Cooperstock, J. R. 2019. Can You Teach Me To Machine Learn? In SIGCSE '19: Proceedings of the 50th ACM Technical Symposium on Computer Science Education, pp. 948-954. https://doi.org/10.1145/3287324.3287392
34. Touretzky, D.; Gardner-McCune, C.; Martin, F.; and Seehorn, D. 2019. Envisioning AI for K-12: What should every child know about AI? In 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019, and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, 9795-9799.
35. David Touretzky, Christina Gardner-McCune, Deborah Seehorn. 2023. Machine learning and the five big ideas in AI. International Journal of Artificial Intelligence in Education, 33(2), pp. 233-266.
36. UNICEF. 2019.
Workshop Report: AI and Child Rights Policy.
37. United States Government. 2023. National Artificial Intelligence Initiative: Overseeing and Implementing the United States National AI Strategy. https://www.ai.gov/ (Last access 19/02/2025).
38. Van Mechelen, M.; Smith, R. C.; Schaper, M.-M.; Tamashiro, M.; Bilstrup, K.-E.; Lunding, M.; Graves Petersen, M.; and Sejer Iversen, O. 2023. Emerging Technologies in K-12 Education: A Future HCI Research Agenda. ACM Trans. Comput.-Hum. Interact., 30(3).
39. L. S. Vygotskij. 1966. Pensiero e linguaggio [Thought and Language]. Giunti, Firenze.
40. Randi Williams, Christian V. Machado, Stefania Druga, Cynthia Breazeal, Pattie Maes. 2018. "My doll says it's ok": a study of children's conformity to a talking doll. In Proc. of the 17th ACM Conference on Interaction Design and Children, pp. 625-631.
41. Yang, W. 2022. Artificial Intelligence education for young children: Why, what, and how in curriculum design and implementation. Computers and Education: Artificial Intelligence, 3.
arXiv:2505.21399v1 [cs.CL] 27 May 2025

Factual Self-Awareness in Language Models: Representation, Robustness, and Scaling

Hovhannes Tamoyan, Subhabrata Dutta, and Iryna Gurevych
Ubiquitous Knowledge Processing Lab (UKP Lab), Department of Computer Science and Hessian Center for AI (hessian.AI), Technical University of Darmstadt
www.ukp.tu-darmstadt.de

Abstract

Factual incorrectness in generated content is one of the primary concerns in the ubiquitous deployment of large language models (LLMs). Prior findings suggest LLMs can (sometimes) detect factual incorrectness in their generated content (i.e., fact-checking post-generation). In this work, we provide evidence supporting the presence of an internal compass in LLMs that dictates the correctness of factual recall at the time of generation. We demonstrate that for a given subject entity and a relation, LLMs internally encode linear features in the Transformer's residual stream that dictate whether the model will be able to recall the correct attribute (that forms a valid entity-relation-attribute triplet). This self-awareness signal is robust to minor formatting variations. We investigate the effects of context perturbation via different example selection strategies. Scaling experiments across model sizes and training dynamics highlight that self-awareness emerges rapidly during training and peaks in intermediate layers. These findings uncover intrinsic self-monitoring capabilities within LLMs, contributing to their interpretability and reliability.1

1 Introduction

With the recent advances in the abilities of Large Language Models (LLMs), security concerns associated with LLM-based AI agents have increased in direct proportion (Huang et al., 2024). A lion's share of the security concerns about everyday LLM usage comes from their tendency to produce made-up facts, a tendency mainly addressed under the broad description of hallucination, bearing an insinuation of forgetfulness on the part of the models (Huang et al., 2025).
Recent research seeking to address hallucination has posed the question of transparency: is there any generalizable signal that can indicate the presence (or absence) of certain knowledge in the internal representation of the model? Prior research in this direction follows two distinct lines. Kadavath et al. (2022) showed that language models (LMs) can (most of the time) fact-check their own output. Multiple subsequent studies (Li et al., 2023a; Azaria and Mitchell, 2023; Burns et al., 2023) have found that the LM encodes a notion of truth (and falsehood) as linear directions within its representations, and that these directions causally elicit the internal fact-checking. This is similar to the broader line of research embodied in the self-reflection paradigm (Madaan et al., 2023; Pan et al., 2023): let the model generate first and then ask it to check itself. Recent literature has challenged this paradigm itself in problems such as reasoning and planning (Stechly et al., 2025). Another line of investigation reveals that LMs can demarcate between known and unknown entities (Ferrando et al., 2024). This demarcation, defined as the LM's ability (or inability) to recall at least three attributes about the candidate entity correctly, is also linearly represented within the residual stream of the Transformer model, and instruction-tuned models re-purpose these linear features to refuse questions about unknown entities. As opposed to the 'truth direction'-style literature that analyzes the truthfulness of LMs in checking their own outputs, Ferrando et al. (2024) impose the notion of transparency at the time of generation.

1 https://github.com/UKPLab/arxiv2025-self-awareness

Figure 1: Given an input comprising entity type, entity name, and relation, we obtain the model's token-level prediction probabilities for the attribute. Tokens are labeled known if their gold label appears in the top-k predictions, and forgotten if in the bottom-l. A sample is labeled known if it contains more known than forgotten tokens, and vice versa. Probabilities are visualized with color-coded bars: green (top-k), red (bottom-l), and gray (others). For example, "Christopher Nolan" falls in the top-k, labeling the sample as known, whereas "James Brown" appears in the bottom-l, labeling it as forgotten. Final-token residuals are linearly probed to detect factual self-awareness.

However, factual hallucination is not associated only with novel entities. An LM may incorrectly recall a specific attribute of an entity while accurately recalling another (e.g., in the dataset we prepared, 1865 out of 3669 unique entities are neither completely known nor completely forgotten; Gemma-2 2B (Team et al., 2024) can recall at least one attribute about them correctly while erring on at least one attribute). Such a hallucination is intuitively harder to detect, compared to the known-unknown entities of Ferrando et al.
(2024): an unknown entity (potentially unseen in the training data) is associated with a hitherto unseen lexical combination of a few successive tokens, whereas an unknown (or forgotten) factual association must be understood through a much deeper connection between entities and relationships. In this work, we show that LMs encode meta-knowledge (i.e., the ability or inability to correctly recall) about fine-grained factual relationships as linear directions (see Figure 1). These directions are activated before the model generates the correct (or incorrect) factual recall, as opposed to truth directions, which are triggered after generation is complete and the model is asked to check its output. This internal separability between correct and incorrect generations, which we denote as known and forgotten factual associations, is surprisingly robust to context perturbation. We further investigate the effects of training and parameter count on the factual self-awareness of the model; while a minimum model size is required to start encoding the signal, we find that it does so very early in training.

Contribution. Towards investigating factual self-awareness of LMs at generation time, we construct a factual recall dataset. We investigate using the Gemma 2 models (2B and 9B) (Team et al., 2024) and the Pythia scaling suite (Biderman et al., 2023), and propose a model-dependent annotation of known-forgotten facts based on the logit distribution. We show that LMs
construct linear subspaces within internal representations that demarcate an upcoming correct/incorrect recall (as opposed to faithfulness in post-hoc checking of correct/incorrect facts). We investigate the effects of context and prompt formatting on the formation of these linear subspaces of self-awareness. Finally, we demonstrate the appearance and improvement of factual self-awareness across two directions of LM scaling: the amount of next-token-prediction training and the number of model parameters.

2 Background and Related Work

In this section, we review existing literature dissecting language models' (LMs) self-awareness and provide background on linear probes and sparse autoencoders (SAEs), including their prior use in investigating truthfulness and self-awareness in LMs.

Self-awareness of LMs. Prior efforts to make LMs transparent about their mistakes have primarily focused on mitigating hallucinations. A common approach is to ask LMs unanswerable questions and test their ability to refrain from answering (Yin et al., 2023; Bajpai et al., 2024). However, such questions are unanswerable for reasons other than unknown or forgotten factual associations. Betley et al. (2025) similarly studied behavioral self-awareness in LMs. By contrast, our work specifically targets factual self-awareness. Prior work in this area follows the 'self-reflection' paradigm (Madaan et al., 2023; Pan et al., 2023): ask the model to reflect on its generation and assess its correctness. Kadavath et al. (2022) supported LM self-awareness by demonstrating the success of such reflection on both multiple-choice and open-ended questions. They also introduced fine-tuning strategies to calibrate model-generated scores of knowledge uncertainty. Kapoor et al. (2024) extended this 'teaching to be self-aware' paradigm via calibration tuning, so that model-generated logits better reflect internal uncertainty.

Truthfulness and hallucination in internal representation.
Several works (Li et al., 2023a; Azaria and Mitchell, 2023; Burns et al., 2023) have investigated how truthfulness is represented in model internals (i.e., intermediate layer outputs) in post-generation self-checking setups similar to Kadavath et al. (2022). The viability of self-consistency (Zhang et al., 2024; Wang et al., 2023) as a proxy for self-awareness has also been explored. Chen et al. (2024) demonstrate an implicit assumption of self-awareness via self-consistency: among a population of diverse generations, certain spectral patterns of internal states signal hallucination. These methods are primarily designed for reasoning-based tasks, where multiple generations can lead to the same answer, in contrast to immediate factual recall. Ji et al. (2024) analyzed training data to study unseen queries and found that LMs linearly represent seen vs. unseen queries in hidden states. In a related direction, Ferrando et al. (2024) examined how LMs internally demarcate known from unknown entities. They identified linearly encoded features that trigger when the model is queried about an unknown entity; these features are repurposed in chat-tuned models to elicit refusal. A key limitation, however, lies in defining knowledge at the entity level: when an LM fabricates factual associations, it may invent attributes for a known entity.

Linear and sparse probing. Linear probes are arguably the simplest lens for examining high-dimensional neural representations: given a labeled dataset of an expected behavior, a linear classifier is trained and tested on neural representations to detect whether the behavior is encoded. Sparse autoencoders (SAEs) Bricken
et al. (2023); Huben et al. (2023), by contrast, have recently gained popularity for uncovering interpretable decompositions of model latent representations without supervised data. Both approaches align with the linear representation hypothesis (Park et al., 2023; Mikolov et al., 2013), which posits that interpretable features, such as sentiment or truthfulness, are embedded as linear directions within the representation space, and that model representations consist of sparse linear combinations of these directions (Li et al., 2023b; Zou et al., 2023). While SAEs eliminate the need for supervision, they introduce the challenge of costly training: they must be trained on large volumes of data (and corresponding activations) to avoid data bias (Kissane et al., 2024; Sharkey et al., 2025). SAE dimensionality strongly influences the contextual scale of interpretation (Bussmann et al., 2025). In this work, we first show the empirical similarity between linear probes and SAEs in locating factual self-awareness, then adopt linear probes for scalability.

3 Experimental Setup

Dataset. We construct a factual recall dataset covering four entity categories, following the approach of Ferrando et al. (2024): football players, movies, cities, and songs. We start with 1,000 entities per category, limiting ourselves to a maximum of 10 relationships per entity. For each entity we scrape associated features from Wikidata (Vrandečić and Krötzsch, 2023). Subsequently, we manually construct templates from the triplets² (entity type, entity name, relation) to generate statements and predict the corresponding attribute, as illustrated in Figure 1. Because of the web-scale pretraining data used to train Transformer-based LMs (and the unavailability of training data for most open-weight models), it is non-trivial to demarcate which factual associations are known (or unknown) to the LM.
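The template rendering and the top-k/bottom-l labeling summarized in Figure 1 can be sketched as follows. This is a toy sketch: the template table, sample values, and token ranks are hypothetical stand-ins (the paper's actual templates are in its Table 3 and Appendix A), and a real pipeline would read ranks from the LM's logits rather than take them as arguments.

```python
from dataclasses import dataclass

# Hypothetical template table mapping (entity type, relation) to a statement
# pattern; not the paper's actual templates.
TEMPLATES = {
    ("movie", "director of"): "The director of the movie {name} is",
    ("song", "artist of"): "The artist of the song {name} is",
}

@dataclass
class Sample:
    entity_type: str
    entity_name: str
    relation: str
    attribute_tokens: list  # gold label, already tokenized

def render(sample: Sample) -> str:
    """Turn an (entity type, entity name, relation) triplet into a prompt."""
    pattern = TEMPLATES[(sample.entity_type, sample.relation)]
    return pattern.format(name=sample.entity_name)

def label_sample(ranks, vocab_size, k=500, l=0.3):
    """Label a sample from the rank of each gold token in the model's logit
    ordering (rank 0 = highest logit). A token counts as known if it is in
    the top-k, forgotten if it falls in the bottom-l fraction of the
    vocabulary; a majority vote over the gold tokens decides the label.
    (The paper does not specify tie handling; we mark ties 'ambiguous'.)"""
    known = sum(1 for r in ranks if r < k)
    forgotten = sum(1 for r in ranks if r >= (1 - l) * vocab_size)
    if known > forgotten:
        return "known"
    if forgotten > known:
        return "forgotten"
    return "ambiguous"

s = Sample("movie", "Inception", "director of", ["Christopher", "Nolan"])
prompt = render(s)  # "The director of the movie Inception is"
# Suppose the model ranks "Christopher" 3rd and "Nolan" 12th in its logits:
label = label_sample([3, 12], vocab_size=256_000)  # -> "known"
```

Working in rank space mirrors the paper's design choice of staying in the model's output space instead of decoding and string-matching.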
²Factual associations are typically represented as triplets of the form (entity name, relation, attribute); we additionally use entity type to avoid ambiguities arising from shared naming across entities, e.g., the computer scientist Michael Jordan vs. the sportsman Michael Jordan.

We use a proxy definition based on the logit distribution of the LM itself: if a model is able to signal that it can (or cannot) recall a certain attribute of an entity correctly (i.e., assign a high logit value to the respective token), we diagnose the behavior as factually self-aware. We feed the samples (entity-relation pairs) into an LM to obtain the probability distribution over the predicted tokens, and classify the factual relations as either known or forgotten. Since the gold labels are sequences of tokens (e.g., "Christopher", "Nolan"), we generate the same number of tokens as in the gold-label sequence for each sample and check how many of them appear in the top-k predictions or the bottom l-th percentile of the model's output space³. A sample is labeled known if more of its gold-label tokens appear among the top-k predictions in the logit space. Conversely, if more of the gold-label tokens fall below the l-th percentile of the logit distribution, the sample is classified as forgotten. This design choice avoids decoding and string
matching errors by relying solely on the model's output space. We construct (entity type, entity name, relation) triplets using various templates, but some templates introduced spurious correlations through their phrasing. Details and examples of all templates are provided in Table 3 and Appendix A. For subsequent experiments, we employ a template devoid of such artifacts, encompassing four relations per entity type and excluding problematic attributes (e.g., city coordinates). The complete list of relations is provided in Appendix A. We construct the final dataset by setting top-k = 500 tokens and selecting the bottom-l = 0.3 fraction of the vocabulary space. Detailed experiments on the impact of different (k, l) pairs on factual self-awareness signals are presented in subsection 4.3. This procedure yields a total of 7,380 known and 7,268 forgotten samples for Gemma 2 2B (Team et al., 2024). The distribution of labels across entity types is provided in Table 4 in Appendix B. We partition the dataset $\mathcal{D}$ into training and test subsets with a 0.7/0.3 split for subsequent experiments.

Linear Probe. Let $\mathcal{D} = \{(T_i, y_i)\}_{i=1}^{N}$ denote the labeled dataset, where each $T_i$ is a token sequence (e.g., a factual recall statement) and $y_i \in \{0, 1\}$ indicates the presence or absence of the feature (e.g., known/forgotten). We run the model on each $T_i \in \mathcal{D}$ and extract the residual stream of the final token of the prompt template, $x_{T_i}$, following Meng et al. (2022); Geva et al. (2023); Nanda et al. (2023).

Definition 1. For a residual stream representation $x_{l,T_i} \in \mathbb{R}^d$ of sample $T_i$ at layer $l$, a probe is a learnable function $f : \mathbb{R}^d \to \mathbb{R}$ trained to predict $y_i$ from $x_{l,T_i}$.

The linear probe serves as a simple diagnostic classifier that maps the residual stream output to a scalar via a learned weight vector $w \in \mathbb{R}^d$ and a bias term $b \in \mathbb{R}$. At layer $l$, we introduce a linear probe and its corresponding optimization objective as:

$$f_l : \mathbb{R}^d \to \mathbb{R}, \qquad f_l(x_{l,T_i}) = w^\top x_{l,T_i} + b, \qquad \min_{w,b} \sum_{(T_i, y_i) \in \mathcal{D}} \mathrm{BCE}\!\left(y_i, \sigma\!\left(f_l(x_{l,T_i})\right)\right) \qquad (1)$$

where $\sigma$ is the sigmoid function and BCE the binary cross-entropy loss. The parameters $w$ and $b$ are learned by minimizing the binary cross-entropy loss over the dataset $\mathcal{D}$. To break symmetry and introduce controlled variation across layers, we initialize scalar biases as $b = 0.1 \times (-1)^l$, where the layer index $l$ deterministically seeds randomness via $\mathrm{seed} + 100 \times l$⁴. This ensures consistent yet diverse initialization across layers. To further encourage diversity in learned solutions, each probe is assigned a slightly different learning rate, scaled for layer $l$ as $\mathrm{lr}_l = \mathrm{base\_lr} \times (1.1 - 0.2 \cdot l / L)$, where base_lr is the initial rate and $L$ the total number of layers. This encourages probes at different depths to converge to distinct solutions. All probes are optimized using Adam, with initial learning rate $10^{-4}$ and weight decay $10^{-5}$.

Separation Scores. Sparse autoencoders (SAEs) conform to the definition of a probe as stated in Definition 1. We collectively refer to the SAE-encoded representations and the outputs of the linear probe as activations. Following Ferrando et al. (2024), we compute separation scores. For each latent dimension $j$ of the activation vector (with $j = 1$ for a linear probe), we calculate the proportion of instances with positive activations (i.e., greater than zero) separately for the known and forgotten sets:

$$g^{\mathrm{known}}_{l,j} = \frac{\sum_{i}^{N_{\mathrm{known}}} \mathbb{1}\!\left[a_{l,j}(x^{\mathrm{known}}_{l,T_i}) > 0\right]}{N_{\mathrm{known}}} \quad \text{and} \quad g^{\mathrm{forgotten}}_{l,j} = \frac{\sum_{i}^{N_{\mathrm{forgotten}}} \mathbb{1}\!\left[a_{l,j}(x^{\mathrm{forgotten}}_{l,T_i}) > 0\right]}{N_{\mathrm{forgotten}}},$$

where $N_{\mathrm{known}}$ and $N_{\mathrm{forgotten}}$ denote the total number of prompts, and $x^{\mathrm{known}}_{l,T_i}$ and $x^{\mathrm{forgotten}}_{l,T_i}$ represent the latent activations for the known and forgotten samples, respectively, in each subset of $\mathcal{D}$. Latent separation scores (or vectors) are computed as the difference between these proportions:

$$s^{\mathrm{known}}_{l,j} = g^{\mathrm{known}}_{l,j} - g^{\mathrm{forgotten}}_{l,j} \quad \text{and} \quad s^{\mathrm{forgotten}}_{l,j} = g^{\mathrm{forgotten}}_{l,j} - g^{\mathrm{known}}_{l,j},$$

where $s^{\mathrm{known}}_{l}$ is used to detect known entities, and $s^{\mathrm{forgotten}}_{l}$ is used to detect forgotten entities.

³Note that in this definition, $k$ is an integer denoting a count of tokens, while $l$ is a fraction denoting a subset of the vocabulary. This demarcation arises from the actual count of tokens in these bands: high-probability tokens are exponentially fewer than low-probability ones.
⁴All experiments are conducted with three random seeds (73, 5, 120); we observe negligible variance and omit the results for brevity.

Computational Resource Requirements. All experiments are conducted on NVIDIA A100-SXM4-80GB GPUs. Models with fewer than 7B parameters are run on a single GPU, while larger models are executed on two GPUs to accommodate memory and computational requirements.

4 Generation-time factual self-awareness in LMs

4.1 Linear Probes vs. Sparse Autoencoders (SAEs)

We utilize the Gemma 2 2B and 9B models from the Gemma Scope framework (Lieberum et al., 2024), which provides a suite of SAEs pretrained on the activations of each layer of the Gemma 2 models (Team et al., 2024). We train linear probes on the residual stream hooks of these models and generate the corresponding separation plots on test sets for both the 2B and 9B variants, using both SAE and linear probing methods. We select the top five (one for the linear probe) latent dimensions with the highest separation scores from the known and forgotten vectors for each entity type $t$ and layer $l$.
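Equation (1) and the separation scores can be illustrated with a minimal NumPy sketch. The Gaussian "residual streams", dimensions, and step counts below are synthetic stand-ins (not the paper's data or hyperparameters), and plain gradient descent replaces the Adam setup with per-layer bias and learning-rate schedules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for residual-stream activations at one layer:
# two Gaussian clusters for known (y=1) and forgotten (y=0) samples.
d, n = 16, 400
X = np.vstack([rng.normal(+0.5, 1.0, (n // 2, d)),
               rng.normal(-0.5, 1.0, (n // 2, d))])
y = np.array([1] * (n // 2) + [0] * (n // 2))

# Linear probe f(x) = w.x + b trained with binary cross-entropy, as in Eq. (1).
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of the probe output
    grad = p - y                            # d(BCE)/d(logit)
    w -= 0.1 * (X.T @ grad) / n
    b -= 0.1 * grad.mean()

acc = (((X @ w + b) > 0) == (y == 1)).mean()

# Separation score for the (scalar) probe activation: fraction of positive
# activations on each class, then the difference of the two fractions.
a = X @ w + b
g_known = (a[y == 1] > 0).mean()
g_forgotten = (a[y == 0] > 0).mean()
s_known = g_known - g_forgotten  # close to +1 => latent cleanly separates classes
```

For an SAE, the same bookkeeping applies per latent dimension j, with the probe output replaced by the SAE-encoded activation vector.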
To assess generality and robustness, we compute $\mathrm{MaxMin}_{\mathrm{known},l} = \max_j \min_t s^{\mathrm{known},t}_{l,j}$ and analogously define $\mathrm{MaxMin}_{\mathrm{forgotten},l}$, where $j$ indexes latent dimensions.

Figure 2: Top-five latent separation scores across transformer layers using SAE activations from Gemma 2 2B. Left: known entities exhibit clear layer-wise separation, peaking around layers 6-14. Right: for forgotten entities, separation scores are lower and more variable, indicating reduced disentanglement. Categories include movie, player, city, and song; MaxMin denotes the difference between max and min class means.

We illustrate the evolution of separation scores across layers for SAE activations in Figure 2 (Gemma 2 9B results are provided in Appendix C). As shown by the red curve, $\mathrm{MaxMin}_l$ increases in the intermediate layers. This pattern suggests that the most generalized latents, those consistently separating known from forgotten entities across all types, are primarily located in the middle and final layers. Linear probe separation scores follow a similar trend, as shown in Figure 3; however, since these activations are scalar, the known and forgotten
scores appear as mirror images, with equal magnitudes but opposite signs. We train linear probes on the Gemma 2 (2B, 9B) (Team et al., 2024) and Pythia (70M, 1.4B, 6.9B, 12B) (Biderman et al., 2023) models, evaluating performance using standard binary classification metrics and reporting the accuracy improvement over a random baseline, denoted as Δ. Metrics from the final training epoch are reported for both the training and test subsets, as shown in Table 1. Among all models, Gemma 2 2B exhibits the strongest performance, achieving the highest test-set score with Δ = 0.311. Within the Pythia family, the 12B model performs best (Δ = 0.120); however, a substantial performance gap remains relative to Gemma 2 2B.

Figure 3: Latent separation scores across layers using linear probe activations from Gemma 2 2B. Left: known entities show separation scores that are identical in magnitude but negated in sign compared to forgotten entities (right), indicating that the same latents are used for both but with reversed class-directional structure. Categories include movie, player, city, and song; MaxMin denotes the difference between maximum and minimum class means.

Table 1: Linear probing results on the Gemma 2 and Pythia model families. Metrics are reported on training and test subsets from the final epoch (3). Accuracy gains over random baselines are indicated by Δ, with values in parentheses denoting standard deviations. Gemma 2 2B achieves the highest test-set performance, while Pythia 12B is strongest within its family, though with a notable gap.
Model        Subset  Loss           AUC ROC        Accuracy       Random Baseline  Δ (Observed - Baseline)
Gemma 2 2B   Train   0.383 (0.063)  0.901 (0.061)  0.833 (0.052)  0.501            0.332
Gemma 2 2B   Test    0.397 (0.064)  0.896 (0.060)  0.820 (0.056)  0.509            0.311
Gemma 2 9B   Train   0.393 (0.052)  0.899 (0.050)  0.826 (0.040)  0.555            0.271
Gemma 2 9B   Test    0.387 (0.050)  0.903 (0.047)  0.829 (0.032)  0.564            0.265
Pythia 70M   Train   0.473 (0.008)  0.546 (0.029)  0.818 (0.001)  0.818            0.000
Pythia 70M   Test    0.465 (0.005)  0.551 (0.022)  0.822 (0.001)  0.822            0.000
Pythia 1.4B  Train   0.358 (0.032)  0.843 (0.057)  0.837 (0.011)  0.807            0.030
Pythia 1.4B  Test    0.365 (0.031)  0.842 (0.056)  0.831 (0.013)  0.803            0.028
Pythia 6.9B  Train   0.393 (0.028)  0.852 (0.048)  0.829 (0.014)  0.747            0.082
Pythia 6.9B  Test    0.395 (0.026)  0.857 (0.047)  0.827 (0.011)  0.746            0.081
Pythia 12B   Train   0.442 (0.030)  0.842 (0.046)  0.798 (0.017)  0.687            0.111
Pythia 12B   Test    0.464 (0.027)  0.846 (0.043)  0.794 (0.022)  0.674            0.120

Beyond the overall advantage of the Gemma 2 2B model, several notable trends emerge from the probing results. Within each model family, increasing parameter count does not consistently improve linear probe performance. For instance, while Gemma 2 9B achieves a slightly higher test AUC-ROC than Gemma 2 2B, its accuracy gain over the random baseline (Δ) is lower. This suggests that larger models do not necessarily produce more linearly decodable representations of self-awareness. A similar pattern holds for the Pythia models: although the 12B variant
achieves the highest test-time Δ, smaller versions like Pythia 6.9B and 1.4B show comparable accuracies, albeit with smaller gains over their baselines. Pythia 70M represents a degenerate case where accuracy matches the random baseline (Δ = 0), indicating that the smallest model fails to encode self-awareness features. We further examine the distribution of self-awareness signals across model layers by analyzing layer-wise linear probe accuracy, as shown in Figure 4. For both Gemma 2 2B and Pythia 12B, accuracy rises sharply in the initial layers before stabilizing. In Gemma 2 2B, performance plateaus around the fifth layer, reaching a peak test accuracy of approximately 0.82, well above the random baseline. Accuracy remains consistently high in subsequent layers, with a slight decline in the final three layers, suggesting that self-awareness directions are preserved throughout the network depth. In contrast, Pythia 12B shows a slower but steady accuracy increase across layers. Its final test accuracy remains below that of Gemma 2 2B, consistent with earlier results. Notably, the random baseline for Pythia 12B is substantially higher than for Gemma 2 2B (approximately 0.69 vs. 0.50). These patterns suggest that Gemma 2 2B achieves linearly accessible self-awareness representations earlier and maintains them more robustly, while Pythia 12B requires deeper processing to approach similar performance.

Figure 4: Layer-wise linear probe accuracy for (a) Gemma 2 2B and (b) Pythia 12B. Orange/blue: train/test; red dashed: random baseline.
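The layer-wise evaluation behind Figure 4 amounts to fitting one probe per layer and recording its accuracy. The following toy NumPy sketch uses synthetic activations whose class separation grows with depth; the dimensions, step counts, and separation schedule are invented for illustration and are not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer_activations(layer, n=200, d=8):
    """Toy stand-in for per-layer residual streams: class separation grows
    with depth and then plateaus, mimicking the early rise seen in Figure 4."""
    sep = min(0.15 * layer, 0.6)  # hypothetical separation schedule
    X = np.vstack([rng.normal(+sep, 1.0, (n // 2, d)),
                   rng.normal(-sep, 1.0, (n // 2, d))])
    y = np.array([1] * (n // 2) + [0] * (n // 2))
    return X, y

def probe_accuracy(X, y, steps=300, lr=0.1):
    """Fit a logistic-regression probe by gradient descent; return accuracy."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return (((X @ w + b) > 0) == (y == 1)).mean()

# One probe per "layer"; accuracy rises over the first layers, then plateaus.
accs = [probe_accuracy(*layer_activations(L)) for L in range(12)]
```

On real models, `layer_activations` would be replaced by residual-stream hooks at each layer, with accuracy computed on a held-out test split.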
4.2 Robustness Against Context Perturbation

To what extent is a model's factual self-awareness robust to changes in input context? To address this question, we train linear probes on each model layer and evaluate their performance on contextually modified input samples. We design four targeted experiments to systematically assess the test-time robustness of self-awareness directions under such perturbations.

Quotation Marks. Enclose the entity name within single or double quotation marks:

    The director of the movie 'Inception' is Christopher Nolan.

Prompt formatting, including punctuation, spacing, and quoting, has been shown to significantly impact model performance (Gonen et al., 2023; Sclar et al., 2024).

Statement Question. Prepend a natural-language question crafted from the sample quadruples:

    Who is the director of the movie Inception? The director of the movie Inception is Christopher Nolan.

Rephrasing inputs as questions can improve model reasoning and accuracy, with even minor phrasing changes affecting outputs and model performance (Kojima et al., 2022; Mizrahi et al., 2024).

Few-Shot. Prepend few-shot context by adding a small set of samples (e.g., three) from the dataset to the input. These samples are chosen according to one of two entity modes. Only
: all selected samples have the same entity name; however, the relation and attribute may vary:

    The release year of the movie Inception is 2010. The director of the movie Inception is Christopher Nolan.

Unique: each entity name appears only once in the context; however, the relations between entities are not necessarily distinct:

    The genre of the movie The Matrix is science fiction. The director of the movie Inception is Christopher Nolan.

Few-shot prompting improves generalization but remains brittle to surface-level changes, highlighting the need to assess robustness (Sclar et al., 2024).

Random Statement. Prepend a fixed, unrelated, grammatically correct sentence to the input prompt:

    The cat darted under the couch as the thunder cracked outside. The director of the movie Inception is Christopher Nolan.

Semantically neutral distractors, such as irrelevant prefixes, significantly affect model predictions, indicating sensitivity to prompt framing beyond content (Sclar et al., 2024).

We assess the robustness of self-awareness directions to contextual perturbations at test time using the Gemma 2 2B model, as shown in Table 2. The model is trained on a fixed dataset, and its performance is evaluated across various input modifications. Adding quotation marks, either single or double, yields only minor reductions in test accuracy (0.802 and 0.797) compared to the unmodified baseline (0.820), indicating robustness to superficial punctuation changes. Rephrasing factual prompts as questions (Statement Question) leads to a larger performance drop (0.756), suggesting sensitivity to structural semantics beyond lexical form.
In contrast, appending unrelated content (Random Sentence) minimally affects performance (0.791), indicating resilience to distractors when the core entity-relation structure is preserved. The Few-Shot (Only) setting causes substantial degradation (0.650), likely due to signal dilution, whereas Few-Shot (Unique) better maintains accuracy (0.753), underscoring the role of relational diversity in preserving linear decodability. Overall, these findings underscore that factual self-awareness in LMs is relatively robust to superficial noise but sensitive to semantically meaningful shifts in input structure.

Table 2: Robustness of self-awareness directions against various context perturbations for the Gemma 2 2B model. Training metrics are shared across all modifications; Δ indicates test accuracy gain over the random baseline. Standard deviations in parentheses.

Modification Type                Loss           AUC ROC        Accuracy
Train (shared across all)        0.383 (0.063)  0.901 (0.061)  0.833 (0.052)
Test: None                       0.397 (0.064)  0.896 (0.060)  0.820 (0.056)
Test: Quotation Marks (single)   0.431 (0.066)  0.882 (0.057)  0.802 (0.060)
Test: Quotation Marks (double)   0.448 (0.066)  0.877 (0.056)  0.797 (0.072)
Test: Statement Question         0.520 (0.092)  0.856 (0.056)  0.756 (0.061)
Test: Few-Shot (Only)            0.708 (0.183)  0.771 (0.069)  0.650 (0.083)
Test: Few-Shot (Unique)          0.538 (0.157)  0.845 (0.074)  0.753 (0.065)
Test: Random Sentence            0.458 (0.070)  0.871 (0.057)  0.791 (0.061)

4.3 Impact of Sampling
Parameters k-l on Probe Behavior

Figure 5: Known-forgotten sample ratio for each (k, l) configuration, aggregated across all models. Lower values of |class balance ratio - 1| (darker) indicate more balanced retention, helping identify the globally optimal (k, l) setting that generalizes across models.

Figure 6: Class balance ratio at k = 500, l = 0.3 for each model (Pythia 70M: 0.22; Pythia 1.4B: 0.24; Pythia 6.9B: 0.34; Pythia 12B: 0.46; Gemma 2B: 1.02; Gemma 9B: 1.26). Values closer to 1.0 indicate more balanced retention, though Gemma 2B diverges significantly from the other models, suggesting this configuration may not generalize well to all architectures.

We systematically evaluate the effect of varying k and l values on linear probe performance and on the class balance between known and forgotten samples for the Gemma 2 (2B, 9B) (Team et al., 2024) and Pythia (70M, 1.4B, 6.9B, 12B) (Biderman et al., 2023) models. Known-to-forgotten class ratios for different k-l configurations per model are listed in Appendix D, with visualizations in Figures 13 and 12 for Pythia and Gemma 2, respectively. Among the Pythia models, Pythia 12B exhibits the highest known-to-forgotten ratio under the default k-l setting, with Ratio_Pythia-12B(k = 500, l = 0.3) = 0.46. In the Gemma 2 series, Gemma 2B shows a substantially more balanced attribution of factual knowledge, with Ratio_Gemma-2B(k = 500, l = 0.3) = 1.02. To complement the analysis of class balance, we examine how the (k, l) configuration affects downstream recovery of factual self-awareness in terms of linear probe performance.
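The class-balance sweep can be sketched as follows. The rank data, vocabulary size, and the split between "recalled" and "errant" samples below are synthetic, invented purely to illustrate how the known-to-forgotten ratio is tallied per (k, l) cell.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab = 256_000  # hypothetical vocabulary size

# Synthetic gold-token ranks for 1,000 two-token samples: half rank their
# gold tokens highly ("recalled"), half push them far down the ordering.
recalled = rng.integers(0, 2_000, (500, 2))
errant = rng.integers(100_000, vocab, (500, 2))
ranks = np.vstack([recalled, errant])

def class_ratio(ranks, k, l):
    """Known-to-forgotten sample ratio for one (k, l) configuration.
    k counts top tokens; l is a fraction of the vocabulary from the bottom.
    A sample is known/forgotten by majority vote over its gold tokens."""
    known = (ranks < k).sum(axis=1)
    forgotten = (ranks >= (1 - l) * vocab).sum(axis=1)
    n_known = (known > forgotten).sum()
    n_forgotten = (forgotten > known).sum()
    return n_known / max(n_forgotten, 1)

# Sweep the same grid as the paper; cells with |ratio - 1| near 0 are the
# most balanced choices of (k, l).
grid = {(k, l): class_ratio(ranks, k, l)
        for k in (5, 50, 500, 5000) for l in (0.1, 0.2, 0.3, 0.4)}
```

Loosening k admits more known samples while loosening l admits more forgotten ones, which is the trade-off the heatmaps in Figures 5, 12, and 13 visualize.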
In Figure 11 in Appendix D, we report the test and train accuracy improvements over a random baseline for the two representative models, Gemma 2B and Pythia 12B, across all evaluated (k, l) settings. These heatmaps reveal how performance varies with the sampling parameters, where each cell shows the absolute accuracy gain (test/train) above random guessing. Notably, linear probes on the Gemma 2B model exhibit consistent and substantial gains across multiple configurations, particularly around k = 500, whereas Pythia 12B shows smaller but more stable improvements. These results highlight the trade-off between class balance and discriminative performance, further motivating the choice of (k = 500, l = 0.3) as a configuration that yields competitive accuracy.

5 Scaling Behavior

To further investigate the emergence of self-awareness directions in language models, we analyze models from the Pythia scaling suite (Biderman et al., 2023), focusing on variation across model sizes: 70M, 1.4B, 6.9B, and 12B parameters.

Figure 7: Accuracy gain over the random baseline for training (orange) and test (blue) linear probes as a function of model size. Larger models exhibit greater gains in accuracy, with test performance benefiting more substantially from scaling.
Figure 8: Linear probe accuracy across Pythia 1.4B training checkpoints. (Top) Training accuracy by layer (warmer colors = deeper layers). (Bottom) Test accuracy by layer (cooler colors = deeper layers, brighter = later layers). Red dashed line: random baseline.

We measure the emergence of linearly decodable features associated with self-awareness by computing the accuracy improvement of linear probes over a random baseline across models of increasing scale. As shown in Figure 7, both training and test accuracy improvements grow monotonically with model size, indicating that larger models develop more robust and generalizable representations. Notably, the improvement is more pronounced on the test set, suggesting that increased capacity enhances the transferability of these features beyond the training distribution. To further examine how these features evolve during training, we evaluate checkpoints of Pythia 1.4B, spaced every 5,000 steps from 0 to 143,000, as shown in Figure 8. At initialization (step 0), linear probe accuracy is at the random baseline, indicating no self-awareness directions in the untrained model. During training, middle layers consistently yield the highest probe accuracy, while early and late layers perform worse. Training accuracy rises quickly and plateaus early, while test accuracy improves more gradually. On the test set, the highest accuracy occurs in the upper layers, suggesting that self-awareness features are more strongly encoded later in the network. In contrast, middle layers dominate on the training set, indicating a divergence in the distribution of generalizable vs. task-specific features across depth.

6 Conclusion

In this work, we explore the landscape of factual self-awareness of pretrained LMs.
We ask, precisely, whether an LM encodes within its neural representations the certainty that it will be able to recall a given factual association. We frame this as awareness-at-generation. Providing an affirmative answer, we show that the encoding of such a signal is linear and surprisingly robust against context perturbation. We find that while a threshold model size is essential for the self-awareness signal to appear, the strength of the signal is not directly proportional to scale. The same trend holds for training scale: the signal appears quite early in training and saturates quickly. We argue that this specific type of self-awareness, evident at generation time, can serve as a stronger entry point for curbing LM hallucination than the post-hoc truthfulness investigated hitherto. Compared to the latter, awareness-at-generation can be repurposed to restrain the model from a generation attempt before the actual generation.

7 Acknowledgments

We thank Jingcheng Niu, Federico Tiblias and Alireza Bayat Makou for their feedback on an early draft of this work. This research work has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.

Limitations

By definition, the investigated self-awareness signal is limited to factual recall alone. There are multiple other forms of self-awareness that this work does not address. We look into the
https://arxiv.org/abs/2505.21399v1
rudimentary form of factual recall where all the necessary information (e.g., entity name, entity type, relation) is provided within the immediate query. In open-ended generation tasks, the LM might need to gather this information from scattered context, resolve coreferences, perform multi-hop factual recall implicitly, etc. We leave the investigation of self-awareness under such stressors as future work. Additionally, the dataset used in this study is restricted in its coverage of entity types and relation categories. Expanding the dataset to include a broader and more diverse range of entities and relational structures would provide a more comprehensive understanding of how self-awareness representations generalize across semantic domains. Finally, while we demonstrate the linearly separable signals of factual self-awareness in the intermediate neural representations and their scaling behavior, it remains unknown how the model learns to encode this signal from mere next-token-prediction training, or which internal causal components construct and use it.

References

Xiaowei Huang, Wenjie Ruan, Wei Huang, Gaojie Jin, Yi Dong, Changshun Wu, Saddek Bensalem, Ronghui Mu, Yi Qi, Xingyu Zhao, Kaiwen Cai, Yanghao Zhang, Sihao Wu, Peipei Xu, Dengyu Wu, Andre Freitas, and Mustafa A. Mustafa. A survey of safety and trustworthiness of large language models through the lens of verification and validation. Artificial Intelligence Review, 57(7):175, Jun 2024. ISSN 1573-7462. doi: 10.1007/s10462-024-10824-0. URL https://doi.org/10.1007/s10462-024-10824-0.

Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Trans. Inf. Syst., 43(2), January 2025. ISSN 1046-8188. doi: 10.1145/3703155. URL https://doi.org/10.1145/3703155.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. Language models (mostly) know what they know. CoRR, abs/2207.05221, 2022. doi: 10.48550/ARXIV.2207.05221. URL https://doi.org/10.48550/arXiv.2207.05221.

Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. Inference-time intervention: Eliciting truthful answers from a language model. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 41451–41530. Curran Associates, Inc., 2023a. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/81b8390039b7302c909cb769f8b6cd93-Paper-Conference.pdf.

Amos Azaria and Tom M. Mitchell. The internal state of an LLM knows when it's lying. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 967–976. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-EMNLP.68. URL https://doi.org/10.18653/v1/2023.findings-emnlp.68.

Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. Discovering latent knowledge in language models without supervision. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023.
OpenReview.net, 2023. URL https://openreview.net/forum?id=ETKGuby0hcs.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/91edff07232fb1b55a505a9e9f6c0ff3-Abstract-Conference.html.

Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and William Yang Wang. Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies. CoRR, abs/2308.03188, 2023. doi: 10.48550/ARXIV.2308.03188. URL https://doi.org/10.48550/arXiv.2308.03188.

Kaya Stechly, Karthik Valmeekam, and Subbarao Kambhampati. On the self-verification limitations of large language models on reasoning and planning tasks. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net, 2025. URL https://openreview.net/forum?id=4O0v4s3IzY.

Javier Ferrando, Oscar Obeso, Senthooran Rajamanoharan, and Neel Nanda. Do I know this entity? Knowledge awareness and hallucinations in language models. arXiv preprint arXiv:2411.14257, 2024.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size, 2024. URL https://arxiv.org/abs/2408.00118.

Stella Biderman, Sid Black, Eric Hallahan, et al.
Pythia: A suite for analyzing large language models across training and scaling. arXiv preprint arXiv:2304.01373, 2023.

Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. Do large language models know what they don't know? In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 8653–8665. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-ACL.551. URL https://doi.org/10.18653/v1/2023.findings-acl.551.

Prasoon Bajpai, Niladri Chatterjee, Subhabrata Dutta, and Tanmoy Chakraborty. Can LLMs replace Neil deGrasse Tyson? Evaluating the reliability of LLMs as science communicators. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 15895–15912. Association for Computational Linguistics, 2024. URL https://aclanthology.org/2024.emnlp-main.889.

Jan Betley, Xuchan Bao, Martín Soto, Anna Sztyber-Betley, James Chua, and Owain Evans. Tell me about yourself: LLMs are aware of their learned behaviors. arXiv preprint arXiv:2501.11120, 2025.

Sanyam Kapoor, Nate Gruver, Manley Roberts, Katie Collins, Arka Pal, Umang Bhatt, Adrian Weller, Samuel Dooley, Micah Goldblum, and Andrew Gordon Wilson. Large language models must be taught to know what they don't know. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024, 2024. URL http://papers.nips.cc/paper_files/paper/2024/hash/9c20f16b05f5e5e70fa07e2a4364b80e-Abstract-Conference.html.
Wenqi Zhang, Yongliang Shen, Linjuan Wu, Qiuying Peng, Jun Wang, Yueting
Zhuang, and Weiming Lu. Self-contrast: Better reflection through inconsistent solving perspectives. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 3602–3622. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024.ACL-LONG.197. URL https://doi.org/10.18653/v1/2024.acl-long.197.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/forum?id=1PL1NIMMrw.

Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye. INSIDE: LLMs' internal states retain the power of hallucination detection. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=Zj12nzlQbz.

Ziwei Ji, Delong Chen, Etsuko Ishii, Samuel Cahyawijaya, Yejin Bang, Bryan Wilie, and Pascale Fung. LLM internal states reveal hallucination risk faced with a query. CoRR, abs/2407.03282, 2024. doi: 10.48550/ARXIV.2407.03282. URL https://doi.org/10.48550/arXiv.2407.03282.

Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, et al. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2, 2023.

Robert Huben, Hoagy Cunningham, Logan Riggs Smith, Aidan Ewart, and Lee Sharkey. Sparse autoencoders find highly interpretable features in language models. In The Twelfth International Conference on Learning Representations, 2023.
Kiho Park, Yo Joong Choe, and Victor Veitch. The linear representation hypothesis and the geometry of large language models. arXiv preprint arXiv:2311.03658, 2023.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26, 2013.

Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. Inference-time intervention: Eliciting truthful answers from a language model. Advances in Neural Information Processing Systems, 36:41451–41530, 2023b.

Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, et al. Representation engineering: A top-down approach to AI transparency. arXiv preprint arXiv:2310.01405, 2023.

Connor Kissane, Robert Krzyzanowski, Neel Nanda, and Arthur Conmy. SAEs are highly dataset dependent: A case study on the refusal direction. In Alignment Forum, 2024.

Lee Sharkey, Bilal Chughtai, Joshua Batson, Jack Lindsey, Jeff Wu, Lucius Bushnaq, Nicholas Goldowsky-Dill, Stefan Heimersheim, Alejandro Ortega, Joseph Isaac Bloom, Stella Biderman, Adrià Garriga-Alonso, Arthur Conmy, Neel Nanda, Jessica Rumbelow, Martin Wattenberg, Nandi Schoots, Joseph Miller, Eric J. Michaud, Stephen Casper, Max Tegmark, William Saunders, David Bau, Eric Todd, Atticus Geiger, Mor Geva, Jesse Hoogland, Daniel Murfet, and Tom McGrath. Open problems in mechanistic interpretability. CoRR, abs/2501.16496, 2025. doi: 10.48550/ARXIV.2501.16496. URL https://doi.org/10.48550/arXiv.2501.16496.

Bart Bussmann, Noa Nabeshima, Adam Karvonen, and Neel Nanda. Learning multi-level features with matryoshka sparse autoencoders. CoRR, abs/2503.17547, 2025. doi: 10.48550/ARXIV.2503.17547. URL https://doi.org/10.48550/arXiv.2503.17547.

Denny Vrandečić and Markus Krötzsch.
Wikidata. https://www.wikidata.org, 2023.

Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35:17359–17372, 2022.

Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson. Dissecting recall of factual associations in auto-regressive language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12216–12235, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.751. URL https://aclanthology.org/2023.emnlp-main.751/.

Neel Nanda, Senthooran Rajamanoharan, János Kramár, and Rohin Shah. Fact finding: Attempting to reverse-engineer factual recall on the neuron level. https://www.alignmentforum.org/posts/iGuwZTHWb6DFY3sKB/fact-finding-attempting-to-reverse-engineer-factual-recall, 2023. AI Alignment Forum.

Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, János Kramár, Anca Dragan, Rohin Shah, and Neel Nanda. Gemma Scope: Open sparse autoencoders everywhere all at once on Gemma 2. arXiv preprint arXiv:2408.05147, 2024.

Hila Gonen, Yonatan Belinkov, Ido Dagan, and Yoav Goldberg. Demystifying prompts in language models via perplexity estimation. In Findings of the Association for Computational Linguistics: EMNLP 2023, 2023.

Noam Sclar, Ehud Guriel, and Omer Levy. Quantifying LMs' sensitivity to spurious prompt formatting. In Proceedings of the International Conference on Learning Representations (ICLR), 2024.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213, 2022.

Itay Mizrahi, Nimrod Sznajder, Libby Barak, and Yoav Goldberg. State of what art? A call for multi-prompt LLM evaluation.
Transactions of the Association for Computational Linguistics (TACL), 2024.

A Input Templates

The selected relations include: player — 'birth place', 'birth date', 'position', 'nationality'; movie — 'director', 'release date', 'genre', 'country'; city — 'country', 'first mayor', 'founded date', 'climate type'; and song — 'artist', 'album', 'release date', 'language'.

Initially, we constructed the input templates using the relations described in Ferrando et al. (2024) for four categories: football player, movie, city, and song; we call this set of relations relations1. The specific relations extracted for each category are as follows:

• player: birthplace, birth date, teams played.
• movie: director, screenwriter, release date, genre, duration, cast.
• city: country, population, elevation, coordinates.
• song: artist, album involvement, publication year, genre.

Since the number of relations is not balanced across the categories and some relation answers have a non-trivial modality (e.g., the coordinates of a city), we propose a new unified dataset that is balanced in the number of relations and uses standard output modalities; we call this set of relations relations2. The following relations shape the unified dataset:

• player: birthplace, birthdate, position, nationality.
• movie: director, release date, genre, production country.
• city: country, population, founded date, timezone.
• song: artist, album label, release date, language.

Afterwards, we hand-craft templates from the quadruples to form statements, for example, "The movie Inception was directed by director Christopher Nolan". Since the expected answer for some relations can be ambiguous, as in "The player Michael Jordan was born in city of ...", where it is unclear
whether the response should be a location or a year, we incorporate "hints" at the end of each relation. We experimented with four input templates (see Table 3) using the relations2 set, aiming to eliminate spurious correlations and isolate the self-awareness signal captured by linear probes. Notably, only template2 consistently captures the self-awareness signal without interference from confounding factors.

Template Name          Sample Sentence
template1              "The player Youri Djorkaeff was born in the city of"
template1_const_end    "The player Youri Djorkaeff's birth city is"
template2              "The city of birth for the player Youri Djorkaeff is"
template2_balanced     "The city of birth for the player Youri Djorkaeff is"

Table 3: Input templates using the entity type "player" and entity name "Youri Djorkaeff".

template1: Places the entity type and name at the beginning of the sentence and ends with variable tokens. For example: "The <entity_type> <entity_name> was born in the city of..."

template1_const_end: Similar to template1, it places the entity type and name at the beginning of the sentence, but always ends with the fixed token "is". For example: "The <entity_type> <entity_name>'s birth city is..."

template2: In contrast to template1, it places the entity type and name at the end of the prompt. Like template1_const_end, it also ends with the token "is". For example: "The city of birth for the <entity_type> <entity_name> is..."

template2_balanced: A further refinement of template2 with a balanced number of known and forgotten samples across relations per category. The template2_balanced templates used in the experiments follow the form shown in Table 3.

B Sample Distribution Across Entity Types

We present the full distribution of known and forgotten samples using template2_balanced for Gemma 2 2B with k = 500 and l = 0.3 parameters in Table 4, and for Pythia 12B in Table 5.
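The template2-style construction described above can be sketched as follows. This is our own minimal illustration: the helper names and the relation phrasings in the dictionary are assumptions, not the authors' exact strings; the key property it demonstrates is that the entity type and name go at the end of the prompt and every prompt terminates in the fixed token "is".

```python
# Sketch of template2-style prompt construction (illustrative; the
# RELATION_PHRASES entries are hypothetical stand-ins, not the paper's
# exact template set). Every prompt ends in the fixed token "is" so that
# probe inputs share the same final-token position.

RELATION_PHRASES = {
    ("player", "birthplace"): "The city of birth for the {etype} {name} is",
    ("player", "birthdate"):  "The date of birth for the {etype} {name} is",
    ("movie", "director"):    "The director of the {etype} {name} is",
    ("city", "country"):      "The country of the {etype} {name} is",
}

def build_prompt(entity_type: str, entity_name: str, relation: str) -> str:
    # Look up the relation-specific template and fill in the entity slots.
    template = RELATION_PHRASES[(entity_type, relation)]
    return template.format(etype=entity_type, name=entity_name)

print(build_prompt("player", "Youri Djorkaeff", "birthplace"))
# -> The city of birth for the player Youri Djorkaeff is
```

The fixed "is" ending matters for probing: it keeps the token position from which hidden states are extracted comparable across samples.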
Table 4: Distribution of known and forgotten samples across entity categories for the Gemma 2 2B model, using top-k = 500 and bottom-l = 0.3 thresholds.

Category   Known   Forgotten   Subset Total
Player      1286        2922           4208
Movie       3602        1810           5412
City        1017         541           1558
Song        1475        1995           3470
Total       7380        7268          14648

Table 5: Distribution of known and forgotten samples across entity categories for the Pythia 12B model, using top-k = 500 and bottom-l = 0.3 thresholds.

Category   Known   Forgotten   Subset Total
Player       657        3551           4208
Movie       3602        1810           5412
City         574         984           1558
Song         538        2932           3470
Total       5371        9277          14648

C Separation Scores

The latent separation scores for Gemma 2 9B are reported using SAE activations (Figure 9) and linear probe activations (Figure 10).

[Figure 9: Latent separation scores using SAE activations on Gemma 2 9B. Left: known entities. Right: forgotten entities.]

[Figure 10: Latent separation scores using Linear Probe activations on Gemma 2 9B. Left: known entities. Right: forgotten entities.]

D (k,l) Pair Impact on Probe
Behavior

The figures in this section illustrate how varying the (k, l) parameters (representing the number of known and forgotten samples, respectively) affects linear probe performance and class-balance outcomes across different model scales. Figure 11 presents the test and train accuracy gains over a random baseline for the Gemma 2 2B and Pythia 12B models. Notably, we observe that increases in k (the number of known samples) generally correspond to higher accuracy gains, particularly when l (the number of forgotten samples) remains low. This trend is more pronounced in larger models, consistent with their greater capacity to capture and retain class-discriminative features.

Figures 12 and 13 explore the implications of (k, l) settings on class balance, defined as the ratio of known to forgotten samples, for the Gemma 2 and Pythia model families, respectively. The heatmaps indicate that this ratio grows with increasing k and decreasing l, with larger models showing a more marked divergence between known and forgotten categories. These findings highlight the sensitivity of probe performance and interpretability metrics to sampling configurations, underscoring the importance of systematically calibrating (k, l) pairs when designing probing protocols.
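A minimal sketch of the class-balance computation discussed above, under simplifying assumptions of our own: we treat a sample as "known" when the gold token sits within the model's top-k predictions and as "forgotten" when its gold-token probability falls in the bottom-l quantile, discarding samples in between. The exact selection rule in the paper may differ.

```python
# Sketch: split samples into "known" / "forgotten" under (k, l) thresholds
# and report the class-balance ratio. The selection rule here is a
# simplification we assume for illustration, not the paper's exact procedure.

def class_balance(samples, k, l):
    """samples: list of (gold_token_rank, gold_prob_quantile) pairs.

    A sample is "known" if the gold token's rank is within the top-k
    predictions, and "forgotten" if its probability quantile is <= l.
    Returns (known_count, forgotten_count, known/forgotten ratio).
    """
    known = sum(1 for rank, _ in samples if rank < k)
    forgotten = sum(1 for _, q in samples if q <= l)
    return known, forgotten, known / max(forgotten, 1)

# Four toy samples: (rank, quantile).
samples = [(3, 0.9), (700, 0.1), (10, 0.8), (2000, 0.05)]
known, forgotten, ratio = class_balance(samples, k=500, l=0.3)
print(known, forgotten, ratio)  # 2 known, 2 forgotten, ratio 1.0
```

Sweeping k and l over this function on a real sample set would reproduce the kind of class-balance heatmaps shown in Figures 12 and 13.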
[Figure 11: Accuracy gain over a random baseline from linear probes on (a) Gemma 2 2B and (b) Pythia 12B. Each cell displays the test/train accuracy above random for a given combination of (k, l); darker blue indicates greater accuracy gains.]

[Figure 12: Class-balance (known-to-forgotten sample ratio) heatmaps as a function of the (k, l) parameters for the Gemma 2 models: (a) Gemma 2 2B, (b) Gemma 2 9B.]
[Figure 13: Class-balance (known-to-forgotten sample ratio) heatmaps as a function of the (k, l) parameters for the Pythia models: (a) Pythia 70M, (b) Pythia 1.4B, (c) Pythia 6.9B, (d) Pythia 12B.]
arXiv:2505.21409v1 [cs.CL] 27 May 2025

RelationalFactQA: A Benchmark for Evaluating Tabular Fact Retrieval from Large Language Models

Dario Satriani, Enzo Veltri, Donatello Santoro
University of Basilicata, Potenza, Italy
name.surname@unibas.it

Paolo Papotti
EURECOM, Biot, France
paolo.papotti@eurecom.fr

Abstract

Factuality in Large Language Models (LLMs) is a persistent challenge. Current benchmarks often assess short factual answers, overlooking the critical ability to generate structured, multi-record tabular outputs from parametric knowledge. We demonstrate that this relational fact retrieval is substantially more difficult than isolated point-wise queries, even when individual facts are known to the model, exposing distinct failure modes sensitive to output dimensionality (e.g., number of attributes or records). To systematically evaluate this under-explored capability, we introduce RelationalFactQA, a new benchmark featuring diverse natural language questions (paired with SQL) and gold-standard tabular answers, specifically designed to assess knowledge retrieval in a structured format. RelationalFactQA enables analysis across varying query complexities, output sizes, and data characteristics. Our experiments reveal that even state-of-the-art LLMs struggle significantly, not exceeding 25% factual accuracy in generating relational outputs, with performance notably degrading as output dimensionality increases. These findings underscore critical limitations in current LLMs' ability to synthesize structured factual knowledge and establish RelationalFactQA as a crucial resource for measuring future progress in LLM factuality.

1 Introduction

Large Language Models (LLMs) have emerged as powerful tools capable of understanding and generating human-like text. Despite these advances, factuality – the ability of LLMs to provide responses that are truthful and faithful to the real-world knowledge encountered during pre-training – remains a persistent challenge [20, 33].
Effectively, a lack of factuality manifests as 'hallucination' — the generation of plausible yet incorrect information — a pervasive issue that is still observed in frontier models [10, 1]. This issue is particularly critical when LLMs are used in settings demanding high factual precision, such as medical information synthesis [44], financial reporting [13], scientific data analysis [48], or educational content generation [23].

To evaluate and improve factual performance, the research community has developed a variety of benchmarks. However, existing benchmarks predominantly focus on single-value factuality, where the expected output is a short text span or a single scalar value (e.g., a date, a named entity, or a numerical value) [49]. These tasks often emphasize reasoning complexity (e.g., multi-hop QA or ambiguous phrasing) [27, 50, 52] but overlook a fundamental aspect of factual competence: the ability of LLMs to generate long, coherent outputs directly from their internal parametric knowledge (i.e., the facts stored implicitly within the model's parameters), without retrieving external documents. In this work, we focus on structured, multi-record, tabular outputs to investigate the factuality of LLMs in synthesizing long sequences of facts. This task is motivated by two main arguments.

Preprint.

Output Size Matters. First, experiments highlight that retrieving tabular data from parametric memory presents a significantly greater challenge than recalling isolated cell values, even when the underlying facts are known to the model. For instance, prompting an LLM to return two attributes (e.g., name and state) for US counties yields near-perfect results. However, requesting additional attributes for the same set of counties (e.g., including county area)
introduces factual errors in the results:

County        State        Area (sq mi)
Los Angeles   California   4 751
Cook          Illinois     1 635
Maricopa      Arizona      8 500 ✗

Q: What is the area of Maricopa county?  A: 9 224 sq mi ✓

Crucially, if we then query the LLM for these specific incorrectly reported values in isolation (e.g., "What is the area of Maricopa county?"), the model returns the correct value, demonstrating that the error lies in the generation process, not in the absence of the underlying factual knowledge. Results show that the accuracy of retrieving a specific attribute (e.g., state) degrades linearly (from 1.0 to 0.2) as the total number of concurrently requested attributes increases from one to fifty, regardless of the target attribute's position in the schema. These findings underscore that the structured, multi-attribute retrieval of factual data is not merely an extension of single-fact recall but a distinct capability with unique failure modes. Moreover, while it is hard to quantify the quality of the output precisely and at fine granularity in unstructured generation tasks [15], structured data allows pointwise comparison at the single-fact (cell) level.

Increasing Importance of Tabular Output. Second, we argue that the structured factual retrieval capability of LLMs is both under-explored and essential. Several tasks require not just isolated facts about common world knowledge, but the generation of relational data: lists of entities, comparisons, and collections of items satisfying specific conditions [36, 29, 43, 41]. This requirement has been reported in sociology [45], business use cases [53], medical diagnosis [4], and financial use cases [3]. Obtaining tabular data is also increasingly relevant for user-facing applications, such as generating comparative tables of e-commerce products or structuring personalized trip itineraries [14, 47]. Yet, current benchmarks fall short in measuring this dimension.
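The cell-level comparison that structured outputs enable can be sketched as a simple scoring function. This is our own illustration, not the benchmark's official metric: exact string matching and row alignment between predicted and gold tables are simplifying assumptions (a real scorer would likely normalize values and match rows by key).

```python
# Sketch: score a model-generated table against a gold table.
# A tuple counts as correct only if it appears verbatim in the gold table;
# cell accuracy gives partial credit by comparing aligned rows cell by cell.
# Exact string equality and positional row alignment are assumptions.

def table_accuracy(predicted, gold):
    """predicted, gold: lists of equal-length tuples of strings.
    Returns (cell_accuracy, tuple_accuracy)."""
    total_cells = sum(len(row) for row in gold)
    gold_set = set(gold)
    # Tuple-level: a predicted row is correct if it matches a gold row exactly.
    tuple_hits = sum(1 for row in predicted if row in gold_set)
    # Cell-level: compare rows positionally, cell by cell.
    cell_hits = sum(p == g
                    for pred_row, gold_row in zip(predicted, gold)
                    for p, g in zip(pred_row, gold_row))
    return cell_hits / total_cells, tuple_hits / len(gold)

gold = [("Los Angeles", "California", "4751"),
        ("Cook", "Illinois", "1635"),
        ("Maricopa", "Arizona", "9224")]
pred = [("Los Angeles", "California", "4751"),
        ("Cook", "Illinois", "1635"),
        ("Maricopa", "Arizona", "8500")]   # one wrong cell

cell_acc, tuple_acc = table_accuracy(pred, gold)
print(cell_acc, tuple_acc)  # 8/9 cells correct, 2/3 tuples correct
```

The gap between cell accuracy and tuple accuracy in this toy example mirrors the county illustration above: a single wrong cell invalidates an entire tuple even though most individual facts are correct.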
Existing datasets that do contain tabular data focus on its role as contextual input to the LLM, i.e., as a corpus for question answering or fact checking [9, 35, 54, 2].

We define the Relational Fact Retrieval task as follows: given a query, the LLM must generate a structured table (rows and columns) containing factual information drawn purely from its parametric memory, prohibiting the use of external tools like web browsers during generation. To address the need for evaluating this capability, we introduce RelationalFactQA, a new benchmark designed to test LLMs' ability to return factual knowledge in relational (i.e., tabular) form in a closed-book setting. RelationalFactQA probes this capability across several dimensions. The benchmark contains triples with the natural language (NL) question, the corresponding SQL script, and the expected answer in tabular format. For its creation, we combine manually crafted questions (for linguistic variety) with systematically generated ones whose query complexity (e.g., specific SQL constructs) is controlled. Expected output tables span from small ones, with few tuples and attributes, to large ones. These dimensions enable analysis of LLMs' performance across different logical operations (e.g., aggregates, filtering), data types (e.g., numerical, categorical), and retrieval methods (prompts with NL questions vs. SQL queries). Through extensive experimentation, we find that although larger models show
improvement, the ability to produce correct structured answers remains limited, especially as the number of tuples and attributes increases or the query involves less common facts and numerical conditions. Moreover, we observe that even state-of-the-art models rarely exceed 25% factual accuracy on our benchmark. To summarize, this paper makes the following contributions:

• Task formulation. We introduce Relational Fact Retrieval — the closed-book generation of multi-tuple, multi-attribute tables directly from an LLM's parametric memory — and clarify how it differs from single-fact recall and context-based table QA.

• RelationalFactQA benchmark. We release a 696-question dataset covering nine knowledge domains, each triple-annotated with a natural-language query, its equivalent SQL statement, and a fully verified gold table (avg. 27 rows × 5 attributes).

• Hybrid construction pipeline. Our semi-automatic workflow unifies (i) manual curation from three existing corpora and (ii) YAGO-driven synthetic tables, yielding controlled variation in schema size, output size, and query complexity.

• Comprehensive empirical study. Nine LLMs (7B–235B parameters) are benchmarked under three retrieval techniques (NL, SQL, Chain-of-Thought). Despite parameter scaling, no model exceeds 0.25 in tuple accuracy; performance degrades linearly with the number of requested attributes. Code, prompts, and data will be open-sourced to drive future progress.

Our findings lay the groundwork for future research on factuality in LLMs, and position RelationalFactQA as a valuable resource for tracking progress on this critical capability.

Table 1: Closed-book QA datasets characteristics¹. Prior datasets have outputs with approximately one tuple and one attribute. In contrast, RelationalFactQA demands complex outputs, with an average of 27 tuples and 5.3 attributes per answer.
Dataset                     Total # Questions   Avg # Output Tuples   Avg # Output Attributes   Avg # Output Tokens
WikiSQL [54]                56,355              1.08                  1.00                      3.22
WikiTableQuestions [35]     14,149              1.08                  1.00                      2.80
Open-WikiTable [24]         53,819              1.08                  1.00                      3.23
TAT-QA [55]                 13,215              1.19                  n.a.                      6.63
TruthfulQA [28]             790                 1.00                  n.a.                      10.49
TriviaQA (unfiltered) [22]  87,622              1.00                  n.a.                      6.39
NQ-Open [25, 26]            87,925              1.22                  n.a.                      4.30
SimpleQA [49]               4,326               1.00                  n.a.                      4.20
RFQA                        696                 26.94                 5.32                      357.09

2 Related Work

The evaluation of factual accuracy in LLMs has led to the development of diverse benchmarks [8]. However, existing work evaluates an LLM's ability to return short-span answers, rather than complex, structured relational data. As motivated in Section 1, the ability to generate such larger, structured outputs presents distinct challenges beyond single-fact recall, involving sustained coherence and factual consistency across multiple data points [30, 18]. Table 1 provides a comparative overview of output characteristics across several closed-book QA datasets and RelationalFactQA.

Factuality Evaluation Benchmarks. A significant body of work focuses on evaluating the factual correctness of LLM generations. Benchmarks such as TriviaQA, NQ-Open, and TruthfulQA assess LLMs' ability to answer questions with short, often single-entity or single-value, factual statements [49]. While these are crucial for gauging general world knowledge, they do not probe the model's capacity to synthesize answers as structured relations. As is evident in Table 1, the expected outputs in these datasets typically consist of a single tuple and a single attribute. Other efforts such as FactScore or HaluEval variants aim to quantify hallucination rates [20], but again,
within the context of single-statement claims rather than structured relational outputs. Despite these varied evaluation efforts, the fundamental challenge of LLM hallucination persists as a critical concern [10, 1].

Table Question Answering and Reasoning. Several benchmarks, such as WikiSQL [54], WikiTableQuestions [35], and TabFact [9], involve tabular data. However, these benchmarks provide the relevant table(s) as input to the LLM, tasking it with understanding, reasoning over, or extracting information from the provided context [51]. In contrast, RelationalFactQA operates in a closed-book setting, where the LLM retrieves the tabular answer from its parametric knowledge. This shifts the evaluation from context-based reasoning to parametric relational knowledge retrieval.

Text-to-SQL. While RelationalFactQA uses SQL as one input modality to query the LLM's knowledge, our focus is not on the correctness of SQL generation itself, which is the primary goal of Text-to-SQL benchmarks [52, 27, 31, 19, 42]. Instead, we evaluate the factual accuracy and completeness of the tabular data returned by the LLM in response to a query (be it in natural language or SQL). In building our benchmark, we manually filter examples from two Text-to-SQL datasets and adapt them to the Relational Fact Retrieval task.

1 Obtained from the train set for WikiSQL, WikiTableQuestions, Open-WikiTable, TAT-QA, and NQ-Open.

Knowledge Probes for LLMs. Prior research has explored using “knowledge probes” (e.g., LAMA [37]) to assess what factual information is stored in an LLM's parameters, typically by prompting models to fill in missing tokens in cloze-style statements (e.g., “Paris is the capital of [MASK]”). These probes generally target single, atomic facts [38]. RelationalFactQA extends this concept from single-fact elicitation to probing for multi-tuple, multi-attribute relational structures.
In summary, while existing benchmarks address various facets of LLM factuality, RelationalFactQA fills a critical gap by specifically evaluating LLMs' ability to act as “parametric databases,” retrieving factual information, as opposed to merely plausible data [5], in a tabular format.

3 The Benchmark

Task Definition and Problem Formulation. We define the task of Relational Fact Retrieval as the generation of structured, multi-record, multi-attribute tabular data by an LLM in response to a query, relying exclusively on the model's internal parametric knowledge. Formally, the problem is formulated as follows:

• Input: The input is a query q, which can be expressed either in natural language (NL) or as a Structured Query Language (SQL) statement. The query q specifies the factual information to retrieve and the desired output relational structure.

• Output: The desired output is a table T̂. This table is characterized by a schema S = {A1, A2, ..., Ak}, representing k attributes (columns), and a set of n tuples (rows), where n ≥ 0. Each tuple ti ∈ T̂ is an ordered list of k cell values (vi1, vi2, ..., vik), corresponding to the attributes in S.

LLMs are instructed to operate in a closed-book evaluation setting and, where applicable, are technically restricted, e.g., by disabling access to external tools, web-browsing functionalities, or code execution environments via API parameters. The closed-book setting is intentional: in retrieval-augmented generation (RAG) or tool-assisted workflows, the factual quality of outputs depends not only on the
model's internal knowledge, but also on external factors such as retrieval accuracy, context formatting, or prompt design. These confounding variables make it difficult to isolate the LLM's intrinsic factual competence. While retrieval-based methods may improve factual coverage, we hypothesize that the challenges observed in the closed-book setting also persist in open-book scenarios.

Dataset Construction. To build the RelationalFactQA dataset, we combine manual curation and semi-automatic generation. In the manual pipeline, we consider 44 datasets from three existing corpora of examples (Spider [52], Bird [27], and Galois [41]) that contain natural language (NL) and SQL query pairs along with their underlying structured databases. We manually review each dataset in two steps. First, we identify the databases whose schema and entities are present on Wikipedia; this is important to ensure that the examples are within the knowledge scope of an LLM2. Second, for each database, we retain only the NL queries that reference factual, world-knowledge content that is temporally stable, deliberately excluding subjective or dynamic information such as user reviews or prices. The corresponding SQL queries and their tabular outputs are then included in the benchmark.

For the semi-automatic pipeline, we adopt a two-step process: first, we generate tables to serve as query targets; then, we construct the corresponding NL question–SQL query pairs. To ensure that the table schemas and entities are likely to be known by LLMs, we extract data from the YAGO 4.5 knowledge base [46], a structured resource derived from Wikidata. YAGO is organized around RDF triples; each connects a subject to an object through a predicate, e.g., “Trump, president, USA” or “NYC, population, 8.2M”. To obtain tables, we select seven YAGO types (high-level classes, such as City and Country) and reorganize the triples to collect multiple attributes for those entities (such as area in square km) [7].
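Concretely, this triple-to-table reorganization can be sketched as follows. This is a minimal illustration with toy triples; the function name and data layout are our own assumptions, and the actual extraction from YAGO 4.5 dumps is more involved:

```python
from collections import defaultdict

# Toy RDF triples (subject, predicate, object); real YAGO triples use full IRIs.
triples = [
    ("NYC",    "locatedIn",  "USA"),
    ("NYC",    "population", 8_200_000),
    ("NYC",    "areaKm2",    778.2),
    ("Boston", "locatedIn",  "USA"),
    ("Boston", "population", 650_000),
]

def pivot_triples(triples):
    """Group triples by subject so that each predicate becomes a column."""
    rows = defaultdict(dict)
    for subject, predicate, obj in triples:
        rows[subject][predicate] = obj
    # The schema is the union of predicates observed for this entity type.
    schema = sorted({p for _, p, _ in triples})
    table = [
        [subject] + [attrs.get(col) for col in schema]
        for subject, attrs in rows.items()
    ]
    return ["entity"] + schema, table

header, table = pivot_triples(triples)
print(header)  # ['entity', 'areaKm2', 'locatedIn', 'population']
for row in table:
    print(row)
```

Missing predicates simply yield empty cells (here `None`), which mirrors how knowledge-base pivots produce sparse wide tables.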
Using an automatic generator tool, Qatch [34], we then create the corresponding NL–SQL pairs for these YAGO-derived tables. To ensure controlled complexity for this segment of the benchmark, the Qatch generation strategy deliberately focuses on SELECT queries. These queries are designed to systematically vary in two main dimensions: the number of projected attributes (columns) and the complexity of selection, achieved by altering the number and nature of predicates in the WHERE clause. Therefore, the Qatch-generated queries predominantly feature projection and filtering operations, allowing for a targeted assessment of these core capabilities.

2 While entities have different popularity online, we experimentally verified that this dimension does not impact our experiments.

While the full RelationalFactQA benchmark incorporates a wider range of SQL operators, including JOIN and AGGREGATE functions, these more complex operators are sourced from the manually curated datasets (Spider, Bird, Galois). These human-authored queries contribute crucial linguistic and structural diversity to the benchmark. The rationale for the focused Qatch generation approach, emphasizing projection and selection, is that the primary challenge lies in the LLM's ability to accurately retrieve the fundamental base data; if this initial extraction is flawed, any subsequent, more complex operations (such as the joins or aggregations found in other parts of the benchmark) would inherently build upon
incorrect information. As the tool occasionally produces syntactically correct but semantically trivial or invalid queries, we manually remove such non-meaningful examples.

Finally, we perform targeted preprocessing steps to enhance consistency in the ground-truth data. For all date attributes, we extract the year component to ensure that any condition involving dates can be treated as a numerical comparison, rather than requiring models to process full date-type values. We also manually removed noisy tuples, such as instances where organizations were listed as Nobel Prize laureates instead of individuals. These actions ensure comparable outputs across samples, focusing the evaluation on fact retrieval capabilities.

Figure 1: RFQA dataset. Source distribution and distribution of query complexity (SQL operators).
(a) Source distribution: QATCH 71%, BIRD 11%, GALOIS 10%, SPIDER 8%.
(b) Query complexity distribution:

Type                          # Questions
SELECT without WHERE          10
WHERE numerical condition     148
WHERE categorical condition   294
WHERE mixed condition         294
AGGREGATIVE                   49
JOIN                          67
DISTINCT                      34
GROUP BY                      13
LIMIT                         11
ORDER BY                      17

Dataset Statistics. The RFQA benchmark comprises 696 (question, query, answer) triples. As reported in Figure 1(a), the majority of questions (71%) come from the Qatch pipeline, ensuring controlled complexity and coverage, while contributions from Bird (11%), Galois (10%), and Spider (8%) provide diverse, human-authored queries. This hybrid approach allows RFQA to cover a range of factual domains, including common entities typically found within an LLM's pre-training corpus. A key characteristic of RFQA is the size of its target outputs, designed to test an LLM's ability to generate structured relational data. As detailed in Table 1, ground-truth answers in RFQA contain an average of 357 tokens, specifically 26.94 tuples (rows) and 5.32 attributes (columns), for an average of 135.50 cells per table.
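The year-extraction preprocessing described earlier in this section can be sketched as follows. The paper does not publish this code, so the parsing rule (take the first four-digit run in the cell) and the function name are our assumptions:

```python
import re

def dates_to_years(rows, date_columns):
    """Replace full date values with their year component so that any
    date condition reduces to a plain numerical comparison."""
    year_re = re.compile(r"\b(\d{4})\b")
    cleaned = []
    for row in rows:
        row = dict(row)  # copy; leave the input untouched
        for col in date_columns:
            match = year_re.search(str(row.get(col, "")))
            if match:
                row[col] = int(match.group(1))
        cleaned.append(row)
    return cleaned

rows = [{"laureate": "Marie Curie", "awarded": "1903-12-10"},
        {"laureate": "Linus Pauling", "awarded": "December 10, 1954"}]
cleaned = dates_to_years(rows, ["awarded"])
print(cleaned[0]["awarded"], cleaned[1]["awarded"])  # 1903 1954
```

Regex-based extraction handles both ISO dates and verbose date strings without a full date parser, which suffices once only the year is retained.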
The output dimensions exhibit considerable variability: the number of tuples ranges from a minimum of 1 to a maximum of 904, while attributes span from 1 to 9. This contrasts sharply with prior QA benchmarks, which typically expect single-tuple, single-attribute answers. The attribute types within RFQA tables also vary; on average, each target table schema consists of approximately 1.06 numerical attributes, 3.16 categorical attributes, and 4.26 attributes containing mixed (numerical and string) data types. The complexity of the retrieval task is also defined by the SQL constructs associated with each question. Figure 1(b) presents the distribution of SQL operators within RFQA. The distribution reflects our focus on evaluating the retrieval of data under diverse projection and filtering requirements.

4 Experimental Settings

Retrieval Methods. We evaluate LLMs on RFQA using three iterative methods:

• NL. The LLM is directly prompted with a natural language query q, requesting the model to return tabular results based on its internal knowledge.

• SQL. Similar to the NL approach, but the query q is expressed in SQL. The model is expected to interpret the SQL semantics and return the corresponding tabular data.

• CoT. Given an SQL query q, a Chain-of-Thought approach [41] decomposes the query execution into two steps: (1) the LLM is prompted to retrieve the relevant base data (i.e., a broader result set), and (2) relational algebra operations are applied in memory on the
intermediate output to produce the final filtered result. This method aims to improve retrieval accuracy by breaking queries into simpler tasks.

In all methods, the LLM is prompted with the query q and the corresponding output schema s, expressed in JSON Schema format.

Output Processing. The prompt includes instructions for the model to return results in valid JSON. If no answer is found, the model is instructed to return an empty JSON object. Each strategy is applied iteratively: after the initial prompt, if the model returns a non-empty result, it is prompted again to return additional data, until the model returns an empty JSON. Prompt templates used in the experiments are detailed in the Appendix. Since LLMs do not always produce outputs in valid JSON format, we apply heuristics to extract and recover structured responses. Our approach begins by identifying all text enclosed between “{” and “}” or “[” and “]”. If this content forms a valid JSON object, we parse it directly and include it in the result. If the content is invalid, we re-prompt for correct formatting or attempt to repair common issues such as syntax errors or truncation. If recovery fails, the response is treated as invalid. Further details on recovery strategies are in the Appendix.

Models. We use open-source and proprietary LLMs. To enhance reproducibility and obtain deterministic results, we set the temperature to 0.0. We adopt the following open-source models hosted on Together.AI: Mistral-7B [21], Qwen2.5 and Qwen3 [39], LLama 3 (covering versions 3.1 and 3.3) [32], Gemma 2 [16], and DeepSeek-LLama3 (the base for the reasoning model DeepSeek R1 Distill LLama) [12]. As proprietary models, we use GPT-4.1 and GPT-4.1 mini [33].

Metrics. To evaluate the factuality of each LLM, we measure the quality of the produced responses. Each example in the RFQA dataset consists of a query q (either NL or SQL) and the corresponding expected set of tuples t_exp (the ground truth).
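The scoring pipeline described in the remainder of this section (value normalization, approximate cell matching, cell-level F1, and tuple similarity) can be sketched as follows. This is our own approximation: all function names are ours, and difflib's similarity ratio stands in for the paper's edit-distance threshold:

```python
import unicodedata
from difflib import SequenceMatcher

def normalize(value):
    """Strip accents, lowercase, expand 1K/1M shorthand, parse numbers."""
    s = str(value).strip().lower()
    s = "".join(ch for ch in unicodedata.normalize("NFKD", s)
                if not unicodedata.combining(ch))
    for suffix, factor in (("k", 1_000), ("m", 1_000_000)):
        if s.endswith(suffix) and s[:-1].replace(".", "", 1).isdigit():
            return float(s[:-1]) * factor
    try:
        return float(s.replace(",", ""))
    except ValueError:
        return s

def cells_match(expected, actual):
    """Approximate match: +/-10% tolerance for numbers, ~90% string similarity."""
    e, a = normalize(expected), normalize(actual)
    if isinstance(e, float) and isinstance(a, float):
        return abs(e - a) <= 0.10 * abs(e) if e else a == e
    if isinstance(e, float) or isinstance(a, float):
        return False  # type mismatch after normalization
    return SequenceMatcher(None, e, a).ratio() >= 0.90

def cell_f1(t_exp, t_act):
    """Cell-level F1: match cells greedily, ignoring tuple structure."""
    exp_cells = [c for row in t_exp for c in row]
    act_cells = [c for row in t_act for c in row]
    if not exp_cells or not act_cells:
        return 0.0
    remaining, matched = list(act_cells), 0
    for c in exp_cells:
        for i, a in enumerate(remaining):
            if cells_match(c, a):
                matched += 1
                del remaining[i]
                break
    prec, rec = matched / len(act_cells), matched / len(exp_cells)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def tuple_similarity(t_exp, t_act):
    """Fraction of expected tuples fully matched by some returned tuple."""
    hits = sum(
        any(len(row) == len(cand)
            and all(cells_match(e, a) for e, a in zip(row, cand))
            for cand in t_act)
        for row in t_exp)
    return hits / len(t_exp) if t_exp else 0.0

print(cell_f1([["Paris", "2.1M"]], [["paris", "2100000"]]))           # 1.0
print(tuple_similarity([["Paris", "2.1M"]], [["paris", "2100000"]]))  # 1.0
```

Note how normalization makes “2.1M” and “2100000” compare equal, so formatting differences do not count as factual errors.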
To evaluate an LLM, we execute the query q on it and collect the resulting set of tuples t_act. To assess the quality of the result, we compare the tuple sets t_exp and t_act. We adopt two metrics commonly used to benchmark queries executed by LLMs [34]:

• F1: We compute the F1 score over the set of cells in t_act with respect to those in t_exp. This metric evaluates performance at the cell level, disregarding tuple structure and focusing purely on the correctness of returned values.

• TS (Tuple Similarity): We measure the fraction of tuples in t_exp that also appear in t_act, comparing tuples holistically. A Tuple Similarity score of 1.0 indicates that t_exp and t_act share the same schema, cardinality, and cell values. This metric is stricter than F1, as it requires correct grouping of values within tuples, not just correct individual values.

To account for superficial differences in formatting (e.g., “1K” vs. “1000”), we normalize all cell values in both t_act and t_exp before evaluation. This step mitigates false negatives caused by representational variations. The normalization process involves the following steps: (i) Replacing accented characters with their unaccented equivalents (e.g., “é” → “e”); (ii) Converting all characters to
lowercase; (iii) Converting shorthand numeric notations such as “1K” or “1M” into the corresponding numeric values (e.g., “1K” → 1000); (iv) Standardizing numeric formats (e.g., converting “1.000,5” and “1,000.5” into a consistent representation).

Moreover, since LLMs may produce answers that are close, but not identical, to the ground truth (e.g., “Bill Clinton” vs. “Bill J. Clinton”), we incorporate approximate matching. Specifically, we use Edit Distance [40] with a threshold of 10% relative to the length of the expected string. For numerical values, we apply a tolerance of ±10% relative to the expected number. To compare two tuples t_a and t_e, we evaluate each pair of corresponding cells based on their shared attribute, using the same comparison strategy as defined previously for the cells. While our current implementation uses simple, efficient matching rules, more advanced approaches such as entity resolution [11, 6] or tuple-level instance comparison [17] could be applied for more nuanced matching, but they require manual user configuration and thus cannot easily serve as a metric.

5 Results

We organize our evaluation around three main research questions.

1. Factuality. To what extent can LLMs generate factual tables based on their internal knowledge?
2. Extraction Techniques. Are LLMs more effective at generating tabular responses from SQL queries compared to NL questions? Does CoT help in getting better results?
3. Query complexity. Does LLMs' performance depend on the schema and the query complexity?

Table 2: Benchmark Results. F1 and Tuple Similarity (TS) measured for all LLMs in our evaluation. AVG is the average of F1 and TS. LLMs are ordered by increasing number of parameters.
Method  Metric  Mistral 7B  QWEN 2.5-7B  LLama 3.1-8B  GPT 4.1 mini  Gemma 2-9B  LLama 3.3-70B  DeepSeek 70B  GPT 4.1  QWEN 3-235B
NL      F1      0.44        0.487        0.481         0.537         0.557       0.609          0.606         0.654    0.613
NL      TS      0.076       0.085        0.155         0.115         0.107       0.149          0.15          0.247    0.225
NL      AVG     0.258       0.286        0.318         0.326         0.332       0.379          0.378         0.45     0.419
SQL     F1      0.346       0.459        0.332         0.417         0.571       0.62           0.6           0.388    0.595
SQL     TS      0.042       0.079        0.11          0.055         0.123       0.155          0.142         0.096    0.185
SQL     AVG     0.194       0.269        0.221         0.236         0.347       0.387          0.371         0.302    0.39
CoT     F1      0.477       0.503        0.585         0.638         0.594       0.677          0.646         0.693    0.651
CoT     TS      0.09        0.091        0.127         0.12          0.106       0.157          0.168         0.174    0.228
CoT     AVG     0.284       0.297        0.356         0.379         0.35        0.417          0.407         0.433    0.439

Exp-1. Overall Performance. We evaluate all LLMs on the RFQA dataset and report their performance using the two quality metrics, F1 and TS. To provide a single, comparable measure of factual accuracy across models, we also compute the average of F1 and TS. The results in Table 2 reveal that increasing the number of model parameters generally leads to improved quality (and thus factuality) across all retrieval methods (NL, SQL, and CoT). However, the task remains inherently difficult. While larger models, such as Qwen 3, achieve F1 scores above 0.6, this improvement does not translate into accurate tuple-level results. The best TS score is only 0.247, obtained by GPT 4.1, highlighting that even frontier models often return wrong values in output tuples. This experiment also shows that querying with NL has an edge over SQL in all models,
while the CoT approach leads to improved retrieval with all LLMs except GPT 4.1.

Takeaways for questions (1) and (2): LLMs still struggle to consistently retrieve structured factual knowledge as complete output tuples. NL slightly outperforms SQL as a retrieval method, while CoT provides benefits in most settings.

Exp-2. Performance by Attribute Type. To investigate the third research question, we exploit the metadata used to annotate each query q in RFQA. In this experiment, we analyze model performance based on the type of attributes in the query output. We divide the queries into two categories: those that return only numerical values and those that return only categorical values. We use the average of the F1 and TS scores as the metric. The results in Table 3 show that extracting categorical values is generally easier for small and medium LLMs than retrieving numerical ones. However, larger models perform better on numerical queries than on categorical ones when using SQL and CoT.

Exp-3. Performance by Output Size. We focus on the top-3 performing LLMs and analyze how their performance varies with the size of the expected output. We group the results according to: a) the number of attributes requested in the query, and b) the overall output size, measured as the number of expected cells (#rows × #attributes). We use the TS metric, which accounts for both the structure and completeness of the returned data. Figure 2 summarizes our findings. On the left side, we show how quality decreases as the number of requested attributes increases, indicating that LLMs struggle more when asked to retrieve wider tables. On the right side, we plot the TS score against the total number of expected cells. The trend

Table 3: Quality measured as the AVG of F1 and TS w.r.t. the type of output attributes.
Method  Type    Mistral 7B  QWEN 2.5-7B  LLama 3.1-8B  GPT 4.1 mini  Gemma 2-9B  LLama 3.3-70B  DeepSeek 70B  GPT 4.1  QWEN 3-235B
NL      Num     0.120       0.170        0.134         0.167         0.225       0.225          0.215         0.511    0.393
NL      Cat     0.211       0.236        0.301         0.339         0.301       0.391          0.370         0.515    0.397
NL      Diff %  +76%        +39%         +125%         +103%         +34%        +74%           +72%          +1%      +1%
SQL     Num     0.065       0.178        0.164         0.250         0.255       0.267          0.276         0.302    0.410
SQL     Cat     0.107       0.212        0.249         0.182         0.297       0.416          0.397         0.262    0.345
SQL     Diff %  +65%        +19%         +52%          -27%          +16%        +56%           +44%          -15%     -16%
CoT     Num     0.263       0.239        0.222         0.333         0.340       0.331          0.402         0.530    0.439
CoT     Cat     0.254       0.262        0.307         0.395         0.293       0.432          0.398         0.464    0.383
CoT     Diff %  -3%         +10%         +38%          +19%          -14%        +31%           -1%           -14%     -13%

Figure 2: TS results for LLama 3.3, GPT-4.1, and QWEN 3, with all retrieval techniques, w.r.t. the expected output size measured as the number of attributes (left) and cells (right).

remains consistent: as the number
of rows and columns grows, the model's ability to return accurate, complete tabular data declines.

Table 4: Quality measured as the AVG of F1 and TS w.r.t. query complexity.

                              LLama 3.3-70B         GPT 4.1               QWEN 3-235B
Type                          NL     SQL    CoT     NL     SQL    CoT     NL     SQL    CoT
SELECT without WHERE          0.599  0.646  0.595   0.845  0.399  0.995   0.787  0.582  0.694
WHERE numerical condition     0.365  0.36   0.387   0.39   0.197  0.382   0.42   0.384  0.444
WHERE categorical condition   0.383  0.393  0.437   0.466  0.252  0.453   0.414  0.397  0.444
WHERE mixed condition         0.376  0.384  0.414   0.444  0.24   0.424   0.414  0.387  0.435
AGGREGATIVE                   0.237  0.26   0.254   0.447  0.229  0.353   0.366  0.364  0.259
JOIN                          0.143  0.184  0.157   0.461  0.126  0.091   0.356  0.222  0.139
DISTINCT                      0.344  0.429  0.354   0.536  0.172  0.319   0.446  0.247  0.342
GROUP BY                      0.234  0.228  0.461   0.325  0.166  0.26    0.181  0.266  0.239
LIMIT                         0.108  0.189  0.517   0.417  0.399  0.355   0.146  0.162  0.206
ORDER BY                      0.174  0.206  0.424   0.441  0.349  0.437   0.222  0.179  0.376

Exp-4. Query Complexity. Table 4 provides a breakdown of performance w.r.t. query complexity. We observe that as query complexity increases, the quality of the generated responses tends to decrease. Simple queries such as SELECT without WHERE consistently achieve the highest scores, while complex constructs such as JOIN, AGGREGATE, or multi-condition WHERE clauses yield substantially lower results across all models and retrieval methods. In particular, the JOIN operator represents a notable challenge: scores are low for all models, especially in the CoT setting, as it does not yet support joins over multiple tables. Despite its limitations, the CoT strategy demonstrates meaningful gains for several complex operations. This highlights the benefit of breaking down query execution into intermediate reasoning steps. Finally, certain operators such as LIMIT and ORDER BY appear systematically difficult for all models and prompting strategies.
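The two-step CoT strategy whose gains are discussed above (retrieve a broad base relation first, then apply the relational operator in memory) can be sketched as follows with a stubbed model; the function name, prompts, and toy data are ours, not the authors':

```python
# Step 1 asks the LLM for the broad base relation; step 2 applies the
# relational-algebra operation locally instead of asking the model to do it.

def cot_execute(llm, base_question, predicate, order_key=None, limit=None):
    """Two-step CoT execution: elicit base tuples, then filter/sort/limit."""
    rows = llm(base_question)                   # step 1: broad retrieval
    rows = [r for r in rows if predicate(r)]    # WHERE applied locally
    if order_key is not None:
        rows.sort(key=order_key)                # ORDER BY applied locally
    if limit is not None:
        rows = rows[:limit]                     # LIMIT applied locally
    return rows

# Usage with a stub standing in for an LLM answering from parametric memory:
fake_llm = lambda q: [("Tokyo", 37.4), ("Delhi", 31.0), ("Rome", 4.3)]
result = cot_execute(fake_llm,
                     "List cities and their metro population (millions).",
                     predicate=lambda r: r[1] > 5,
                     order_key=lambda r: -r[1],
                     limit=1)
print(result)  # [('Tokyo', 37.4)]
```

Executing the operator deterministically in memory explains why CoT helps on LIMIT, ORDER BY, and GROUP BY, and why it cannot help on JOIN, which would require eliciting and aligning several base relations.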
These constructs require precise handling of position and ordering in the output tuples, capabilities that autoregressive models struggle to maintain as the result set grows.

Takeaway for question (3): LLM performance is significantly influenced by the structure of the target schema. Both attribute type and output size are key factors in determining LLM effectiveness for tabular factual retrieval. Query complexity also has a significant impact on LLM performance.

Results Discussion. The challenges observed in generating extensive and accurate tabular data from parametric memory resonate with known limitations of LLMs in long-sequence generation. While issues such as maintaining thematic coherence [30], mitigating factual drift [20], and managing error propagation in autoregressive systems [18] are recognized in tasks involving lengthy free-form text, the generation of tabular outputs magnifies these problems. Specifically, the dual axes of table “size”, the number of rows (tuples) and the number of columns (attributes), impose distinct pressures on the model's generative capabilities. Our findings suggest that the demand for concurrent retrieval and precise alignment of numerous facts strains the model's effective “working memory”, i.e., its ability to maintain sustained attention to all constraints of the query [30]. The fact that LLMs often correctly retrieve individual facts in point-wise queries (e.g., the area of a specific county that is reported incorrectly in a larger
table) underscores that the bottleneck is frequently not an absence of the underlying factual knowledge. Instead, the difficulty lies in the process of composing the individual pieces of information into a larger relational structure. This distinction points towards limitations in the architectural or learned capabilities for synthesis from parameters, rather than simply gaps in memorized knowledge. The dense factual requirement of tabular data, where each cell represents a correct assertion, and the inflexible nature of its structural integrity, make it a valuable testbed for these aspects of LLM performance, revealing failure modes that are less explicitly quantifiable in unstructured generation tasks [15].

6 Conclusions

RelationalFactQA fills a gap in the factuality landscape by probing LLMs' ability to act as parametric databases: given only a natural-language question or SQL query, a model must assemble multi-row, multi-attribute tables directly from its internal weights. Our experiments, covering nine LLMs and three querying modalities, show three consistent trends:

• Scale helps, but does not solve the problem. Even the strongest systems score below 0.25 in tuple accuracy, with quality falling sharply as the requested table widens or lengthens.

• Structure amplifies failure modes. Errors that remain latent in point-wise QA become evident when multiple cells must be emitted coherently.

• Prompting matters. Chain-of-Thought decomposition improves cell-level recall, yet fails to repair tuple misalignment and JOIN reasoning.

These results underscore that current LLMs retain abundant factual fragments, but lack the mechanisms to reliably compose them into relational form. We release the data, evaluation suite, and prompting templates to support research on (i) architecture changes that improve structured recall, (ii) inference-time strategies for tuple alignment, and (iii) multilingual, temporal, and bias-aware extensions of the task.
We hope RelationalFactQA becomes a key resource for measuring progress toward LLMs that are not just eloquent, but also factual.

Table 5: Key limitations of RelationalFactQA.

Aspect: Temporal coverage.
  Current scope: Benchmark built from static snapshots; fast-changing facts intentionally excluded.
  Implication: Cannot assess models' ability to reason over time-dependent knowledge (e.g., “current GDP”, “latest mayor”).

Aspect: Linguistic & cultural breadth.
  Current scope: Tables and questions sourced almost exclusively from English-language, Western-centric resources.
  Implication: Reported performance may not generalize to other languages or under-represented knowledge domains, introducing geographic-cultural bias.

Aspect: Evaluation granularity.
  Current scope: Cell-level scoring uses exact/approximate string and numeric matching with basic normalization.
  Implication: Semantically correct but lexically different answers (synonyms, alternative units, name variants) can be penalized, under-estimating true capability.

Limitations. Table 5 summarizes the principal limitations of the current benchmark and outlines how each one affects our results.

References

[1] All About AI. AI hallucination report 2025, 2025. URL https://www.allaboutai.com/resources/ai-statistics/ai-hallucinations/. (pp. 1 and 3)

[2] R. Aly, Z. Guo, M. S. Schlichtkrull, J. Thorne, A. Vlachos, C. Christodoulopoulos, O. Cocarascu, and G. Li. FEVEROUS: Fact extraction and VERification over unstructured and structured information. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. (p. 2)

[3] D. Balsiger, H.-R. Dimmler, S. Egger-Horstmann, and T. Hanne. Assessing large language models used for extracting table information from annual financial reports. Computers, 13(10), 2024. ISSN
2073-431X. doi: 10.3390/computers13100257. URL https://www.mdpi.com/2073-431X/13/10/257. (p. 2)

[4] A. Bisercic, M. Nikolic, M. van der Schaar, B. Delibasic, P. Lio, and A. Petrovic. Interpretable medical diagnostics with structured data extraction by large language models, 2023. URL https://arxiv.org/abs/2306.05052. (p. 2)

[5] V. Borisov, K. Sessler, T. Leemann, M. Pawelczyk, and G. Kasneci. Language models are realistic tabular data generators. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=cEygmQNOeI. (p. 4)

[6] M. Buoncristiano, G. Mecca, D. Santoro, and E. Veltri. Detective gadget: Generic iterative entity resolution over dirty data. Data, 2024. doi: 10.3390/data9120139. (p. 6)

[7] R. Cappuzzo, G. Varoquaux, A. Coelho, and P. Papotti. Retrieve, merge, predict: Augmenting tables with data lakes. CoRR, abs/2402.06282, 2024. doi: 10.48550/arXiv.2402.06282. URL https://doi.org/10.48550/arXiv.2402.06282. (p. 4)

[8] Y. Chang, X. Wang, J. Wang, Y. Wu, L. Yang, K. Zhu, H. Chen, X. Yi, C. Wang, Y. Wang, W. Ye, Y. Zhang, Y. Chang, P. S. Yu, Q. Yang, and X. Xie. A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol., 15(3), Mar. 2024. ISSN 2157-6904. doi: 10.1145/3641289. URL https://doi.org/10.1145/3641289. (p. 3)

[9] W. Chen, A. Lilley, J. Gu, Z. Qian, V. Zhong, K. Gimpel, and K. Toutanova. TabFact: A large-scale dataset for table-based fact verification. In International Conference on Learning Representations (ICLR), 2020. (pp. 2 and 3)

[10] N. Chowdhury, D. Johnson, V. Huang, J. Steinhardt, and S. Schwettmann. Investigating truthfulness in a pre-release o3 model. https://transluce.org/investigating-o3-truthfulness, April 2025. (pp. 1 and 3)

[11] V. Christophides, V. Efthymiou, T. Palpanas, G. Papadakis, and K. Stefanidis. An overview of end-to-end entity resolution for big data. ACM Comput. Surv., 53(6):127:1–127:42, 2021. (p. 6)

[12] DeepSeek AI.
DeepSeek LLM: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954, 2024. doi: 10.48550/arXiv.2401.02954. URL https://doi.org/10.48550/arXiv.2401.02954. (p. 6)

[13] M. M. Dong, T. C. Stratopoulos, and V. X. Wang. A scoping review of ChatGPT research in accounting and finance. International Journal of Accounting Information Systems, 55:100715, 2024. ISSN 1467-0895. doi: 10.1016/j.accinf.2024.100715. URL https://www.sciencedirect.com/science/article/pii/S1467089524000484. (p. 1)

[14] A. Elnashar, J. White, and D. C. Schmidt. Enhancing structured data generation with GPT-4o: Evaluating prompt efficiency across prompt styles. Frontiers in Artificial Intelligence, Volume 8, 2025. ISSN 2624-8212. doi: 10.3389/frai.2025.1558938. URL https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1558938. (p. 2)

[15] M. Gao, X. Hu, X. Yin, J. Ruan, X. Pu, and X. Wan. LLM-based NLG evaluation: Current status and challenges. Computational Linguistics, pages 1–28, 04 2025. ISSN 0891-2017. doi: 10.1162/coli_a_00561. URL https://doi.org/10.1162/coli_a_00561. (pp. 2 and 9)

[16] Gemma Team, Google. Gemma: Open models based on Gemini research and technology. arXiv preprint arXiv:2403.08295, 2024. doi: 10.48550/arXiv.2403.08295. URL https://doi.org/10.48550/arXiv.2403.08295. (p. 6)

[17] B. Glavic, G. Mecca, R. J. Miller, P. Papotti, D. Santoro, and E. Veltri. Similarity measures for incomplete database instances. In L. Tanca, Q. Luo, G. Polese, L. Caruccio, X. Oriol, and D. Firmani, editors, Proceedings 27th International Conference
|
https://arxiv.org/abs/2505.21409v1
[Figure 3: F1 quality of the attribute surname in different queries. (a) Experiment 1: Incremental Attribute Request. (b) Experiment 2: Attribute Position Sensitivity.]

A Motivation Example

As discussed in the introduction, extracting structured information from an LLM's internal knowledge in a tabular format poses unique and challenging problems, distinct from conventional, single-point natural language queries. To demonstrate and isolate these issues empirically, we designed a series of controlled experiments.
Specifically, we manually curated a compact yet informative dataset that includes detailed statistics for the 23 players who represented the Italian national football team at UEFA Euro 2016. For each player, the dataset contains their surname, date of birth, and jersey number used in the tournament. Additionally, it includes six per-season attributes (club, appearances, goals, assists, yellow cards, red cards) across nine different seasons. The final dataset thus comprises 23 rows and 57 attributes. All this information is publicly available (e.g., on Wikipedia) and is assumed to be part of the internal knowledge of the tested LLMs.

Unlike the broader evaluation presented in Section 5, the aim of these specific experiments is not to assess overall extraction accuracy, but rather to evaluate how performance degrades due to conditions inherent to tabular queries.

Experiment 1: Incremental Attribute Request. In the first experiment, we progressively increased the number of requested attributes, from 1 to 57. In the first iteration, the model was asked to return only the surname of the 23 Italian Euro 2016 players. In the second, both surname and date of birth were requested, and so on, up to all 57 attributes. Crucially, after each query, the F1 score was computed solely on the surname column. The goal was to assess whether the accuracy of a fixed attribute degrades as the number of requested
attributes increases. Ideally, the quality on surname should remain constant regardless of how many other attributes are queried. However, as shown in Figure 3(a), performance on the surname column degrades substantially. For instance, using GPT-4.1, quality drops from 1.0 when a single attribute is requested to 0.516 when 53 attributes are included. This experiment was conducted using the CoT prompting strategy, which yielded the best results in our main benchmark. Similar trends were observed across other LLMs and prompting strategies. These findings support the hypothesis that tabular queries induce specific degradation patterns that are not typically observed in more natural, conversational settings.

Experiment 2: Attribute Position Sensitivity. The second experiment evaluated whether the position of an attribute in the output affects its quality. A fixed query requesting 30 attributes was used as a base. We then generated 30 query variants, each placing the surname attribute at a different position (from 1st to 30th). Again, F1 scores were computed only on the surname column. As illustrated in Figure 3(b), the position of the attribute has a significant impact. Unlike the previous experiment, the size of the output remains constant across all variants; the only variable is the position of the surname attribute. Maximum accuracy is achieved when the attribute appears first, with performance declining as the attribute shifts toward the middle positions.

We repeated the same experiments for the running example introduced in Section 1, the US Counties dataset, which has a small number of attributes. We first measured the F1 score using only the <County, State> attribute pair. However, when we added the Area attribute to the returned attributes in the query, we observed a significant drop in F1. Most of the previously correct <County, State> pairs were no longer returned.
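The measurement loops behind both experiments can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: `query_llm` is a placeholder for the actual model call, the attribute list is abbreviated, and only the surname column is ever scored.

```python
def f1_score(predicted, expected):
    """Set-based F1 over the values of a single column."""
    pred, gold = set(predicted), set(expected)
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Abbreviated attribute list; the real experiment uses all 57 attributes.
ATTRIBUTES = ["surname", "date_of_birth", "jersey_number"]

def run_experiment_1(query_llm, gold_surnames):
    """Experiment 1: request 1..N attributes, always scoring only surname."""
    scores = []
    for k in range(1, len(ATTRIBUTES) + 1):
        prompt = ("List the 23 players of the Italian national team at UEFA "
                  f"Euro 2016 with attributes: {', '.join(ATTRIBUTES[:k])}. "
                  "Respond with JSON only.")
        rows = query_llm(prompt)  # expected to return a list of dicts
        predicted = [row.get("surname") for row in rows]
        scores.append((k, f1_score(predicted, gold_surnames)))
    return scores

def make_position_variants(attributes, target="surname"):
    """Experiment 2: same attribute set, `target` placed at every position."""
    others = [a for a in attributes if a != target]
    return [others[:pos] + [target] + others[pos:]
            for pos in range(len(attributes))]
```

Every variant produced by `make_position_variants` requests the same 30 attributes; only the position of surname changes, which isolates position sensitivity from output size.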
For example, only 51 out of 3246 expected tuples were retrieved, and 3 of them included hallucinated <County, State> combinations. Further inspection revealed that 40 of the 51 returned tuples also contained incorrect values for the Area attribute.

These preliminary experiments highlight structural phenomena that are intrinsic to tabular-style querying. They underline how even high-performing LLMs can suffer from specific degradation modes when asked to return structured multi-attribute outputs, making this task significantly more challenging than conventional QA scenarios.

B More Details on RelationalFactQA Dataset

RFQA Detailed Statistics. Table 6 presents detailed statistics of the RFQA dataset. It reports the minimum, maximum, first quartile (Q1), third quartile (Q3), and average values for the expected number of output tuples, attributes, and cells. Additionally, the attribute dimension is further analyzed by type, providing the same set of statistics separately for numerical attributes, categorical attributes, and mixed attributes (containing both numerical and categorical values).

The training set statistics of the RFQA dataset reveal a highly skewed and diverse structure. The average number of output tuples per instance is about 27, but the maximum value reaches 904, indicating that while most cases are relatively small, there are some significantly larger ones, pointing to a long-tailed distribution. Similarly, the number of output cells ranges from 1 to 4500, with an average of 135.5, suggesting considerable
variation in instance complexity. On the attribute side, most examples contain a mix of categorical (avg. 3.16) and mixed attributes (avg. 4.26), with relatively few numerical attributes (avg. 1.06). This implies that RFQA poses both relational and interpretative challenges, as models must handle heterogeneous data types and cope with a wide range of input sizes.

Table 6: Statistics of RFQA.

            Output   Output       Output   Attribute   Attribute     Attribute
Dimension   Tuples   Attributes   Cells    Numerical   Categorical   Mixed
MIN         1        1            1        1           1             2
Q1          1        2            3        1           2             5
AVG         26.94    5.32         135.50   1.06        3.16          4.26
Q3          8        9            45       3           6             9
MAX         904      9            4500     4           6             9

Figure 4 shows the distribution of topics covered by the queries in the dataset, spanning 27 distinct categories. The most frequent topics are related to automatically generated queries, including Nobel Prize winners, chemical elements from the periodic table, popular web search engines, video game publishers, and global airports. Additional topics are sourced from the Spider and Bird corpora.

[Figure 4: Topic distribution]

Metadata. RFQA also includes rich metadata that can be used to analyze queries in terms of their expected output size (rows, attributes, cells), the nature of the selected attributes (whether numerical, categorical, or mixed), and their complexity (e.g., presence of WHERE conditions involving numerical or categorical filters). Below, we describe the fields available in the RFQA dataset:

• QID: Unique identifier for each query instance.
• DATASET: Name of the source corpus. Possible values are: "bird", "galois", "spider1", and "qatch". The first three represent queries from existing corpora, while "qatch" refers to automatically generated queries.
• DB ID: Identifier of the associated database (i.e., the topic) for each query.
• SQL: SQL query in PostgreSQL syntax.
• QUESTION: Natural language (NL) version of the query.
• TUPLES: Expected number of output tuples (rows).
• ATTRS: Expected number of output attributes (columns).
• ATTR NUMERICAL: Number of expected numerical attributes.
• ATTR CATEGORICAL: Number of expected categorical attributes.
• TABLES: Number of tables referenced in the FROM clause.
• NUMERICAL CONDITIONS: Number of numerical conditions in the WHERE clause.
• CATEGORICAL CONDITIONS: Number of categorical conditions in the WHERE clause.
• AGGR: Set to 1 if the query includes an aggregation function, 0 otherwise.
• JOIN: Number of joins in the query.
• DISTINCT: Set to 1 if the query includes the DISTINCT operator, 0 otherwise.
• GROUP BY: Set to 1 if the query includes the GROUP BY operator, 0 otherwise.
• LIMIT: Set to 1 if the query includes the LIMIT operator, 0 otherwise.
• ORDER BY: Set to 1 if the query includes the ORDER BY operator, 0 otherwise.
• ATTR NUM&CAT: Sum of numerical and categorical attributes in the SELECT clause. Present only if both types are used.
• CON NUM&CAT: Sum of numerical and categorical conditions in the WHERE clause. Present only if both types are used.
• CELLS: Expected number of cells (i.e., total elements in the output table).
• BACKLINKS: Number of backlinks retrieved using the Wikipedia API.

The actual data for each NL-SQL query pair
is stored in the data folder, following the structure <dataset>/<db id>. To retrieve the expected results for a given query q, one can execute the corresponding SQL query on its associated database, which can be loaded from the provided data. In our experiments, all data are imported into a PostgreSQL database. The backlinks for each query are computed as follows: for each row of the result table t of a query q, the key value of the table is used to search for a Wikipedia page referencing the entity described by the current row. If the search is successful, the backlinks are directly extracted from the page using the Wikipedia API. The total number of backlinks for a given query is then computed as the average number of backlinks over the individual query results.

C Prompts and LLM Response Processing

The prompting strategy used is iterative and consists of two main steps: a start prompt, which instructs the LLM on the type of data to extract, and an iterative prompt, which guides the model to retrieve additional data if more is available. Figure 5 shows the prompts used for the natural language (NL) strategy; Figure 6 presents the prompt for the SQL-based strategy; and Figure 7 illustrates the prompt used for the Chain-of-Thought (CoT) strategy.

Start Prompt: NL Question. Respond with JSON only. Don't add any comments. Use the following JSON schema: jsonSchema.
  NL Question: the query in natural language
  jsonSchema: the schema of the tabular response translated into a JSON schema
Iterative Prompt: List more values if there are more, otherwise return an empty JSON. Respond with JSON only.

Figure 5: NL Prompt Syntax. Text in italics is injected from the given NL query and the expected JSON schema of the response.

Start Prompt: List the results of the SQL query: SQL. Respond with JSON only. Don't add any comments. Use the following JSON schema: jsonSchema.
  SQL: the query in SQL syntax
  jsonSchema: the schema of the tabular response translated into a JSON schema
Iterative Prompt: List more values if there are more, otherwise return an empty JSON. Respond with JSON only.

Figure 6: SQL Prompt Syntax. Text in italics is injected from the given SQL query and the expected JSON schema of the response.

First Prompt: Given the following query, populate the table with actual values. query: select attributes from table (where conditions). Respond with JSON only. Don't add any comments. Use the following JSON schema: jsonSchema.
  attributes: the set of attribute names of the table
  table: the table name
  conditions: the condition(s), if passed
  jsonSchema: the schema of the table translated into a JSON schema
Iterative Prompt: List more values if there are more, otherwise return an empty JSON. Respond with JSON only.

Figure 7: CoT Prompt Syntax. Text in italics is injected from the given SQL query. Values between parentheses are populated only if the condition(s) is given.

Handling Output JSON Errors. All the strategies ask the LLM to return the data in a
structured form that respects the prompted JSON format. We parse the response according to the required JSON. The most common issues in JSON parsing, and our corresponding handling methods, are the following:

• Malformed JSON syntax: This includes missing quotation marks or improperly formatted numbers. In such cases, we re-prompt the LLM, explicitly asking it to return the answer in valid JSON format.
• Truncated or broken JSON: Often caused by the model hitting its maximum token limit. When this happens, we identify the last unmatched opening brace and extract the content up to that point. We then attempt to complete the JSON structure by adding the necessary closing braces to recover a valid object.

If none of the recovery strategies succeed, we terminate the iteration and treat the response as invalid.

D Details on the Experiments

D.1 Used Models

Table 7 lists the models used in our evaluation. For each model, we provide its full name, along with the version or release date when available. All models are accessed via their respective APIs.

Model Name      Model Full Name                                          Platform
GPT 4.1         gpt-4.1-2025-04-14                                       OpenAI
GPT 4.1 mini    gpt-4.1-mini-2025-04-14                                  OpenAI
Mistral 7B      Mistral (7B) Instruct v0.3, released May 22, 2024        Together AI
QWEN 2.5-7B     Qwen2.5 7B Instruct Turbo *                              Together AI
LLama 3.1-8B    Meta Llama 3.1 8B Instruct Turbo *                       Together AI
LLama 3.3 70B   Meta Llama 3.3 70B Instruct Turbo *                      Together AI
Gemma 2-9B      Gemma-2 Instruct (9B)                                    Together AI
DeepSeek 70B    DeepSeek R1 Distill Llama 70B, released Jan 20, 2025     Together AI
QWEN 3-235B     Qwen3 235B A22B FP8 Throughput, released Apr 27, 2025    Together AI

Table 7: Overview of the used models and the respective platforms. * indicates that there is no release date.

D.2 Additional Results

Table 8 reports the detailed results for Exp-1 (Overall Performance). It extends the results reported in Table 2 by also adding the Precision and Recall computed at the cell level.
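The truncated-JSON recovery described above (Handling Output JSON Errors) can be sketched as follows. This is a best-effort illustration of the strategy, not the exact implementation: it trims back to the last complete element, appends the closing braces/brackets still open, and returns None when the response must be treated as invalid.

```python
import json

def repair_truncated_json(text):
    """Best-effort repair of JSON cut off at the token limit."""
    try:
        return json.loads(text)  # already valid, nothing to repair
    except json.JSONDecodeError:
        pass
    # Trim back to the last complete element boundary (last comma).
    trimmed = text.rstrip()
    last_comma = trimmed.rfind(",")
    if last_comma != -1:
        trimmed = trimmed[:last_comma]
    # Scan for braces/brackets still open, skipping string contents.
    stack, in_string, escape = [], False, False
    for ch in trimmed:
        if in_string:
            if escape:
                escape = False
            elif ch == "\\":
                escape = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]" and stack:
            stack.pop()
    if in_string:
        trimmed += '"'  # close a string the trim left open
    candidate = trimmed + "".join(reversed(stack))
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None  # unrecoverable: treat the response as invalid
```

For example, a response truncated mid-tuple such as `{"rows": [{"surname": "Buffon"}, {"surname": "Bonu` is repaired to a valid object containing the complete tuples only.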
In the following, we focus on Precision and Recall to further motivate the F1 scores computed in Exp-1 (Overall Performance). Table 8 reveals notable trends, particularly in Precision (P) and Recall (R), across different prompting strategies (NL, SQL, CoT) and LLMs of varying sizes. Overall, the Chain-of-Thought (CoT) strategy consistently outperforms the NL and SQL settings in both precision and recall, especially for larger models such as GPT 4.1 and QWEN 3-235B. These models demonstrate the strongest balance between precision and recall, with GPT 4.1 achieving the highest CoT precision (0.720) and a strong recall (0.691), reflecting its ability to generate accurate and comprehensive responses. Smaller models, such as Mistral 7B and LLama 3.1-8B, show more variability, with generally lower precision in the SQL and NL strategies and modest gains under CoT prompting. Interestingly, certain mid-sized models, such as Gemma 2-9B, LLama 3.3-70B, and DeepSeek 70B, achieve competitive performance, indicating that parameter count alone is not the sole determinant of quality. In general, the CoT strategy enhances both recall and precision, suggesting that structured reasoning prompts help LLMs capture more relevant data points while maintaining correctness.

Figures 8 and 9 report the performance of all tested LLMs based on the
number of attributes requested in the query, measured against the TS metric (Figure 8) and the F1 metric (Figure 9), respectively.

Table 8: Benchmark Complete Results. Precision (P), Recall (R), F1, and Tuple Similarity (TS) measured for all LLMs in our evaluation. AVG is the average of F1 and TS. LLMs are ordered by increasing size in terms of parameters.

          Mistral  QWEN    LLama   GPT 4.1  Gemma  LLama    DeepSeek  GPT    QWEN
          7B       2.5-7B  3.1-8B  mini     2-9B   3.3-70B  70B       4.1    3-235B
NL   P    0.459    0.546   0.473   0.593    0.596  0.608    0.649     0.682  0.635
     R    0.477    0.477   0.608   0.525    0.556  0.657    0.615     0.652  0.618
     F1   0.44     0.487   0.481   0.537    0.557  0.609    0.606     0.654  0.613
     TS   0.076    0.085   0.155   0.115    0.107  0.149    0.15      0.247  0.225
     AVG  0.258    0.286   0.318   0.326    0.332  0.379    0.378     0.45   0.419
SQL  P    0.354    0.537   0.327   0.457    0.62   0.626    0.664     0.401  0.642
     R    0.423    0.451   0.438   0.409    0.566  0.651    0.586     0.391  0.587
     F1   0.346    0.459   0.332   0.417    0.571  0.62     0.6       0.388  0.595
     TS   0.042    0.079   0.11    0.055    0.123  0.155    0.142     0.096  0.185
     AVG  0.194    0.269   0.221   0.236    0.347  0.387    0.371     0.242  0.39
CoT  P    0.544    0.6     0.593   0.693    0.645  0.686    0.696     0.72   0.692
     R    0.469    0.48    0.614   0.626    0.578  0.701    0.634     0.691  0.641
     F1   0.477    0.503   0.585   0.638    0.594  0.677    0.646     0.693  0.651
     TS   0.09     0.091   0.127   0.12     0.106  0.157    0.168     0.174  0.228
     AVG  0.284    0.297   0.356   0.379    0.35   0.417    0.407     0.433  0.439

[Figure 8: Impact of the number of attributes w.r.t. Tuple Similarity (TS) with the NL, SQL, and CoT strategies. (a) NL, (b) SQL, (c) CoT.]

The results indicate that Tuple Similarity (TS) generally decreases as the number of attributes increases beyond three, across most models and prompting strategies. Natural Language (NL) shows peak TS performance at low attribute counts (around two to three), with a sharp decline afterward. Models such as QWEN 3 and GPT-4.1 perform well on TS with fewer attributes but degrade quickly with increased complexity. Conversely, F1 scores tend to be more stable or even improve with more attributes, especially for models like Llama 3.3 and Gemma 2, suggesting these models handle complex outputs better in terms of generation quality. SQL prompting exhibits lower TS overall compared to NL, with early peaks and rapid decline; however, models like Llama 3.3 and DeepSeek demonstrate relative robustness at moderate attribute counts. Chain-of-Thought (CoT) prompting shows improved TS retention for GPT-4.1 and Llama 3.3 as attribute numbers rise, highlighting the advantage of step-by-step reasoning for managing complexity. Additionally, CoT achieves the highest F1 scores overall, particularly with reasoning-focused models maintaining strong performance regardless of attribute count. Across all experiments, Llama 3.3 consistently outperforms other models in both TS and F1 metrics, especially as task
complexity grows, while Gemma 2 and GPT-4.1 remain competitive in F1 but are more sensitive in TS. QWEN 3 exhibits inconsistent TS results despite some strength in F1. These observations underline the importance of both prompt strategy and model choice in handling increasing task complexity.

[Figure 9: Impact of the number of attributes w.r.t. F1 with the NL, SQL, and CoT strategies. (a) NL, (b) SQL, (c) CoT.]

E Error Analysis

To better understand the limitations of LLMs in factual table generation, we conducted an error analysis on GPT-4.1's outputs. We randomly selected 100 examples in which all three querying strategies (NL, SQL, and CoT) produced non-perfect scores (less than 1.0) based on the average of F1 and TS. Each model output was compared against the expected tupleset to identify common failure patterns. Figure 10 reports the error distribution; each example could belong to multiple error types.

[Figure 10: Breakdown of common error types in GPT-4.1 outputs on factual table generation: Canonicalization (43%), Misunderstanding (41%), Missing Tuples (30%), Extra Tuples (13%), Empty Results (4%), Aggregation Errors (1%). Each bar represents the percentage of the 100 analyzed examples in which the error type was observed.]

We found that 43% of errors come from Canonicalization issues, where semantically equivalent values were not matched due to limitations in our string similarity metric based on edit distance. Examples include mismatches such as "USA" vs. "United States of America" or "s" vs. "s-block". These string-based discrepancies led to false negatives but were rare in numerical values, where our proposed metric is more robust.
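To illustrate why an edit-distance metric produces such canonicalization false negatives, consider a minimal sketch; the 0.8 normalized-similarity threshold is an assumption for illustration, not the value used in our metric.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n]

def similar(a, b, threshold=0.8):
    """Normalized edit-distance similarity with an illustrative threshold."""
    if not a and not b:
        return True
    return 1 - edit_distance(a, b) / max(len(a), len(b)) >= threshold

# "USA" vs "United States of America" share almost no surface form,
# so edit distance rejects the match despite identical meaning.
```

Since the two strings share almost no characters in order, the normalized similarity is near zero and the pair is counted as a mismatch, which is exactly the false-negative behavior observed in the 43% of Canonicalization errors.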
A potential solution could involve incorporating LLMs as semantic judges to verify whether two strings refer to the same real-world entity. However, this must be done judiciously, as calling an LLM for each comparison can significantly increase evaluation time, especially for tuplesets that involve a high number of cells.

Another 41% of errors were categorized as Misunderstanding, where the LLM misinterpreted the intended meaning of a field. A recurring case was in the domain of chemical elements: when asked for the "origin" of elements, models often returned the etymology of the name rather than the scientific classification. For instance, the expected answer for hydrogen's origin was "primordial", but the model returned "Greek: hydro (water) and genes (forming)". Interestingly, such errors disappeared when the attribute was used as a filter in the query's WHERE clause, for example, WHERE origin="primordial". This kind of error highlights the need to be explicit about the values we expect the LLM to return; for example, including few-shot examples in the prompt could help mitigate it.

30% of errors were due to Missing Tuples, where the model returned only
https://arxiv.org/abs/2505.21409v1
a partial set of the expected rows. These omissions were closely tied (80% of the time) to queries with numerical conditions in the WHERE clause, particularly those involving inequality operators such as >, >=, <, or <=. In contrast, equality conditions rarely led to missing results. This trend aligns with prior experimental results (Table 4) showing that numerical conditions are harder for LLMs to process than categorical ones. Another 13% of errors were attributed to Extra Tuples, where the LLM returned rows not present in the expected result. These typically occurred in queries with mixed WHERE conditions (both categorical and numerical). A notable pattern emerged: when the numerical condition used the = operator, the LLM often hallucinated by forcing the condition's value into all returned rows, regardless of factual correctness. For example, when asked for tuples satisfying nationality=American AND birth year=1937, the model returned multiple tuples with the correct nationality but assigned 1937 as the birth year across the board, even for entities with different birth years. Lastly, 4% of the cases were Empty Results, which can be seen as extreme cases of missing tuples, and 1% were due to Aggregation Errors, where the model failed to retrieve correct base data, resulting in incorrect aggregate computations. Missing Tuples and Extra Tuples errors highlight a key challenge when querying an LLM: the handling of conditions in the WHERE clause. Our findings indicate that while LLMs can generally interpret categorical conditions correctly, numerical conditions often lead to hallucinations or incomplete results. A promising future research direction is to systematically investigate which types of conditions are reliably handled by LLMs during query execution and which are prone to errors or hallucinations.
This analysis would help define a boundary between conditions that can be safely included in the LLM prompt and those that should instead be applied as post-processing filters, i.e., all tuples are retrieved with the LLM and the filtering is done as a separate step. Understanding this distinction could lead to more robust hybrid querying strategies that combine the generative capabilities of LLMs with traditional filtering techniques.
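The hybrid strategy described above can be sketched as follows. This is an illustrative sketch, not our implementation: query_llm is a hypothetical stand-in that retrieves tuples matching only the categorical conditions (which LLMs handle reliably), while the error-prone numerical condition is applied as an exact post-processing filter:

```python
from typing import Callable

def query_llm(prompt: str) -> list[dict]:
    """Hypothetical stand-in for an LLM call: returns tuples satisfying
    only the categorical part of the query (hand-written mock data)."""
    return [
        {"name": "A", "nationality": "American", "birth_year": 1937},
        {"name": "B", "nationality": "American", "birth_year": 1941},
        {"name": "C", "nationality": "American", "birth_year": 1937},
    ]

def hybrid_query(prompt: str, post_filter: Callable[[dict], bool]) -> list[dict]:
    """Retrieve tuples with the LLM, then apply the numerical condition
    as a separate, deterministic filtering step."""
    return [row for row in query_llm(prompt) if post_filter(row)]

rows = hybrid_query(
    "List American people with their birth year",
    post_filter=lambda r: r["birth_year"] == 1937,
)
print([r["name"] for r in rows])  # ['A', 'C']
```

Because the numerical check runs outside the model, it cannot hallucinate: rows that fail the condition are dropped exactly, at the cost of retrieving a larger candidate set from the LLM.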
arXiv:2505.21410v1 [cs.AI] 27 May 2025

Multi-Resolution Skill Discovery for HRL Agents

Shashank Sharma, Department of Computer Science, University of Bath, ss3966@bath.ac.uk
Janina Hoffmann, Department of Psychology, University of Bath, jah253@bath.ac.uk
Vinay Namboodiri, Department of Computer Science, University of Bath, vpn22@bath.ac.uk

Abstract

Hierarchical reinforcement learning (HRL) relies on abstract skills to solve long-horizon tasks efficiently. While existing skill discovery methods learn these skills automatically, they are limited to a single skill per task. In contrast, humans learn and use both fine-grained and coarse motor skills simultaneously. Inspired by human motor control, we propose Multi-Resolution Skill Discovery (MRSD), an HRL framework that learns multiple skill encoders at different temporal resolutions in parallel. A high-level manager dynamically selects among these skills, enabling adaptive control strategies over time. We evaluate MRSD on tasks from the DeepMind Control Suite and show that it outperforms prior state-of-the-art skill discovery and HRL methods, achieving faster convergence and higher final performance. Our findings highlight the benefits of integrating multi-resolution skills in HRL, paving the way for more versatile and efficient agents.

1 Introduction

Recent advances in HRL agents using abstract actions or skills have shown promising results in tackling long-horizon tasks [1,14,20,10]. Skill discovery methods aim to learn these skills automatically, without an external reward. Although skills enable long-horizon planning, existing methods often operate using only the single skill best suited to a task [7,11,20]. In contrast, humans and animals naturally acquire fine and coarse motor skills that are adaptively combined: during running, for instance, moving the leg forward (coarse) while adjusting foot placement (fine) to navigate uneven terrain [24,6,17].
Even primates [4,5] and rodents [19] have been observed to combine gross movement skills with fine-grained tool use. Current hierarchical and options-based methods lack this flexibility and could benefit from multi-resolution control. Inspired by this, we first explore subgoal-predicting skills that partition the state space temporally, and then utilize them appropriately. We use a simulation with a 2D agent (Fig. 1) to illustrate the qualitative differences between temporally constrained subgoals: closer subgoals enable precise but error-susceptible, fine movements, while farther subgoals enable smoother but imprecise, coarse movements. A general-purpose agent would require different skills in different contexts. Thus, we propose Multi-Resolution Skill Discovery (MRSD), an HRL framework that trains separate encoders to learn skills at distinct temporal scales, simultaneously learns policies to use them, and adds a meta-controller that dynamically interleaves the learned skills. We evaluate our method on tasks from the DeepMind Control Suite [23], benchmarking against state-of-the-art (SOTA) hierarchical and skill discovery methods. Our experiments show that the proposed architecture yields significant performance improvements, outperforming previous HRL SOTA methods and matching SOTA non-HRL methods. We also conduct ablation studies to measure the contribution of each module and show that skill interleaving yields the best results. These results suggest that multi-resolution skills can serve as a powerful building block for scalable and efficient hierarchical reinforcement learning. We highlight some of the method's limitations in Sec. 7.

Preprint. Under review.

Figure 1: Simulation of a simple point agent (star) in a 2D grid that moves towards assigned goal positions (crosses). The goal updates every fixed number of steps K and alternates between (x+l_i, 1) and (x+l_i, −1), where x is the agent's current x-position and l_i ∈ {1, 2, 4, 8} is the skill length. Goal positions impact agent behavior based on their distance from the agent state. Closer goals lead to more controlled and precise movements, but can be susceptible to incorrect goals. Meanwhile, faraway goals cause less deviation, leading to smooth but imprecise movements.

Our key contributions are:
• An abstract skill-discovery framework that learns skills at multiple temporal resolutions in parallel (Sec. 3.2).
• A multi-skill policy that learns expert policies for each skill independently, and a dynamic skill interleaving mechanism that uses the experts appropriately (Sec. 3.3).
• Empirical validation and ablations showing improved convergence and final performance on DeepMind Control Suite tasks (Sec. 5.1, 5.2).

2 Background

2.1 Director

The Director [10] is a recent state-of-the-art HRL agent composed of a world-model, worker, manager, and a Goal Variational AutoEncoder (VAE) [12]. The world-model is implemented using the Recurrent State Space Module (RSSM) [8], which takes the environmental observations and constructs a state representation over time. The manager takes the state as input to yield a subgoal for the worker in the same state space (refreshed every K steps). The worker takes the current state and the subgoal state to output an environmental action. The authors note that if the manager outputs subgoals for the worker directly in the state space, the manager faces a high-dimensional continuous control problem. Therefore, the Goal VAE learns a reduced categorical latent representation for the states, and the manager takes the current state as input to output a latent variable, which is expanded into a state using the Goal VAE decoder.
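The Director's subgoal pathway can be sketched abstractly. This is an illustrative sketch with made-up dimensions and stand-in linear maps in place of the actual RSSM, manager, and Goal VAE networks:

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, LATENT_DIM = 16, 8  # assumed toy sizes, not the paper's

# Stand-in "networks": plain linear maps instead of learned modules.
W_manager = rng.normal(size=(STATE_DIM, LATENT_DIM))  # state -> latent logits
W_decoder = rng.normal(size=(LATENT_DIM, STATE_DIM))  # latent -> goal state

def manager_subgoal(state: np.ndarray) -> np.ndarray:
    """The manager picks a categorical latent from the current state;
    the Goal VAE decoder expands it back into a subgoal in state space."""
    logits = state @ W_manager
    z = np.eye(LATENT_DIM)[np.argmax(logits)]  # one-hot categorical latent
    return z @ W_decoder                       # decoded goal state

state = rng.normal(size=STATE_DIM)
goal = manager_subgoal(state)
print(goal.shape)  # (16,)
```

The point of the indirection is visible even in this toy: the manager acts in an 8-dimensional discrete latent space rather than the 16-dimensional continuous state space.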
The Goal VAE allows the manager to function in the reduced latent space by helping it recall states. We implement MRSD using the Director as the base architecture.

Motivation: It should be noted that the Goal VAE allows prediction of states irrespective of the current state, which means the manager can pick a goal s_g unreachable by the worker. By definition, the worker cannot reliably predict the right actions for unreachable goals. Therefore, given the current state, we propose constraining the search space to only nearby states, which can increase the search efficiency for appropriate goal states s_g. Further, in our experiments with the Director, we noticed that the worker rarely reaches the prescribed goal state s_g in an episode. The manager only learns to select the goals s_g so that they induce the right actions from the worker that maximize the expected return. This behavior is apparent in the manager's training objective, which only aims to increase the likelihood of actions that maximize the expected return. Rather than prescribing a goal state and waiting for the worker to reach it, we found that the manager assigns the worker goals as a moving target that the worker constantly chases. Thus, the final states s_{t+l} in CVAE training do not need to be strictly at temporal length K (the goal refresh rate). In fact, in our experiments with different temporal skill lengths, we found l > K to work much better than l = K for DeepMind Control Suite [23] tasks (Sec. 5.2). We simulated a simple 2D point agent following goals prescribed at different distances to illustrate the behavioral differences (Fig. 1). Also, the appropriate skill length can be highly task-dependent. Thus, we propose a Multi-Resolution Skill Discovery (MRSD) mechanism that learns skills or abstract actions at multiple temporal resolutions. Unlike some previous skill discovery approaches like DIAYN [7] and ReST [11] that automatically partition the skill space, we use an explicit temporal distance-based skill partition. Note that we use temporal to refer to the temporal distance of the assigned goal, not the duration for which it is executed.

Figure 2: Illustrations of the abstract state transition-based control for the manager. Dashed arrows indicate sample propagation from the predicted distribution. (a) Skill CVAE, where the Encoder encodes initial and final states (s_t, s_{t+l}) to a latent skill space and the Decoder reconstructs the final state using the initial state s_t and a sampled skill variable. (b) The manager predicts the latent skills and then uses the Decoder to generate goals for the worker.

3 Our Method

3.1 Skills as Abstract State Transitions

Given that the agent is at the state s_t, we want to constrain the goal predictions to states that can be achieved in l steps. To do this, we propose learning a Conditional VAE (CVAE) that learns to predict possible future states s_{t+l} conditioned on the current state s_t. The CVAE is learned online using the generated replay data. First, the replay trajectories are used to collect training examples as state pairs (s_t, s_{t+l}), where s_{t+l} happens l steps after s_t. Then, the CVAE parameterized by weights ϕ is trained to optimize the ELBO objective (Eq. 1).
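The pair-collection step can be sketched in a few lines. This is an illustrative sketch under assumed array shapes, not the paper's code: given a replay trajectory of states, training pairs (s_t, s_{t+l}) are gathered for a fixed temporal distance l:

```python
import numpy as np

def collect_skill_pairs(trajectory: np.ndarray, l: int) -> tuple[np.ndarray, np.ndarray]:
    """From a (T, state_dim) trajectory, return aligned arrays of
    initial states s_t and final states s_{t+l} for CVAE training."""
    if len(trajectory) <= l:
        return trajectory[:0], trajectory[:0]  # not enough steps for any pair
    return trajectory[:-l], trajectory[l:]

# Toy trajectory of 1-D states 0..9; with l = 3 the pairs are (0, 3), (1, 4), ...
traj = np.arange(10, dtype=float).reshape(-1, 1)
s_t, s_tl = collect_skill_pairs(traj, l=3)
print(s_t.shape, s_tl.shape)  # (7, 1) (7, 1)
```

Repeating this for each temporal distance l_i yields the per-resolution training sets used in the multi-resolution extension below.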
It should be noted that it is the worker that predicts the actions leading the agent to the goal state. The CVAE is merely a skill recall mechanism that learns the abstract actions possible under the current worker policy and then allows the manager to modulate the worker's behavior predictably. Fig. 2 illustrates the skill-based architecture as a CVAE that learns skills online using the collected data (Fig. 2a). Fig. 2b shows how the manager can use the Skill CVAE during inference to generate sub-goals for the worker. Next, we present our method by scaling the concept of skills to multiple resolutions.

L(ϕ) = ‖ s_{t+l} − Dec_ϕ(s_t, z) ‖² + β · KL[ Enc_ϕ(z | s_t, s_{t+l}) ‖ p(z) ],  where z ∼ Enc_ϕ(z | s_t, s_{t+l})   (1)

3.2 Multi-Resolution Skills

Ideally, we want the manager to be able to predict, as a goal state, any state that the worker can directly reach. Instead of learning a single CVAE, we can learn multiple CVAEs, each specific to a temporal resolution. However, this can heavily increase the size of the model, thereby increasing the required memory capacity and causing an unfair comparison. Thus, we keep all but the last layer of the encoder, and all but the first layer of
the decoder, shared. The sharing causes a minimal increase in model size but improves recall through the resolution-specific input and output layers. Fig. 3a illustrates the Multi-Resolution Skill CVAE architecture. For training, state pairs (s_t, s_{t+l_i}) at N different temporal resolutions l_i ∈ {l_0, l_1, ..., l_N} are extracted from the replay data. Each training example is processed using the shared and the resolution-specific Encoder-Decoder layers. Then the total loss is calculated as the sum of the ELBO objectives of each CVAE and is optimized in a single step (Eq. 2) (the use of the common layers is implied in the equations and is omitted to maintain simplicity). This results in the common layers being trained on all examples and the resolution-specific layers being trained only on the relevant examples. We use a mixture of 8 categoricals, each 8-dimensional, as the prior distribution p(z) for our CVAEs.

Figure 3: Architectures for learning and acting using Multi-Resolution Skills (l_i ∈ {l_0, l_1, ..., l_N}). Dashed arrows indicate sample propagation from the predicted distribution. Dashed boundaries indicate shared layers. (a) Separate CVAEs are learnt for each temporal resolution l_i. The Enc and Dec modules represent the common layers of the Encoders and the Decoders, respectively. Each Enc_i is the resolution-specific encoder output layer, and each Dec_i is the resolution-specific decoder input layer. (b) The manager's policy has N+1 output heads: N skill heads π_{M_i} that predict the resolution-specific skill latents, and a choice head π_{M_C} that predicts an N-dimensional one-hot distribution. Samples from the skill latents are used to predict sub-goals using the respective Decoders; the choice sample from π_{M_C} then selects one of the sub-goals as s_g by gating.
L(ϕ) = Σ_{i=0}^{N} ‖ s_{t+l_i} − Dec^i_ϕ(s_t, z) ‖² + β · KL[ Enc^i_ϕ(z | s_t, s_{t+l_i}) ‖ p(z) ],  where z ∼ Enc^i_ϕ(z | s_t, s_{t+l_i})   (2)

3.3 Multi-Skill Policy

The manager policy has N+1 output heads: N heads corresponding to each Skill CVAE that predict latent distributions over skills π_{M_i}(z | s_t), and an additional 'choice' head that predicts a one-hot N-dim distribution π_{M_C}(c | s_t) (Fig. 3b). The latent skill samples are used to predict subgoals using the corresponding decoders (Eq. 3), and the one-hot choice sample selects from the subgoals by gating (Eq. 4). Fig. 3b illustrates the process of worker subgoal prediction using the Multi-Resolution Skill CVAEs. It should be noted that only the final layer of the policy is split into multiple heads, which minimally increases the model size. The MRSD policy is learned such that each skill head independently becomes an expert at using the corresponding resolution's skills for all states s_t ∈ S, while the choice head simultaneously learns to pick the best skill head for all states s_t ∈ S.

s^{i,t}_g = Dec^i_ϕ(z_{t,i}, s_t),  where z_{t,i} ∼ π_{M_i}(z_{t,i} | s_t)   (3)

s^t_g = Σ_{i=0}^{N−1} c_{t,i} · s^{i,t}_g,  where c_t ∼ π_{M_C}(c_t | s_t)   (4)

3.4 Policy Optimization

Like the Director [10], the MRSD manager and worker policies are implemented as Soft Actor-Critics (SAC) and optimized using imagined trajectories. Imagination using the RSSM module helps cheaply generate on-policy data for training. The agent imagines a batch of T-step trajectories used to train both the manager and the worker. The returns are estimated using lambda returns,
followed by a policy update using policy gradients for the external and exploratory rewards. We briefly describe the standard training steps below, followed by the exploratory objective (Sec. 3.4.1) and the policy gradients for our approach (Sec. 3.4.2). See Sec. B for full training and architecture details.

Manager: The manager is trained to maximize the external task and the exploratory rewards (Sec. 3.4.1). Since the manager works on a coarser temporal scale, an abstract trajectory of length T/K is extracted by taking every K-th step and summing rewards within each non-overlapping subsequence of length K. Then, separate lambda returns are computed for each reward type, which are learned using individual critics. The manager's policy is updated using the REINFORCE objective (policy gradients) (Sec. 3.4.2), using the weighted sum of advantages from both objectives.

Worker: The worker is trained to maximize the goal rewards, calculated as the cosine-max similarity between the agent state s_t and the goal state s_g. The imagined trajectory is split into K-step sub-trajectories within which the goal state s_g remains consistent. The rewards and lambda returns are computed for the sub-trajectories to update the critic, followed by a policy update using the REINFORCE objective.

3.4.1 Exploratory Loss for Novel Skill Discovery

To learn novel skills or abstract state transitions, we provide the manager with an additional exploratory reward that encourages it to find novel state transitions. Since the Skill CVAE learns all possible abstract state transitions in the environment, we use the reconstruction error from the CVAE as the reward signal. This encourages the agent to repeat state transitions that are not yet well-learned by the CVAE. The exploratory reward R^Expl_t(τ) for the imagined trajectory τ of length T is computed as the reconstruction error of the state s_t conditioned on the starting state s_0 (Eq. 5).
Since we use multiple CVAEs, the minimum of the reconstruction errors across all CVAEs is used as the reward. Since the skill discovery objective is a reward, it can be used alongside the external task without needing a prior skill discovery phase.

R^Expl_t = min_i ‖ s_t − Dec^i_ϕ(s_0, z_{t,i}) ‖²,  where z_{t,i} ∼ Enc^i_ϕ(z | s_0, s_t)   (5)

3.4.2 Policy Gradients

We first decompose the action prediction process to derive the policy gradient used to train the manager and worker policies. Let an MRSD agent be in state s_t at step t. Every K-th step, the manager refreshes the worker's goal. For clarity, let the abstract step be indexed by k; then at each abstract step (t = kK):

1. Sample skill latents from the skill heads: z_{k,0}, z_{k,1}, ..., z_{k,N−1} ∼ Π_{i=0}^{N−1} π_{M_i}(z_{k,i} | s_{kK}).
2. Sample a choice variable: c_k ∼ π_{M_C}(c_k | s_{kK}).
3. Compute the selected subgoal: s^k_g = Σ_{i=0}^{N−1} c_{k,i} · Dec^i_ϕ(s_{kK}, z_{k,i}).
4. Predict the environmental actions using the worker: π_W(a_t | s_t, s^k_g).

Thus, the probability of a trajectory that starts at s_0 can be written as:

p(τ) = p(s_0) · Π_{k=0}^{⌊T/K⌋−1} [ π_{M_C}(c_k | s_{kK}) · Π_{i=0}^{N−1} π_{M_i}(z_{k,i} | s_{kK})^{c_{k,i}} ] (Manager) · Π_{t=0}^{T−1} π_W(a_t | s_t, s^{⌊t/K⌋}_g) (Worker) · p_T(s_{t+1} | a_t, s_t) (State transition)   (6)

The components of the equation can be read as follows: the manager predicts the skills (z_{k,0}, z_{k,1}, ..., z_{k,N−1}) and the choice c_k for every abstract step k; the worker predicts
the action a_t at each step t using the subgoal s^{⌊t/K⌋}_g for the duration; and p_T is the environmental state transition. Here, the exponent c_{k,i} collapses the skill probabilities π_{M_i}(z_{k,i} | s_{kK}) of the unselected skills to 1, as they do not affect the trajectory.

We follow the policy gradient derivation from [21]. The aim is to compute ∇_θ J, where J = E_τ[R(τ)] is the expected reward and θ are the policy parameters. Using the standard log-derivative trick [21], the objective can be written as maximizing the trajectory log-probability weighted by the expected reward:

∇_θ J = E_τ[ R(τ) · ∇_θ log p(τ) ]

The gradient of the trajectory log-probability w.r.t. the manager parameters M is:

∇_M log p(τ) = Σ_{k=0}^{⌊T/K⌋−1} [ ∇_M log π_{M_C}(c_k | s_{kK}) + Σ_{i=0}^{N−1} c_{k,i} ∇_M log π_{M_i}(z_{k,i} | s_{kK}) ]

Therefore, the policy-gradient objective can be written as:

∇_M J = E_τ[ R(τ) · Σ_{k=0}^{⌊T/K⌋−1} [ ∇_M log π_{M_C}(c_k | s_{kK}) + Σ_{i=0}^{N−1} c_{k,i} ∇_M log π_{M_i}(z_{k,i} | s_{kK}) ] ]

Given these policy gradients, we construct the losses for each head as the sum of the policy gradient objective and an entropy maximization objective (Eqs. 8, 9), and sum them for the total loss (Eq. 10).

G^λ_k = R_k + γ( (1−λ) v_M(s_{kK}) + λ G^λ_{k+1} )   (7)

L(π_{M_C}) = −E_τ Σ_{k=0}^{⌊T/K⌋−1} log π_{M_C}(c_k | s_{kK}) (G^λ_k − v_M(s_{kK})) + η H[π_{M_C}(c_k | s_{kK})]   (8)

L(π_{M_i}) = −E_τ Σ_{k=0}^{⌊T/K⌋−1} c_{k,i} · log π_{M_i}(z_{k,i} | s_{kK}) (G^λ_k − v_M(s_{kK})) + η H[π_{M_i}(z_{k,i} | s_{kK})]   (9)

L(π_M) = L(π_{M_C}) + Σ_{i=0}^{N−1} L(π_{M_i})   (10)

L(v_M) = E_τ Σ_{k=0}^{⌊T/K⌋−1} ( v_M(s_{kK}) − G^λ_k )²   (11)

Here G^λ_k is the lambda return estimated using abstract trajectories (Eq. 7), and v_M is the critic used to reduce variance and to generalize beyond the rollout horizons (Eq. 11). The policy maximizes the advantage G^λ_k − v_M(s_{kK}) instead of directly maximizing estimated rewards. Weighted entropic losses H[·] encourage adequate exploration prior to convergence.

4 Addressing a Critical Failure Mode

Previous skill discovery methods have mentioned difficulties with learning skill primitives while acting using the same skills [10,7].
This is because after the model learns a few reliable skills, it tends to repeat them, thus getting stuck with suboptimal skills. The exploratory loss is sufficient to encourage the model to explore novel skills and achieve the results presented in this paper. However, the problem can be artificially induced by prematurely converging the CVAE before the policy converges; one way is to increase the training data for the CVAE disproportionately. This causes the policy to collapse to a degenerate solution, as the CVAE predicts only the initially learned goals. For completeness of the solution, we make an additional modification that prevents this problem. In addition to the Skill CVAEs, we add another VAE that learns states unconditionally, like the Director. The unconditional VAE (Enc^∞_ψ, Dec^∞_ψ) imitates learning ∞-length skills, as the predicted state s_g is completely independent of any previous state s_t. This helps the agent escape the collapse by allowing the manager to select goal states independent of the current state, and also removes any need for balancing policy and CVAE learning. Our results show that the agent initially uses the unconditional VAE but soon switches to the Skill CVAEs (Fig. 7).

Figure 4: Episode scores from MRSD (ours) and the Director (3 seeds per experiment). The plot shows the total rewards (mean and standard deviation) received in an episode against the environmental step. Both methods use the same common hyperparameters.

5 Results

We use skill lengths L = [64, 32, 16, 8, ∞] for all our experiments. Since we use multiple policy heads, the
policy learning signal reduces by a factor of N; thus, we train every 8-th step rather than every 16-th. The agent is tested in locomotion-based environments and trained to optimize for external and exploratory rewards (advantages weighted as [1.0, 0.1], respectively).

5.1 Standard Benchmarks

DeepMind Control Suite: We compare our method with SOTA methods on several tasks in the DeepMind Control Suite (DMC) [23]. Each episode runs for 1000 steps before terminating, with dense rewards at each step. Fig. 4 shows the performance of our method compared to the Director (with the same common hyperparameters). Due to resource constraints, we are not able to run SOTA non-hierarchical approaches, but we provide the results for DreamerV2 [9] in the appendix (Fig. 9). The results show that our method outperforms the Director on all tasks and matches DreamerV2's performance on most. We also plot the evolution of the choice distribution during training (Fig. 5). A common trend was that the manager prefers the unconditional VAE early on and later shifts to the skill CVAEs (Fig. 5). This trend is similar to human behavior when learning new skills, e.g., body movements for a new sport: initially, one might make crooked motions through a few identified advantageous body configurations, but repetition reduces conscious effort [18].

Egocentric Ant: We also tested our method on the Egocentric Ant Maze task, where the agent receives sparse rewards for reaching a goal location. Each episode lasts 3000 steps before terminating with a 0 reward. Therefore, training is mostly done using exploratory rewards. The agent takes proprioceptive observations and an egocentric camera image as inputs. While DreamerV2 fails at the task, the Director and our agent solve it, with our agent receiving higher scores. This task takes extremely long to complete, so we take results from [10] for comparison (Fig. 6).
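The two-objective weighting used throughout training can be sketched simply. This is an assumption-level illustration, not the actual training code: separate critics yield separate advantages, which are combined as a weighted sum for the manager's policy-gradient update, with the weights reported above:

```python
import numpy as np

W_EXT, W_EXPL = 1.0, 0.1  # advantage weights for external and exploratory rewards

def combined_advantage(adv_external: np.ndarray, adv_exploratory: np.ndarray) -> np.ndarray:
    """Weighted sum of per-objective advantages: elementwise
    1.0 * a_ext + 0.1 * a_expl, as used in the manager update."""
    return W_EXT * adv_external + W_EXPL * adv_exploratory

adv = combined_advantage(np.array([1.0, -0.5]), np.array([2.0, 1.0]))
```

Keeping the exploratory weight small lets novelty-seeking break ties without overwhelming the task reward once the external signal becomes informative.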
Figure 5: Stream graphs showing the evolution of the choice distribution during training, averaged across 3 seeds. A trend can be noticed that the manager starts with the ∞ skills but slowly switches to the temporally constrained skills.

Figure 6: Episode rewards from the Egocentric Ant Maze task against the environmental step during training (3 seeds). (Left) MRSD (Ours). (Right) Results taken from [10], comparing: the Director, the Director with the worker receiving the external task reward, and DreamerV2.

5.2 Ablations

How well do the individual skills perform, and is the dynamic skill interleaving useful? Our method trains individual expert policies for each skill CVAE, for all states s_t ∈ S, and the choice head for selecting the best skill for all states s_t ∈ S. In this context, we compare the following settings: the default choice mechanism, random choice, and using each skill module separately. The skill selection mechanism is modified in an already trained MRSD agent to enforce the above settings. Comparing skills from a trained MRSD model enforces the assumption that the skills are trained using the same data, possibly of a higher quality than each skill individually would have collected. Fig. 7 shows the results for some DMC tasks, where each skill's score is averaged across
100 episodes. It can be seen that interleaving the skills using the proposed choice mechanism consistently yields the best results. It should also be noted that no individual skill performs well for all tasks; thus, using the choice policy π_{M_C} can help automate skill selection.

Figure 7: Final performance comparison between the settings: default choice mechanism, random selection, and the skills individually [64, 32, 16, 8, ∞]. The results are the means and standard deviations of the episodic rewards across 100 evaluation runs.

Figure 8: Performance comparison of agents fine-tuned for tasks after an exploration phase (3 seeds per experiment). The graphs show total episodic rewards (mean and standard deviation) against the global steps. The plots compare: MRSD, MRSD using exploratory rewards from the unconditional VAE, ReST, and DIAYN. Our agent is trained every 8-th step using image inputs, while DIAYN/ReST train every step using the internal environmental proprioceptive state.

Can the agent learn usable skills using only the exploratory objective? We test whether our method can learn skills independently of the task and then learn a policy that uses the skills to perform a task. For this, we first train an agent for 3M environmental steps using only the exploratory objective. The agent learns interesting behaviors like backflips, headstands, and somersaults (forwards and backwards) (Appendix C). Next, keeping all modules static, we fine-tune the manager policy and a fresh critic on the environmental task rewards for 1M environmental steps. We compare our method to two previous skill discovery methods: ReST and DIAYN. Both methods maximize an information-theoretic objective to learn a set of distinct skills; then, the skill that gathers maximum rewards for the external task is fine-tuned further. The original results for these methods are on the Gym embodiments of the same agents, and we use their respective parameters, including reward scaling.
We also compare our exploratory objective against the Director's, which is computed as the reconstruction error using the unconditional VAE (Sec. 4). Fig. 8 shows the comparisons. It can be seen that our method performs reasonably well on all tasks, while the other methods struggle to do so. Also, while these methods fine-tune a single skill, we can fine-tune using all skills because of the interleaving mechanism.

6 Related Work

Hierarchical reinforcement learning (HRL) refers to a set of techniques that temporally abstract actions [3,25,2,22,16,15]. While standard RL techniques like Q-Learning choose from primitive actions, HRL agents can choose temporally extended abstract actions that differ in behavioral policies and subgoals. The manager can learn preferences over the discovered action abstractions using a model-free algorithm like the Actor-Critic [10] or use a planning mechanism for goal-conditioned reinforcement learning [14]. The temporally abstract actions help the agent plan symbolically in terms of subroutines to achieve long-term goals, like reaching a specific point while avoiding obstacles. The agent simultaneously learns the abstract actions and acts by selecting these abstract actions to maximize expected rewards. The abstract actions or skills are usually represented using a low-dimensional latent variable and learned by maximizing the mutual information between the trajectory (sequence of states) and a skill variable
[1,13,14,20,11,7]. The mutual information objective maximizes the predictability of trajectories given skills, and of skills given trajectories. The skills allow the agent policy to function in a latent space. OPAL [1] encodes the trajectory using a bidirectional GRU and is optimized as a Variational Autoencoder. Causal InfoGAN [13] uses an InfoGAN to learn causal representations by maximizing the mutual information between the skill and the initial and final state pairs. DADS [20] and DIAYN [7] apply the mutual information objective to learn a diverse forward-inverse kinematics model that is later used for planning. ReST [11] also uses the same objective but trains skills one after the other, recurrently.

7 Discussion

The key findings that emerge from our analysis are:

• Skill Interleaving Matters: Ablation studies show that the skill-interleaving agent performs best, and no single skill works best across all tasks (Fig. 7).
• Reward-Agnostic Learning: The agent successfully discovers usable skills without external rewards through latent space exploration (Figs. 6, 8).

We see a small limitation in that our exploration rewards do not perform as well as the others on the cheetah_run task, indicating that no single reward scheme is sufficient for all tasks. The architecture is highly flexible, allowing learned and deterministic skills to be mixed, leading to hybrid structures. The skills can also be used as abstract actions for goal-directed motion planning. The multi-head policy gradient formulation can also be easily extended to other algorithms.

References

[1] Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, and Ofir Nachum. OPAL: Offline primitive discovery for accelerating offline reinforcement learning. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=V69LGwJ0lIN.
[2] Andrew G Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning.
Discrete Event Dynamic Systems, 13(1):41–77, 2003.
[3] Matthew M Botvinick, Yael Niv, and Andrew G Barto. Hierarchically organized behavior and its neural foundations: A reinforcement learning perspective. Cognition, 113(3):262–280, 2009.
[4] Aisha C Bründl, Patrick J Tkaczynski, Grégoire Nohon Kohou, Christophe Boesch, Roman M Wittig, and Catherine Crockford. Systematic mapping of developmental milestones in wild chimpanzees. Developmental Science, 24(1):e12988, 2021.
[5] Luz Carvajal and Caroline Schuppli. Learning and skill development in wild primates: toward a better understanding of cognitive evolution. Current Opinion in Behavioral Sciences, 46:101155, 2022.
[6] Laura C Dapp, Venera Gashaj, and Claudia M Roebers. Physical activity and motor skills in children: A differentiated approach. Psychology of Sport and Exercise, 54:101916, 2021.
[7] Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=SJx63jRqFm.
[8] Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, pages 2555–2565. PMLR, 2019.
[9] Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering Atari with discrete world models. arXiv preprint arXiv:2010.02193, 2020.
[10] Danijar Hafner, Kuang-Huei Lee, Ian Fischer, and Pieter Abbeel. Deep hierarchical planning from pixels. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and
https://arxiv.org/abs/2505.21410v1
A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 26091–26104. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/a766f56d2da42cae20b5652970ec04ef-Paper-Conference.pdf.
[11] Zheyuan Jiang, Jingyue Gao, and Jianyu Chen. Unsupervised skill discovery via recurrent skill training. Advances in Neural Information Processing Systems, 35:39034–39046, 2022.
[12] Diederik P Kingma, Max Welling, et al. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 12(4):307–392, 2019.
[13] Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart J Russell, and Pieter Abbeel. Learning plannable representations with causal InfoGAN. Advances in Neural Information Processing Systems, 31, 2018.
[14] Jinning Li, Chen Tang, Masayoshi Tomizuka, and Wei Zhan. Hierarchical planning through goal-conditioned offline reinforcement learning. IEEE Robotics and Automation Letters, 7(4):10216–10223, 2022. doi: 10.1109/LRA.2022.3190100.
[15] Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. Data-efficient hierarchical reinforcement learning. Advances in Neural Information Processing Systems, 31, 2018.
[16] Shubham Pateria, Budhitama Subagdja, Ah-hwee Tan, and Chai Quek. Hierarchical reinforcement learning: A comprehensive survey. ACM Comput. Surv., 54(5), June 2021. ISSN 0360-0300. doi: 10.1145/3453160. URL https://doi.org/10.1145/3453160.
[17] Jan P Piek, Lisa Dawson, Leigh M Smith, and Natalie Gasson. The role of early fine and gross motor development on later motor and cognitive ability. Human Movement Science, 27(5):668–681, 2008.
[18] Jerome N Sanes. Neocortical mechanisms in motor learning. Current Opinion in Neurobiology, 13(2):225–231, 2003.
[19] Lisa-Maria Schönfeld, Dearbhaile Dooley, Ali Jahanshahi, Yasin Temel, and Sven Hendrix. Evaluating rodent motor functions: Which tests to choose? Neuroscience & Biobehavioral Reviews, 83:298–312, 2017.
[20] Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, and Karol Hausman. Dynamics-aware unsupervised discovery of skills. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJgLZR4KvH.
[21] Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction, second edition. Adaptive Computation and Machine Learning. The MIT Press, Cambridge, MA and London, 2018.
[22] Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181–211, 1999.
[23] Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. DeepMind Control Suite. arXiv preprint arXiv:1801.00690, 2018.
[24] Sanne LC Veldman, Rute Santos, Rachel A Jones, Eduarda Sousa-Sá, and Anthony D Okely. Associations between gross motor skills and cognitive development in toddlers. Early Human Development, 132:39–44, 2019.
[25] Marco A Wiering and Martijn Van Otterlo. Reinforcement learning. Adaptation, Learning, and Optimization, 12(3):729, 2012.

A Results from the Director

Figure 9: Episode score comparison with the state-of-the-art non-hierarchical method DreamerV2. (a) Results from the Director paper comparing Director, Director with worker task rewards, and Dreamer.

The results from the Director [10] show that our hierarchical agent can match DreamerV2's performance on almost all tasks.

B Architecture & Training Details

B.1 Worker

The worker is trained using K-step imagined rollouts ($\kappa \sim \pi^W$). Given the imagined trajectory $\kappa$, the rewards for the worker $R^W_t$ are computed as the cosine_max similarity measure between
the trajectory states $s_t$ and the prescribed worker goal $s_{wg}$. First, discounted returns $G^\lambda_t$ are computed as n-step lambda returns (Eq. 12). Then the actor policy is trained using the REINFORCE objective (Eq. 13) and the critic is trained to predict the discounted returns (Eq. 14). The entropy for the worker and the manager is weighted to maintain a target entropy.

$G^\lambda_t = R^W_{t+1} + \gamma \big( (1-\lambda)\, v(s_{t+1}) + \lambda G^\lambda_{t+1} \big)$   (12)

$\mathcal{L}(\pi^W) = -\mathbb{E}_{\kappa \sim \pi^W} \Big[ \sum_{t=0}^{H-1} \big( G^\lambda_t - v^W(s_t) \big) \ln \pi^W(z \mid s_t) + \eta\, \mathrm{H}\big[ \pi^W(z \mid s_t) \big] \Big]$   (13)

$\mathcal{L}(v^W) = \mathbb{E}_{\kappa \sim \pi^W} \Big[ \sum_{t=0}^{H-1} \big( v^W(s_t) - G^\lambda_t \big)^2 \Big]$   (14)

B.2 Implementation Details

We implement two functions, policy (Alg. 1) and train (Alg. 2), using the hyperparameters shown in Table 1. The functions are implemented in Python/TensorFlow with XLA JIT compilation. The experiments take on average 2 days to run 5M steps on an NVIDIA RTX 5000.

Table 1: Agent hyperparameters.

    Name                          Symbol   Value
    Train batch size              B        16
    Replay data length            -        64
    Worker abstraction length     K        8
    Explorer imagination horizon  T        16
    Return lambda                 λ        0.95
    Return discount               γ        0.99
    Skill resolutions             L        {64, 32, 16, 8, ∞}
    Target entropy                η        0.5
    KL loss weight                β        1.0
    RSSM deter size               -        1024
    RSSM stoch size               -        32×32
    Optimizer                     -        Adam
    Learning rate (all)           -        1e-4
    Adam epsilon                  -        1e-6
    Weight decay (all)            -        1e-2
    Activations                   -        LayerNorm + ELU
    MLP sizes                     -        4×512
    Train every                   -        8
    Parallel envs                 -        4

Algorithm 1: Multi-Resolution Skill Policy (π_MSRD)
Input: Observation o_t, agent state (t, s_{t-1}, a_{t-1}, s_g)
Output: Action a_t, new agent state (t+1, s_t, a_t, s_g)
    s_t ← wm(o_t, s_{t-1}, a_{t-1})                          // World-model state update
    if t mod K = 0 then                                       // Manager updates goal every K steps
        (z^0, z^1, ..., z^{N-1}, c) ∼ π^M(s_t)                // Sample skill latents z and choice c
        {s^i_g}_{i=0..N-1} ← {Dec^i_φ(s_t, z^i)}_{i=0..N-1}   // Generate candidate goals
        s_g ← Σ_{i=0..N-1} c_i · s^i_g                        // Select goal using choice vector c
    else
        s_g ← s_g                                             // Persist previous goal
    a_t ← Worker π(s_t, s_g)                                  // Generate action for current goal
    Return a_t, (t+1, s_t, a_t, s_g)

Algorithm 2: Multi-Resolution Skill Training
Input: Collected trajectories D = {τ_1, ..., τ_B}
Output: Updated world model wm, skill modules (Enc_φ, Dec_φ), manager π^M, worker π^W
    // World model training
    wm.train(D)                                     // See [8]
    // Multi-resolution skill learning
    L_skills ← [ ]
    for l_i ∈ L do
        {(s_t, s_{t+l_i})} ← ExtractStatePairs(D, l_i)
        L_i ← skill_loss(s_t, s_{t+l_i})            // CVAE loss (Eq. 1)
        L_skills.append(L_i)
    update_skills(sum(L_skills))
    // Policy optimization via imagination
    S_init ← {s_0 | s_0 ∈ τ, τ ∈ D}                 // Initial states
    τ̂ ← wm.imagine(π_MSRD, S_init, T)               // Roll out imagined trajectories (Alg. 1)
    // Reward computation
    τ̂.r_extr ← r_env(τ̂)                            // Environment reward
    τ̂.r_expl ← expl_rew(τ̂)                          // Exploration reward (Eq. 5)
    τ̂.r_goal ← cosine_max(τ̂.s_t, τ̂.s_g^{⌊t/K⌋})    // Goal achievement reward
    // Hierarchical policy update
    T_W ← split(τ̂)                                  // Worker-level transitions
    T_M ← abstract(τ̂)                               // Manager-level abstractions
    L(π^M), L(v^M) = manager_loss(T_M)               // Eqs. 13, 11
    update_manager(L(π^M), L(v^M))
    L(π^W), L(v^W) = worker_loss(T_W)                // Eqs. 13, 14
    update_worker(L(π^W), L(v^W))

C Behaviors Learned via Exploration

We noticed some interesting behaviors that the MSRD agent regularly exhibited, such as front flips, back flips, and jumps, while training only with the exploratory loss. The intrinsic exploratory loss encourages the agent to perform novel state transitions (Sec. 3.4.1). Fig. 10 shows some of the learned movements.

D Broader Impacts

D.1 Positive Impacts

Our
method's sample efficiency (training every 8 steps) could reduce compute costs for real-world robot training, lowering environmental footprints. The imagination-based policy optimization mitigates hazards that can occur during learning. The skill-interleaving mechanism allows for transparent agents with interpretable subgoals. The learned skills can be interleaved with rigorously tested safe skills, and the selection can be appropriately constrained to mitigate failures.

D.2 Negative Impacts and Mitigations

• Inaccurate Training: Imagination can cause incorrect learning. Mitigation: rigorous testing using manual verification of world-model reconstructions against ground truths.
• Malicious Use: Hierarchical control could enable more autonomous adversarial agents. Mitigation: advocate for gated release of policy checkpoints.

Figure 10: Samples of movements learned and regularly performed by the agent optimized only for the exploratory loss. (a) Hopper learns to use a front flip to stand, and back flips. (b) Cheetah learns to leap forward and perform perfect back flips. (c) Quadruped learns side rolls and walking on two legs. (d) Walker tries to headstand repeatedly and fast-forward tumble using head and legs.

D.3 Limitations of Scope

Our experiments focus on simulated tasks without human interaction. Real-world impacts require further study of reward alignment and failure modes.

E Sample Goals using Skill CVAEs

We generate sample goals using each of the Skill CVAEs individually. The goals are generated using a uniform prior p(z) for the skills, and initial states s_t sampled from the replay database. We use skills with temporal resolutions [64, 32, 16, 8, ∞], but omit the ∞-length skills, as they correspond to simply learning all states independently and are not our contribution.

E.1 Default Objective

The agent is trained using the default objective (weighted external and exploratory advantages).
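The goal-sampling procedure described at the start of Appendix E (a uniform prior p(z) over the skill latent, decoders conditioned on states from the replay database) could be sketched as follows. The decoder interface, latent range, and array shapes are assumptions for illustration, not the paper's actual code:

```python
import numpy as np

def sample_goals(decoders, replay_states, n_samples=8, z_dim=8, rng=None):
    """Sample candidate goals from each skill decoder Dec_phi(s_t, z).

    decoders      : one callable per temporal resolution, mapping a
                    (state, latent) pair to a goal state (assumed interface).
    replay_states : array of states drawn from the replay database.
    Returns a dict {decoder index -> array of sampled goal states}.
    """
    rng = rng or np.random.default_rng(0)
    goals = {}
    for i, dec in enumerate(decoders):
        # Uniform prior over the skill latent, as in Appendix E.
        z = rng.uniform(-1.0, 1.0, size=(n_samples, z_dim))
        # Condition each sample on a randomly drawn replay state.
        s = replay_states[rng.integers(0, len(replay_states), n_samples)]
        goals[i] = np.stack([dec(s_j, z_j) for s_j, z_j in zip(s, z)])
    return goals
```

Each figure panel below then corresponds to decoding a batch of such samples for one temporal resolution.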
Since we use a strong bias towards the external reward (weights [1.0, 0.1]), the learned skills are biased towards goal states appropriate for the objective. We sample goals for the tasks walker_run (Fig. 11), quadruped_run (Fig. 12), cheetah_run (Fig. 13), and hopper_hop (Fig. 14).

Figure 11: Sample goals from the walker_run task, at skill lengths 64, 32, 16, and 8.
Figure 12: Sample goals from the quadruped_run task, at skill lengths 64, 32, 16, and 8.
Figure 13: Sample goals from the cheetah_run task, at skill lengths 64, 32, 16, and 8.
Figure 14: Sample goals from the hopper_hop task, at skill lengths 64, 32, 16, and 8.

E.2 Exploration Only

The agent is optimized only for the exploration objective, which aims to maximize coverage of the state-transition space. We sample goals per Skill CVAE for the embodiments walker (Fig. 15), quadruped (Fig. 16), cheetah (Fig. 17), and hopper (Fig. 18) in the DMC suite [23].

Figure 15: Sample goals from exploration as a walker, at skill lengths 64, 32, 16, and 8.
Figure 16: Sample goals from exploration as a quadruped, at skill lengths 64, 32, 16, and 8.
Figure 17: Sample goals from exploration as a cheetah, at skill lengths 64, 32, 16, and 8.
Figure 18: Sample goals from exploration as a hopper, at skill lengths 64, 32, 16, and 8.

NeurIPS Paper Checklist

1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]
Justification: The abstract and introduction explicitly state
three key claims that are rigorously validated:
• "Learns skills at multiple temporal resolutions" (Sec. 3.2): demonstrated through distinct skill visualizations (Fig. 1), per-skill ablation results (Fig. 7), and samples of learned agents (Sec. E).
• "Dynamic skill interleaving mechanism" (Sec. 3.3): validated via comparative analysis of fixed vs. adaptive schedules (Fig. 7) and manager choice distributions (Fig. 5).
• "Matches SOTA non-HRL methods" (Sec. 5.1): quantified through per-task comparisons with DreamerV2 and Director (Fig. 9), using identical evaluation protocols.
Limitations are noted in Sec. 7 (reduced per-skill-head learning signal, exploration reward). While we reuse Director's world model architecture [10], all policy components and skill learning mechanisms are novel contributions.

2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [Yes]
Justification: We discuss three limitations of our approach in Section 7.
• The method cannot be scaled to an arbitrary number of skill heads, as doing so reduces the per-skill learning signal.
• While our VAE-based exploration succeeds in sparse-reward navigation tasks (Fig. 6), it underperforms in dynamic locomotion for cheetah_run, suggesting that the loss is not sufficient for all tasks.
• The quality of imagination places an upper limit on performance.

3. Theory assumptions and proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [Yes]
Justification: While Section 3.4.2 derives policy gradients from trajectory probabilities, three aspects require improvement:
• Missing Assumptions: The derivation assumes (1) fully observable MDPs can be generated by the RSSM using partial observations and (2) differentiable policy parameterization, without explicitly stating these constraints.
• Proof Completeness: The gradient derivation cites but does not formally prove the policy gradient theorem [21].
• Formal Guarantees: While we present empirical convergence (Fig. 4), no theoretical analysis of convergence is provided.

4. Experimental result reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: We include the remaining training details in Section B, which covers all the important hyperparameters, the architecture, the optimization objectives, and the function algorithms needed to reproduce the results. Although our implementation is based on Director [10], all novel components are self-contained in the algorithms provided (Algs. 1, 2). The code release will include a Dockerfile for environment replication and pre-trained checkpoints. We follow the same design patterns as Director, so readers familiar with it should be able to implement our approach in a few hours.

5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [Yes]
Justification: While we commit to releasing code post-acceptance, the current submission lacks:
• Anonymized code/scripts in supplemental materials
• Environment setup instructions (Dockerfile)
• Exact reproduction commands for key experiments
The post-release will include:
• Complete code with environment setup.
•
Pre-trained checkpoints for all tasks.
• Jupyter notebooks for all results and visualizations.

6. Experimental setting/details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: We provide all the details specific to our approach in the main paper. The remaining details are in the supplementary material (Sec. B), which includes the hyperparameters and the function algorithms implemented.

7. Experiment statistical significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [Yes]
Justification: Statistical descriptions have been added to the paper. For all plots, the results are means and standard deviations (std) across 3 seeds. The bar graphs show mean and std over 100 evaluation runs. However, we have a small sample size of 3 seeds due to resource constraints. Error bars represent the population standard deviation (not the SEM), calculated via numpy.ndarray.std(). We plan 5-seed runs (including DreamerV2) for the camera-ready version.

8. Experiments compute resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [Yes]
Justification: We include the exact implementation and resource details in Section B.

9. Code of ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ?
Answer: [Yes]
Justification: We do not include any information that can violate anonymity. We cite all references to previous work and sources.
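For clarity, the error-bar computation described in item 7 (population standard deviation via numpy.ndarray.std(), not the standard error of the mean) amounts to the following, with hypothetical per-seed scores:

```python
import numpy as np

# Hypothetical per-seed episode scores at one evaluation point (3 seeds).
scores = np.array([710.0, 695.0, 720.0])

mean = scores.mean()
# Population standard deviation (ddof=0), numpy.ndarray.std()'s default --
# this is what the error bars show.
pop_std = scores.std(ddof=0)
# For comparison, the standard error of the mean (SEM) would instead be:
sem = scores.std(ddof=1) / np.sqrt(len(scores))
```

With 3 seeds, the SEM is noticeably smaller than the population standard deviation, which is why the distinction matters when reading the plots.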
10. Broader impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [Yes]
Justification: We discuss the possible impact of our work in Section D. We emphasize that these impacts are speculative, given our simulation-only experiments.

11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [NA]
Justification: We do not introduce any data or models that can be deployed directly.

12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [Yes]
Justification: All reused assets are properly credited, with licenses and terms explicitly stated:
• Director codebase: built on the open-source implementation from [10], with modifications documented in the paper.
• DeepMind Control Suite: used under the Apache License (v2.0) for all environments [23], downloaded via https://github.com/deepmind/dm_control .
No proprietary datasets or models were used.

13. New assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [NA]
Justification: This paper does not introduce new datasets, models, or codebases. Our implementation modifies existing assets (the Director codebase [10], the DM Control
Suite [23]) without creating novel standalone assets. The proposed method and training procedures are fully described in Algorithms 1-2 and Appendix B.

14. Crowdsourcing and research with human subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]
Justification: Our work does not involve crowdsourcing nor research with human subjects.

15. Institutional review board (IRB) approvals or equivalent for research with human subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [NA]
Justification: Our work does not involve crowdsourcing nor research with human subjects.

16. Declaration of LLM usage
Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required.
Answer: [NA]
Justification: We do not use an LLM to develop any part of the method.
arXiv:2505.21413v1 [cs.CL] 27 May 2025

REFTOOL: Enhancing Model Reasoning with Reference-Guided Tool Creation

Xiao Liu1  Da Yin2  Zirui Wu1  Yansong Feng1∗
1Wangxuan Institute of Computer Technology, Peking University
2University of California, Los Angeles
{lxlisa,ziruiwu,fengyansong}@pku.edu.cn, da.yin9712@gmail.com

Abstract

Tools enhance the reasoning capabilities of large language models (LLMs) in complex problem-solving tasks, but not all tasks have available tools. In the absence of predefined tools, prior works have explored instructing LLMs to generate tools on their own. However, such approaches rely heavily on the models' internal knowledge and would fail in domains beyond the LLMs' knowledge scope. To address this limitation, we propose REFTOOL, a reference-guided framework for automatic tool creation that leverages structured external materials such as textbooks. REFTOOL consists of two modules: (1) tool creation, where LLMs generate executable tools from reference content, validate them using illustrative examples, and organize them hierarchically into a toolbox; and (2) tool utilization, where LLMs navigate the toolbox structure to select and apply the appropriate tools to solve problems. Experiments on causality, physics, and chemistry benchmarks demonstrate that REFTOOL outperforms existing tool-creation and domain-specific reasoning methods by 11.3% on average accuracy, while being cost-efficient and broadly generalizable. Analyses reveal that grounding tool creation in references produces accurate and faithful tools, and that the hierarchical structure facilitates effective tool selection. REFTOOL enables LLMs to overcome knowledge limitations, demonstrating the value of grounding tool creation in external references for enhanced and generalizable reasoning. Code is at https://github.com/xxxiaol/RefTool .
1 Introduction

Tools play a critical role in enhancing the reasoning capabilities of large language models (LLMs), particularly in complex problem-solving tasks like mathematical reasoning [14, 32]. By integrating external tools, LLMs can use off-the-shelf modules to complete subtasks and execute precise computations, thereby improving their performance. Despite their importance, such tools are not universally available across all scenarios.

A prominent line of work attempts to mitigate this limitation by instructing LLMs to generate their own tools based on given problems [20, 2, 25]. However, these methods fall short when models lack the relevant expert knowledge, especially in specialized and novel domains. For example, if an LLM is unfamiliar with how to estimate the causal effect from a treatment variable to an outcome variable, it can hardly generate appropriate tools for such tasks.

∗Corresponding author. Preprint. Under review.

To address this challenge, we propose REFTOOL, a reference-guided framework for automatic tool creation. Unlike existing methods that rely on LLMs' internal knowledge, REFTOOL extracts and generates tools from structured reference materials, such as textbooks and technical documents. As shown in Figure 1, REFTOOL consists of two modules: tool creation and tool utilization. During
tool creation, the framework employs LLMs to generate executable tools from reference content. In the example, given a section on Inverse Probability Weighting² from a causal inference textbook, the LLM produces tools like compute_ate_ipw according to the content of the section. The generated tools consist of descriptions and functions, alongside illustrative examples of how to use the tools. These examples also serve as validation cases, filtering out incorrect or non-functional tools while retaining those that successfully solve the example problems. The validated tools are organized into a hierarchical toolbox, mirroring the structure of the reference material.

Figure 1: Overview of the REFTOOL framework, which consists of two modules: tool creation (left) and tool utilization (right).

During inference, REFTOOL guides the LLM to select tools from the toolbox hierarchically and apply them to solve problems. For an input question like "what is the average treatment effect from T to Y", the LLM navigates the toolbox hierarchy, selecting the Estimation chapter and then the compute_ate_ipw tool within it. Finally, the LLM generates the solution with the help of the selected tool. By grounding tool creation and selection in external references rather than internal knowledge, REFTOOL can construct and deploy tools beyond the model's original capabilities, enabling it to tackle tasks that would otherwise be infeasible.

We evaluate REFTOOL across three challenging domains: causality, physics, and chemistry. Experimental results on QRData [13], TheoremQA [4], and SciBench [24] show that REFTOOL outperforms general-reasoning methods like Program-of-Thoughts [3] and ReAct [30], highlighting the value of incorporating external knowledge.
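For concreteness, the running-example tool compute_ate_ipw plausibly implements the standard normalized (Hajek-style) IPW estimator. The sketch below is our assumption about its shape, not the paper's actual generated code, though it reproduces the demonstration example shown in Figure 2:

```python
import numpy as np

def compute_ate_ipw(treatment, outcome, propensity):
    """Average treatment effect via normalized inverse probability weighting:
    the weighted mean outcome of the treated (weights 1/e(X)) minus the
    weighted mean outcome of the controls (weights 1/(1 - e(X)))."""
    t = np.asarray(treatment, dtype=float)
    y = np.asarray(outcome, dtype=float)
    e = np.asarray(propensity, dtype=float)
    treated = np.sum(t * y / e) / np.sum(t / e)
    control = np.sum((1 - t) * y / (1 - e)) / np.sum((1 - t) / (1 - e))
    return float(treated - control)

# Figure 2's demonstration example: T=[0,1,0,1], Y=[2,3,1,4], e=[.2,.8,.2,.8]
ate = compute_ate_ipw([0, 1, 0, 1], [2, 3, 1, 4], [0.2, 0.8, 0.2, 0.8])
# ate == 2.0, matching the example's expected answer
```

This is exactly the kind of small, self-contained function the framework validates against its own demonstration example before admitting it to the toolbox.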
Notably, retrieval-augmented generation with the same textbook references fails to match this performance, revealing REFTOOL's unique strengths in transforming knowledge into tools and utilizing hierarchical organization for tool selection. REFTOOL achieves an average accuracy improvement of 11.3% over existing tool creation and domain-specific methods [20, 19, 22]. Unlike prior works that depend on manually constructed toolsets or extensive trial-and-error on validation data, REFTOOL achieves greater efficiency in both time and computational cost. The generated tools also exhibit strong generalization: rather than being dataset-specific, they maintain robust performance across diverse datasets, akin to humans learning from textbooks.

Further analyses highlight the benefits of reference-guided tool creation and hierarchical tool selection. Human evaluation reveals that, with the help of reference books, most tools created by REFTOOL are correct, faithful to the books, and useful in solving downstream problems. In addition, the hierarchical organization of tools according to the books' structures enables effective tool selection, outperforming similarity-based retrieval by an average of 3.3% in accuracy.

² Inverse probability weighting is a common method for estimating causal effects.

To summarize, we propose REFTOOL, a reference-guided framework for tool creation, with the following advantages: (1) By leveraging structured reference materials, REFTOOL enables LLMs to generate tools beyond their internal knowledge, while the reference hierarchy naturally serves as a structure for organizing and selecting tools. (2) Experiments on three complex problem-solving benchmarks demonstrate that REFTOOL consistently improves model reasoning performance over existing baselines. (3) REFTOOL generates dataset-agnostic tools
in a cost-efficient and human-free manner, demonstrating its potential for application in diverse scenarios.

2 The REFTOOL Framework

As shown in Figure 1, REFTOOL operates in two stages: (1) constructing a hierarchical toolbox T from reference material R, and (2) selecting and applying tools t ⊂ T to answer the input question q during inference.

Figure 2: Example of a generated tool and its corresponding reference content. Code comments are omitted due to space limits. [The figure shows the tool compute_ate_ipw, created from Section 7.6 (Inverse Probability Weighting) of Chapter 7 (Estimation) of the reference book. Description: "Compute the average treatment effect (ATE) using inverse probability weighting based on estimated propensity scores." Example problem: "Given treatment indicators T=[0, 1, 0, 1], observed outcomes Y=[2, 3, 1, 4], and estimated propensity scores [0.2, 0.8, 0.2, 0.8], compute the average treatment effect (ATE) using IPW." Expected answer: 2.0.]

2.1 The Tool Creation Module

Structure Extraction The tool creation process begins with extracting the structure of the reference materials. Reference materials are first converted to LaTeX format, preserving their inherent structure. Such documents, textbooks in particular, typically exhibit a clear hierarchical structure that supports systematic knowledge acquisition. At the highest level, the documents are usually organized into chapters (e.g., \chapter{Estimation} in the causal inference book), and chapters are further decomposed into sections (e.g., \section{Inverse Probability Weighting}), where each section addresses a particular technique, theorem, or application within the chapter's broader context.

Initial Tool Generation Given the content of each section s_i ∈ R, the LLM is instructed to generate executable tools based on the content. Each tool contains the following components, as shown in Figure 2:
• Description: Natural language summary of the tool's purpose.
• Function: Python implementation of the tool, with comments describing the parameters and returns.
• Example: A demonstration of how to use the tool, consisting of a problem, a piece of solution code in which the tool is called, and the expected answer. The model prioritizes examples from the reference text when available, otherwise generating an appropriate example by itself.

The LLM is asked to generate at most m tools for each section. To ensure proper formatting, the prompt includes a human-written tool example from a different domain.

Tool Filtering and Refinement Each tool is validated through execution testing and output verification using the model-generated demonstration example: the solution code should run without errors, and the output should match the expected answer. Failed tools trigger a refinement step, in which the failure information is provided to the LLM to refine the tool. Finally, the valid tools are organized hierarchically into a toolbox, mirroring the source document's structure.

2.2 The Tool Utilization Module

Hierarchical Tool Selection During inference, REFTOOL performs hierarchical retrieval to select tools for question q in two phases:
• Chapter Selection: Given the reference material's table of contents C, the model is instructed to select at most n_c relevant chapters c ⊂ C for the question q.
• Tool Selection within Chapter: For each selected chapter c_i, the model is given access to all tools from the toolbox T associated with
that chapter, including their descriptions, functions, and demonstration examples. It is then prompted to select up to n_t relevant tools t, or none if no tools are deemed applicable.

Solution Generation The selected tools are then integrated into the reasoning process. We incorporate the tools into two reasoning paradigms: single-turn Program-of-Thoughts (PoT) reasoning [3] and multi-turn ReAct-style agent reasoning [30]. For both paradigms, the model receives the selected tools in the initial prompt and is instructed to invoke them when appropriate. When no suitable tools are identified, REFTOOL defaults to standard PoT or ReAct reasoning, ensuring graceful degradation for questions outside the reference domain.

3 Experiments

We conduct experiments on three complex problem-solving domains: causality, physics, and chemistry. This section introduces the experimental setup and presents the performance of REFTOOL across these domains.

3.1 Experimental Setup

Datasets We employ the following evaluation benchmarks:
• Causality: QRData-causal [13] (269 questions). Each question is accompanied by one or more datasheets, and models are asked to analyze the datasheets and answer causal questions.
• Physics: TheoremQA-physics [4] (114 questions), covering broad topics of university-level physics.
• Chemistry: SciBench-chemistry [24], focusing on three sub-datasets (chemmc, quan, and matter) related to physical and quantum chemistry (118 questions in total). We omit the other sub-dataset, atkins, because our reference material overlaps with its question source.

We maintain evaluation protocols consistent with the original benchmarks (e.g., answer extraction methods and tolerance rates; see Appendix A for details) and report accuracy as our primary metric.

Reference Materials Analogous to humans preparing for an exam by reading relevant textbooks, we select reference materials whose domain of knowledge is similar to that of the evaluation datasets.
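Returning to the filtering step of Section 2.1, the execution-testing and output-verification loop can be sketched as a small harness. The dictionary layout, the convention that the example solution assigns its result to `answer`, and the numeric tolerance are all our assumptions, not the paper's specification:

```python
import math

def validate_tool(tool):
    """Execution testing + output verification for one generated tool.

    `tool` is assumed to hold the function source, the demonstration
    example's solution code (which assigns its result to `answer`), and
    the expected answer -- mirroring the Description/Function/Example layout.
    Returns False on any execution error or answer mismatch, which would
    trigger the refinement step.
    """
    namespace = {}
    try:
        exec(tool["function"], namespace)          # must define the tool
        exec(tool["example_solution"], namespace)  # must run without errors
    except Exception:
        return False
    got = namespace.get("answer")
    expected = tool["example_answer"]
    if isinstance(expected, float):
        return got is not None and math.isclose(got, expected, rel_tol=1e-6)
    return got == expected
```

Only tools passing this check would be admitted to the hierarchical toolbox; the rest are sent back to the LLM with their failure information.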
For causality, we choose Introduction to Causal Inference [16], which provides a detailed description of the main causal inference topics such as causal discovery and estimation. For physics, as university physics is a broad domain, we choose the three-volume textbook University Physics [11], which covers the core concepts of physics such as mechanics, thermodynamics, and modern physics. For chemistry, as the evaluation benchmark is in the domain of physical chemistry, we choose a well-known physical chemistry textbook, Atkins' Physical Chemistry [1].³ Table 1 (left) provides detailed statistics. Note that none of the evaluation questions originate from these books, and none of these books contain code directly.

Implementation Details We employ GPT-4o [8] for tool creation in the main paper, and evaluate four powerful LLMs for tool utilization: Llama-3.1-70B [6], Gemini-1.5-Pro [23], GPT-4 [17], and GPT-4o. The specific versions are Llama-3.1-70B-Instruct, gemini-1.5-pro-002, gpt-4-1106-preview, and gpt-4o-2024-11-20.

³ Quantum chemistry is a subdomain of physical chemistry, and is also introduced in this textbook.

Table 1: Statistics of the reference materials and created tools. "Avg. Lines" indicates the average number of lines of the tool functions.

Domain | Book | # Chapters | # Sections | # Tools | Avg. Lines
Causality | Introduction to Causal Inference [16] | 11 | 55 | 84 | 24
Physics | University Physics [11] | 44 | 284 | 515 | 16
Chemistry | Atkins' Physical Chemistry [1] | 19 | 90 | 158 | 17

During tool creation, we set m = 2
tools per section across all domains. This can be adjusted based on each section's length and information density. For tool utilization, we employ a default configuration of selecting n_c = 1 chapter and n_t = 1 tool. As QRData and TheoremQA do not have a validation set, we use the default setting for the causality and physics domains. For chemistry, we perform a grid search over n_c ∈ [1, 2] and n_t ∈ [1, 2] on the validation set of SciBench, and choose n_c = 1 and n_t = 2. Additional details and prompt templates can be found in Appendices A and F.

Baseline Methods We compare against the following baselines, with more details in Appendix A:
• General reasoning methods: We implement Program-of-Thoughts (PoT) for single-turn reasoning and ReAct for multi-turn reasoning.⁴ We also experiment with direct reasoning and Chain-of-Thought [28] on GPT-4 in Appendix B.1, but they are excluded from the main comparisons due to inferior performance.
• Retrieval-augmented generation (RAG) methods: To investigate whether LLMs can directly learn from reference materials, we enhance both PoT and ReAct with RAG, using the same reference books employed for tool creation. The books are segmented into subsections, and the segment with the highest similarity to the question embedding is retrieved.
• General-purpose tool creation methods: We include Creator [20] as a representative method, which dynamically creates tools for each question and performs error correction. We exclude methods that rely on training data [2, 31] or require multiple test-time predictions [25] to maintain a fair zero-shot comparison.
• Domain-specific reasoning methods: Our evaluation includes (1) Physics Reasoner [19], which manually constructs a formula set and instructs LLMs to retrieve formulas during reasoning; and (2) for chemistry, both StructChem [18], which instructs LLMs to generate formulas before reasoning, and ChemAgent [22], which builds a library of memories through extensive trial-and-error on validation data.
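The RAG baseline's segment-and-retrieve step can be sketched as below. The bag-of-words cosine is only a stand-in for the text-embedding-3-large embeddings the paper uses, and the 1,000-token chunking mirrors the setup described in Appendix A (tokens approximated here by whitespace splitting).

```python
import math
from collections import Counter

def chunk(text, max_tokens=1000):
    # Split an over-long subsection into ~max_tokens-token chunks.
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

def embed(text):
    # Bag-of-words vector; a stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, segments):
    # Return the segment most similar to the question embedding;
    # it is then included in the model's prompt.
    q = embed(question)
    return max(segments, key=lambda s: cosine(q, embed(s)))
```

The contrast with RefTool is that this baseline injects raw text, whereas RefTool injects executable tools selected through the book's chapter hierarchy.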
We do not compare with methods that use web search, as some answers may be found directly on the Internet.

3.2 Results

Toolbox Construction Table 1 (right) reports statistics of the created tools. On average, 73% of initially generated tools pass validation directly, with an additional 14% succeeding after refinement. We assess tool quality through human evaluation in §4.1.

Main Results The performance comparison in Table 2 demonstrates REFTOOL's superior performance across all domains, achieving the highest average accuracy.⁵ Notably, REFTOOL surpasses Creator by an average margin of 12.3%, highlighting the advantage of reference-based tool creation over tools created solely by LLMs. While RAG incorporates the same reference materials, it fails to consistently enhance performance, showing an average accuracy decrease of 0.2% over the corresponding PoT/ReAct baselines. This suggests that direct retrieval struggles to effectively extract and apply relevant knowledge, while REFTOOL's tool format and hierarchical organization enable better utilization of reference materials. The ablation study in §4.2 further analyzes the impact of both components.

⁴ While ReAct demonstrates effectiveness on QRData by allowing error correction through multi-turn interactions [13], our preliminary experiments (Appendix Table 6) show limited benefits for the physics and chemistry domains, likely due to the simpler code solutions without data analysis and fewer execution errors. Consequently, we omit ReAct for these domains.
⁵ Chemistry results
are averaged across three sub-datasets, with detailed per-sub-dataset performance shown in Appendix Table 7.

Table 2: Performance of REFTOOL and baseline methods in the causality (QRData), physics (TheoremQA), and chemistry (SciBench) domains. Numbers are in percentages (%), with the best performance for each model shown in bold.

Causality:
Method | Llama-3.1-70B | Gemini-1.5-Pro | GPT-4 | GPT-4o | Average
PoT | 33.1 | 41.3 | 34.2 | 39.8 | 37.1
PoT + RAG | 29.7 | 36.4 | 37.5 | 42.0 | 36.4
PoT + REFTOOL | 36.8 | 43.9 | 38.7 | 46.8 | 41.6
Creator | 14.9 | 29.7 | 39.4 | 39.8 | 31.0
ReAct | 30.1 | 47.6 | 50.9 | 46.5 | 43.8
ReAct + RAG | 32.3 | 46.8 | 48.0 | 49.1 | 44.1
ReAct + REFTOOL | 33.5 | 48.3 | 51.3 | 52.0 | 46.3

Physics:
Physics Reasoner | 48.2 | 50.9 | 42.1 | 33.3 | 43.6
Creator | 40.4 | 57.0 | 35.1 | 40.4 | 43.2
PoT | 48.2 | 57.9 | 45.6 | 57.0 | 52.2
PoT + RAG | 44.7 | 57.0 | 44.7 | 57.9 | 51.1
PoT + REFTOOL | 53.5 | 58.8 | 49.1 | 57.9 | 54.8

Chemistry:
Creator | 40.1 | 60.0 | 46.9 | 43.3 | 47.6
StructChem | 37.9 | 50.2 | 29.7 | 40.5 | 39.6
ChemAgent | 48.2 | 65.5 | 52.5 | 58.9 | 56.3
PoT | 46.9 | 62.3 | 51.8 | 58.9 | 55.0
PoT + RAG | 48.1 | 63.7 | 54.1 | 56.6 | 55.6
PoT + REFTOOL | 49.5 | 66.4 | 53.4 | 61.3 | 57.7

Among the domain-specific methods, Physics Reasoner and StructChem perform worse than PoT, with their complex format requirements leading to suboptimal adaptation on some models. Although ChemAgent approaches REFTOOL's performance, it incurs significantly higher computational costs, as discussed in §4.3.

Performance on Reasoning Models We also conduct a small-scale experiment to evaluate whether REFTOOL works for reasoning models. We apply REFTOOL to o1-mini [9] (specific version o1-mini-2024-09-12), and Appendix Table 8 shows that its average accuracy improves by 4.3% over PoT. This indicates that REFTOOL is also compatible with reasoning models, supplementing their knowledge and skills.

Robustness of Tool Creation We validate whether REFTOOL remains effective when alternative LLMs are used in the tool creation module.
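Accuracy in these tables is computed under each benchmark's tolerance for numerical answers (3% for QRData, 4% for TheoremQA, 5% for SciBench; see Appendix A). A minimal checker might look like the sketch below; treating the tolerance as relative is our reading, not the benchmarks' actual evaluation code.

```python
def is_correct(pred, gold, rel_tol=0.03):
    # Relative-tolerance check for numeric answers; rel_tol is the
    # benchmark-specific rate (assumed 0.03 for QRData, 0.04 for
    # TheoremQA, 0.05 for SciBench).
    if gold == 0:
        return abs(pred) <= rel_tol
    return abs(pred - gold) / abs(gold) <= rel_tol

def accuracy(preds, golds, rel_tol=0.03):
    # Percentage of predictions within tolerance of their gold answers.
    hits = sum(is_correct(p, g, rel_tol) for p, g in zip(preds, golds))
    return 100.0 * hits / len(golds)
```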
As shown in Appendix Table 9, employing Gemini-1.5-Pro for tool creation also achieves superior performance compared to the baseline methods.

Tool Reusability We conduct experiments on another physics dataset, SciBench-fund [24], to validate the generalizability of the tools created by REFTOOL. Appendix Table 10 shows that on SciBench-fund, REFTOOL outperforms all zero-shot baseline methods and matches 4-shot Physics Reasoner, using the same tools as in the evaluation of TheoremQA. As REFTOOL is dataset-agnostic, tools created for one domain can be applied to different datasets in that domain.

4 Analysis

In this section, we further analyze the effectiveness of REFTOOL from multiple perspectives: human evaluation of tool quality and selection consistency (§4.1), an ablation study of key components (§4.2), a computational cost analysis (§4.3), and a case study of how REFTOOL helps LLMs answer questions (§4.4).

Table 3: Results of human evaluation. Numbers are in percentages (%).

(a) Tool Quality Assessment. Example correctness is assessed only when the function is correct.

Domain | Faithful | Function Correct | Example Correct | Useful
Causality | 95 | 95 | 100 | 90
Physics | 90 | 90 | 100 | 90
Chemistry | 90 | 90 | 89 | 95

(b) Consistency of Tool Selection with Humans. Chapter selection consistency is calculated as the fraction of questions where human and model chapter choices match (when
both select a chapter). Tool selection consistency is the fraction where their tools overlap, given that both choose tools from the same chapter.

Domain | Consistency | Llama-3.1-70B | Gemini-1.5-Pro | GPT-4 | GPT-4o
Causality | Chapter Selection | 100 | 95 | 100 | 100
Causality | Tool Selection within Chapter | 94 | 100 | 91 | 94
Physics | Chapter Selection | 80 | 80 | 75 | 76
Physics | Tool Selection within Chapter | 53 | 56 | 44 | 69
Chemistry | Chapter Selection | 55 | 65 | 72 | 75
Chemistry | Tool Selection within Chapter | 60 | 67 | 40 | 90

4.1 Human Evaluation

We conduct a human evaluation to assess (1) the quality of the created tools and (2) the alignment between LLM-selected tools and those chosen by domain experts.

Tool Quality Assessment We evaluate tools along four dimensions. (1) Faithfulness: whether the tool accurately reflects the source material. (2) Function Correctness: whether the tool function meets the tool description and is implemented correctly. (3) Example Correctness: whether the example solution uses the function properly and returns the right answer. (4) Usefulness: the practical utility of the tool for solving relevant problems without being too narrow. For each domain, we randomly sample 20 tools from the toolbox and ask a human expert who has studied the corresponding courses to annotate them. As shown in Table 3a, all aspects are satisfied by ≥89% of the tools, indicating that most tools are faithfully derived from the references, correctly implemented, and useful in application. Chemistry tools show slightly lower quality due to the domain's complexity. When LLMs lack foundational knowledge, they may misinterpret nuanced concepts, such as mistaking the meaning of a coefficient.

Consistency of Tool Selection with Humans We compare human and LLM tool selection by having experts simulate the hierarchical selection process. For each domain, we randomly sample 20 questions where models consistently use tools (see Appendix C for more details). Table 3b shows the consistency between LLMs and human experts.
We observe higher agreement in chapter selection compared to tool selection, supporting our hierarchical selection step, which narrows down the tool search space with high consensus. Comparing across domains, causality achieves the highest consistency in both chapter and tool selection. This stems from the domain's more straightforward questions containing keywords such as average treatment effect that directly point to specific knowledge. Physics and chemistry questions, by contrast, often involve more indirect formulations that may challenge models in identifying the required knowledge. Gemini-1.5-Pro and GPT-4o demonstrate better alignment with human experts, mirroring their superior PoT performance, which reflects stronger internal domain knowledge. While Llama-3.1-70B and GPT-4 show weaker consistency with humans in physics and chemistry tool selection, their chosen tools are still valuable, as relevant knowledge is recalled. In cases where these models select the same chapter as humans but different tools, we observe a 17% accuracy improvement from tool usage compared to the PoT baseline. Appendix D provides a concrete example.

Table 4: Ablation results. (sim) indicates selecting with text similarity. Numbers are in percentages (%), with the best performance for each model shown in bold.

Causality:
Method | Llama-3.1-70B | Gemini-1.5-Pro | GPT-4 | GPT-4o | Average
PoT + RAG | 29.7 | 36.4 | 37.5 | 42.0 | 36.4
PoT + Hierarchical RAG | 36.8 | 38.7 | 43.5 | 36.8 | 39.0
PoT + REFTOOL (sim) | 30.5 | 36.1 | 46.1 | 39.4 | 38.0
PoT + REFTOOL | 36.8 | 43.9 | 38.7 | 46.8 | 41.6

Physics:
PoT + RAG | 44.7 | 57.0 | 44.7 | 57.9 | 51.1
PoT + Hierarchical RAG | 44.7 | 64.0 | 44.7 | 55.3 | 52.2
PoT + REFTOOL (sim) | 45.6 | 62.3 | 43.0 | 56.1 | 51.8
PoT + REFTOOL | 53.5 | 58.8 | 49.1 | 57.9 | 54.8

4.2 Ablation Study

We design two variants of REFTOOL to analyze its key components: code-form tool creation and hierarchical selection.
• PoT + Hierarchical RAG: substitutes REFTOOL's code-form tools with raw text segments while preserving the hierarchical structure. This maintains the three-step reasoning process of chapter selection, intra-chapter text retrieval, and solution generation, allowing us to isolate the impact of tool representation.
• PoT + REFTOOL (sim): retains the tool creation but replaces hierarchical selection with similarity-based retrieval. Tool descriptions are encoded into embeddings, with the most similar tool selected for each problem, mirroring standard RAG approaches but using tools instead of text.

Table 4 shows the ablation results. Due to computational constraints, we focus on the causality and physics domains with single-turn reasoning. By comparing PoT + REFTOOL with PoT + Hierarchical RAG, as well as PoT + REFTOOL (sim) with PoT + RAG, we observe an average 1.9% accuracy gain of tools over textual knowledge, confirming that the code form of tools enhances model understanding and application of knowledge. By comparing PoT + REFTOOL with PoT + REFTOOL (sim), as well as PoT + Hierarchical RAG with PoT + RAG, we find that hierarchical selection outperforms similarity-based retrieval by 2.6% on average, demonstrating its effectiveness in knowledge retrieval.

4.3 Cost Analysis

Table 5: Cost analysis of tool-augmented methods (with GPT-4o as the base model). "-" indicates that the step is done by humans and the cost is unknown.

Domain | Method | Toolbox Construction Time (min.) | Inference Time (min.) | Toolbox Construction Cost ($) | Inference Cost ($)
Physics | Physics Reasoner | - | 75 | - | 3.5
Physics | PoT + REFTOOL | 5 | 2 | 6.9 | 1.5
Chemistry | ChemAgent | 1233 | 536 | 79.3 | 41.3
Chemistry | PoT + REFTOOL | 3 | 6 | 3.5 | 1.4

Table 5 shows that REFTOOL greatly reduces cost compared with other tool-augmented methods. Compared with Physics Reasoner, which iteratively refines the reasoning process, REFTOOL reduces inference time by 97% and cost by 57%. The improvements are even more pronounced when compared to ChemAgent's divide-and-retry strategy: REFTOOL cuts both toolbox construction time and inference time by 99%. The guidance of references enables REFTOOL to achieve strong performance without repeated trial-and-error, offering a scalable solution for complex reasoning tasks.

[Figure 3 content] Input data description: The dataset in flow.csv offers continuous measurements of expression levels of multiple phosphorylated proteins and phospholipid components in human immune system cells. Question: Which cause-and-effect relationship is more likely? A. pakts473 causes pmek. B. pmek causes pakts473. C. No causal relationship exists. (a) Wrong solution when reasoning with PoT: Program of Thoughts answers A; the magnitude of the R-squared value does not indicate causality. (b) Correct solution when reasoning with PoT + RefTool, using the tool Causal Direction Fit: fit a linear model in the causal direction and compute residuals to test
for independence between the input variable and residuals. PoT + RefTool selects Chapter 11, Causal Discovery from Observational Data, and the tool Causal Direction Fit, and answers C.

Figure 3: Example case of GPT-4o with (right) and without (left) REFTOOL.

4.4 Case Study

Figure 3 demonstrates a case where GPT-4o correctly answers a question with REFTOOL. When presented with the causal discovery problem, the model successfully navigates to the relevant chapter Causal discovery from observational data and selects the appropriate causal_direction_fit tool, generating the correct solution code. In contrast, without tool assistance, the model incorrectly uses R-squared values to infer causal relationships, leading to a wrong prediction. The detailed version of the case, along with physics and chemistry cases, is in Appendix D.

5 Related Work

Automatic Tool Creation The automatic creation of tools for LLMs aims to overcome the limitations of relying solely on pre-existing tools. Most works generate tools in code format, while some generate skills or workflows in the form of abstract actions [29] or non-executable text [26]. Existing methods can be broadly categorized into two paradigms: (1) generating temporary, task-specific tools for individual queries [20], and (2) constructing reusable toolsets based on training or testing data [2, 25]. These methods have demonstrated success in mathematical reasoning [20, 2], visual question answering [31, 25], and agent-based tasks [27, 33]. Unlike previous works, which primarily rely on LLMs' internal knowledge, our method utilizes external references to create tools, enabling applications beyond the models' inherent knowledge scope.

Tool-Augmented Reasoning Tool-augmented reasoning enhances LLMs' problem-solving capabilities by integrating external tools, particularly for tasks requiring specialized knowledge or complex computation.
Some studies manually curate a small set of high-quality tools [7, 15], while others [21, 12] utilize large-scale APIs from platforms like RapidAPI or API-Bank [10]. However, the massive number of tools makes it difficult to select an appropriate tool for a task. To mitigate this, we use the hierarchical structure of reference materials to organize and select tools. This strategy is also adopted by Du et al. [5], which leverages RapidAPI's categorization for tool selection.

6 Conclusion

We present REFTOOL, a framework that enhances LLM reasoning through reference-guided tool creation. By leveraging structured materials like textbooks, REFTOOL enables LLMs to generate accurate tools beyond their internal knowledge, and the hierarchical organization of tools further enables effective tool selection. Experiments across the causality, physics, and chemistry domains show consistent improvements over existing tool-creation and domain-specific reasoning methods, while maintaining computational efficiency. REFTOOL demonstrates how grounding tool creation in authoritative references can overcome LLMs' knowledge limitations, offering a generalizable solution for complex problem-solving.

References

[1] P. W. Atkins, J. De Paula, and J. Keeler. Atkins' Physical Chemistry. Oxford University Press, 2023.
[2] T. Cai, X. Wang, T. Ma, X. Chen, and D. Zhou. Large language models as tool makers. In The Twelfth International Conference on Learning Representations, 2023.
[3] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine Learning Research, 2023.
[4] W. Chen, M. Yin, M. Ku, P. Lu, Y. Wan, X. Ma, J. Xu, X. Wang, and T. Xia. TheoremQA: A theorem-driven question answering dataset. In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023.
[5] Y. Du, F. Wei, and H. Zhang. AnyTool: Self-reflective, hierarchical agents for large-scale API calls. In International Conference on Machine Learning, pages 11812–11829. PMLR, 2024.
[6] A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Yang, A. Fan, et al. The Llama 3 herd of models. ArXiv preprint, abs/2407.21783, 2024.
[7] Y. Gu, Y. Shu, H. Yu, X. Liu, Y. Dong, J. Tang, J. Srinivasa, H. Latapie, and Y. Su. Middleware for LLMs: Tools are instrumental for language agents in complex environments. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7646–7663, 2024.
[8] A. Hurst, A. Lerer, A. P. Goucher, A. Perelman, A. Ramesh, A. Clark, A. Ostrow, A. Welihinda, A. Hayes, A. Radford, et al. GPT-4o system card. ArXiv preprint, abs/2410.21276, 2024.
[9] A. Jaech, A. Kalai, A. Lerer, A. Richardson, A. El-Kishky, A. Low, A. Helyar, A. Madry, A. Beutel, A. Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.
[10] M. Li, Y. Zhao, B. Yu, F. Song, H. Li, H. Yu, Z. Li, F. Huang, and Y. Li. API-Bank: A comprehensive benchmark for tool-augmented LLMs. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3102–3116, 2023.
[11] S. J. Ling, J. Sanny, W. Moebs, G. Friedman, S. D. Druger, A. Kolakowska, D. Anderson, D. Bowman, D. Demaree, E. Ginsberg, et al. University Physics. OpenStax, 2016.
[12] X. Liu, Z. Peng, X. Yi, X. Xie, L. Xiang, Y. Liu, and D. Xu. ToolNet: Connecting large language models with massive tools via tool graph. arXiv preprint arXiv:2403.00839, 2024.
[13] X. Liu, Z. Wu, X. Wu, P. Lu, K.-W. Chang, and Y. Feng. Are LLMs capable of data-based statistical and causal reasoning?
Benchmarking advanced quantitative reasoning with data. In Findings of the Association for Computational Linguistics: ACL 2024, pages 9215–9235, 2024.
[14] P. Lu, B. Peng, H. Cheng, M. Galley, K.-W. Chang, Y. N. Wu, S.-C. Zhu, and J. Gao. Chameleon: Plug-and-play compositional reasoning with large language models. Advances in Neural Information Processing Systems, 36:43447–43478, 2023.
[15] P. Lu, B. Chen, S. Liu, R. Thapa, J. Boen, and J. Zou. OctoTools: An agentic framework with extensible tools for complex reasoning. arXiv preprint arXiv:2502.11271, 2025.
[16] B. Neal. Introduction to causal inference, 2020.
[17] OpenAI. GPT-4 technical report. ArXiv preprint, abs/2303.08774, 2023.
[18] S. Ouyang, Z. Zhang, B. Yan, X. Liu, Y. Choi, J. Han, and L. Qin. Structured chemistry reasoning with large language models. In International Conference on Machine Learning, pages 38937–38952. PMLR, 2024.
[19] X. Pang, R. Hong, Z. Zhou, F. Lv, X. Yang, Z. Liang, B. Han, and C. Zhang. Physics Reasoner: Knowledge-augmented reasoning for solving physics problems
with large language models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 11274–11289, 2025.
[20] C. Qian, C. Han, Y. Fung, Y. Qin, Z. Liu, and H. Ji. Creator: Tool creation for disentangling abstract and concrete reasoning of large language models. In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023.
[21] Y. Qin, S. Liang, Y. Ye, K. Zhu, L. Yan, Y. Lu, Y. Lin, X. Cong, X. Tang, B. Qian, et al. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. In The Twelfth International Conference on Learning Representations, 2024.
[22] X. Tang, T. Hu, M. Ye, Y. Shao, X. Yin, S. Ouyang, W. Zhou, P. Lu, Z. Zhang, Y. Zhao, et al. ChemAgent: Self-updating library in large language models improves chemical reasoning. arXiv preprint arXiv:2501.06590, 2025.
[23] G. Team, P. Georgiev, V. I. Lei, R. Burnell, L. Bai, A. Gulati, G. Tanzer, D. Vincent, Z. Pan, S. Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. ArXiv preprint, abs/2403.05530, 2024.
[24] X. Wang, Z. Hu, P. Lu, Y. Zhu, J. Zhang, S. Subramaniam, A. R. Loomba, S. Zhang, Y. Sun, and W. Wang. SciBench: Evaluating college-level scientific problem-solving abilities of large language models. In Forty-first International Conference on Machine Learning, 2024.
[25] Z. Wang, G. Neubig, and D. Fried. TroVE: Inducing verifiable and efficient toolboxes for solving programmatic tasks. In International Conference on Machine Learning, pages 51177–51191. PMLR, 2024.
[26] Z. Z. Wang, J. Mao, D. Fried, and G. Neubig. Agent workflow memory. arXiv preprint arXiv:2409.07429, 2024.
[27] Z. Z. Wang, A. Gandhi, G. Neubig, and D. Fried. Inducing programmatic skills for agentic tasks. arXiv preprint arXiv:2504.06821, 2025.
[28] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.
Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[29] L. Wong, J. Mao, P. Sharma, Z. Siegel, J. Feng, N. Korneev, J. B. Tenenbaum, and J. Andreas. Learning adaptive planning representations with natural language guidance. In The Twelfth International Conference on Learning Representations, 2024.
[30] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. R. Narasimhan, and Y. Cao. ReAct: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2022.
[31] L. Yuan, Y. Chen, X. Wang, Y. R. Fung, H. Peng, and H. Ji. Craft: Customizing LLMs by creating and retrieving from specialized toolsets. In 12th International Conference on Learning Representations, ICLR 2024, 2024.
[32] B. Zhang, K. Zhou, X. Wei, X. Zhao, J. Sha, S. Wang, and J.-R. Wen. Evaluating and improving tool-augmented computation-intensive math reasoning. Advances in Neural Information Processing Systems, 36:23570–23589, 2023.
[33] B. Zheng, M. Y. Fatemi, X. Jin, Z. Z. Wang, A. Gandhi, Y. Song, Y. Gu, J. Srinivasa, G. Liu, G. Neubig, et al.
SkillWeaver: Web agents can self-improve by discovering and honing skills. arXiv preprint arXiv:2504.07079, 2025.

A Implementation Details

Implementation of the REFTOOL Framework By default, each tool's demonstration example is included during solution generation. For the causality domain, we omit the example because QRData questions involve data analysis and differ significantly in format from the examples. The temperature of all models is set to 0. The maximum output tokens are set to 2048 for initial tool generation, refinement, and solution generation, and 512 for hierarchical tool selection. Experiments are conducted on 8 NVIDIA A800 GPUs.

Implementation of Baseline Methods For RAG methods, reference documents are segmented by subsections. Subsections exceeding 1,000 tokens are further divided into 1,000-token chunks. These segments are then processed using the text-embedding-3-large embedding model to generate text embeddings. During inference, we compute the embedding for each question and select the text with the highest similarity score to include in the model's prompt. All baselines are evaluated in a zero-shot setting. For Creator, we provide a tool example from a different domain (math) in the tool creation prompt. For ChemAgent, we use GPT-4o for library construction to align with REFTOOL's setting.

Evaluation Protocol We adopt the original benchmarks' evaluation code for consistency. The tolerance rate for numerical questions is 3% for QRData, 4% for TheoremQA, and 5% for SciBench.

B Additional Results

B.1 Performance of More Baseline Methods

Table 6: Performance of more baseline methods on GPT-4. Numbers are in percentages (%).
Method | Causality | Physics | Chemistry
Direct Reasoning | 33.1 | 14.0 | 32.8
CoT | 41.3 | 10.5 | 35.4
PoT | 34.2 | 45.6 | 51.8
ReAct | 50.9 | 41.2 | 54.6

Previous works also compare with pure-text baselines such as direct reasoning and Chain-of-Thought (CoT) [28], but as our preliminary experiment in Table 6 shows, these methods are much inferior to PoT on GPT-4, so we do not add them to the baselines in the main paper. While CoT achieves high performance on the causality domain, this results from educated guessing on multiple-choice questions, with none of the numerical questions being answered correctly. This deviates from QRData's original goal of conducting data-based quantitative reasoning. Direct reasoning outperforms CoT in physics because, despite the instruction to answer directly, the model still generates intermediate reasoning steps for most questions. As the code for solving physics and chemistry problems is simpler and executes successfully in most cases, multi-turn reasoning is less necessary in such scenarios; therefore we do not implement the multi-turn settings ReAct and ReAct + REFTOOL. Table 6 also shows that ReAct brings limited improvement or even a negative influence on these domains.

B.2 Sub-dataset Performance of SciBench-chemistry

Table 7 shows the performance on the sub-datasets of SciBench-chemistry. While performance varies due to each sub-dataset's small scale, REFTOOL demonstrates effectiveness in most cases.

B.3 Performance on Reasoning Models

Table 8 shows REFTOOL's performance on o1-mini, where it improves average accuracy by 4.3% over PoT. This indicates REFTOOL's compatibility with reasoning models, effectively supplementing their knowledge and capabilities.

Table 7: Performance on sub-datasets of SciBench-chemistry. Numbers are in percentages (%).

Llama-3.1-70B:
Method | Chemmc | Matter | Quan | Average
Creator | 50.0 | 34.0 | 36.4 | 40.1
StructChem | 50.0 | 21.3 | 42.4 | 37.9
ChemAgent | 60.5 | 44.7 | 39.4 | 48.2
PoT | 65.8 | 44.7 | 30.3 | 46.9
PoT + RAG | 63.2 | 44.7 | 36.4 | 48.1
PoT + REFTOOL | 63.2 | 48.9 | 36.4 | 49.5

Gemini-1.5-Pro:
Creator | 73.7 | 63.8 | 42.4 | 60.0
StructChem | 57.9 | 38.3 | 54.5 | 50.2
ChemAgent | 78.9 | 66.0 | 51.5 | 65.5
PoT | 78.9 | 59.6 | 48.5 | 62.3
PoT + RAG | 78.9 | 63.8 | 48.5 | 63.7
PoT + REFTOOL | 81.6 | 66.0 | 51.5 | 66.4

GPT-4:
Creator | 60.5 | 46.8 | 33.3 | 46.9
StructChem | 36.8 | 19.1 | 33.3 | 29.7
ChemAgent | 68.4 | 46.8 | 42.4 | 52.5
PoT | 68.4 | 44.7 | 42.4 | 51.8
PoT + RAG | 65.8 | 51.1 | 45.5 | 54.1
PoT + REFTOOL | 71.1 | 46.8 | 42.4 | 53.4

GPT-4o:
Creator | 55.3 | 38.3 | 36.4 | 43.3
StructChem | 55.3 | 29.8 | 36.4 | 40.5
ChemAgent | 78.9 | 55.3 | 42.4 | 58.9
PoT | 78.9 | 55.3 | 42.4 | 58.9
PoT + RAG | 76.3 | 51.1 | 42.4 | 56.6
PoT + REFTOOL | 76.3 | 53.2 | 54.5 | 61.3

Table 8: Performance of o1-mini. Numbers are in percentages (%), with the best performance shown in bold.

Method | Causality | Physics | Chemistry
PoT | 44.2 | 56.1 | 60.5
PoT + REFTOOL | 50.2 | 57.9 | 65.6

B.4 Performance of Using Gemini-1.5-Pro as the Tool Creation Model

To assess the robustness of REFTOOL's tool creation module, we experiment with Gemini-1.5-Pro as an alternative LLM. As Table 9 shows, this configuration also achieves superior performance compared to the baseline methods. Compared to GPT-4o-created tools, a relatively lower ratio of Gemini-1.5-Pro-created tools passes the validation. In physics, GPT-4o achieves 82% direct validation success with 8% succeeding after refinement, while Gemini-1.5-Pro achieves 54% direct success with 24% succeeding after refinement. Although refinement helps recover about one quarter of the tools, approximately 20% are still filtered out, potentially leading to incomplete knowledge coverage and slightly lower overall performance compared to GPT-4o-created tools.

Table 9: Performance of REFTOOL using Gemini-1.5-Pro as the tool creation model. Numbers are in percentages (%), with the best performance for each model shown in bold.

Physics:
Method | Llama-3.1-70B | Gemini-1.5-Pro | GPT-4 | GPT-4o | Average
Physics Reasoner | 48.2 | 50.9 | 42.1 | 33.3 | 43.4
Creator | 40.4 | 57.0 | 35.1 | 40.4 | 43.2
PoT | 48.2 | 57.9 | 45.6 | 57.0 | 52.2
PoT + RAG | 44.7 | 57.0 | 44.7 | 57.9 | 51.1
PoT + REFTOOL (GPT-4o) | 53.5 | 58.8 | 49.1 | 57.9 | 54.8
PoT + REFTOOL (Gemini) | 48.2 | 58.8 | 47.4 | 58.8 | 53.3

Chemistry:
Creator | 40.1 | 60.0 | 46.9 | 43.3 | 47.6
StructChem | 37.9 | 50.2 | 29.7 | 40.5 | 39.6
ChemAgent | 48.2 | 65.5 | 52.5 | 58.9 | 56.3
PoT | 46.9 | 62.3 | 51.8 | 58.9 | 55.0
PoT + RAG | 48.1 | 63.7 | 53.7 | 56.6 | 55.5
PoT + REFTOOL (GPT-4o) | 49.5 | 66.4 | 53.4 | 61.3 | 57.7
PoT + REFTOOL (Gemini) | 48.3 | 64.9 | 52.5 | 61.6 | 56.8

Table 10: Performance on another physics dataset: SciBench-fund. Numbers are in percentages (%), with the best performance for each model shown in bold.

Method | Llama-3.1-70B | Gemini-1.5-Pro | GPT-4 | GPT-4o | Average
Physics Reasoner | 56.3 | 59.2 | 63.4 | 36.6 | 53.9
Creator | 56.3 | 70.4 | 53.5 | 57.7 | 59.5
PoT | 53.5 | 69.0 | 59.2 | 73.2 | 63.7
PoT + RAG | 54.9 | 73.2 | 59.2 | 73.2 | 65.1
PoT + REFTOOL | 57.7 | 73.2 | 64.8 | 74.6 | 67.6
Physics Reasoner (4-shot) | 62.0 | 73.2 | 63.4 | 71.8 | 67.6

B.5 Performance on Another Physics Dataset: SciBench-fund

We evaluated
REFTOOL on another physics dataset, SciBench-fund [24], with 71 questions, to test tool generalizability.⁶ Table 10 shows that REFTOOL outperforms all zero-shot baselines and matches 4-shot Physics Reasoner's performance using the same tools as in the evaluation of TheoremQA. This demonstrates REFTOOL's dataset-agnostic nature, where domain-specific tools can be applied across different datasets.

⁶ We excluded two other SciBench-physics sub-datasets as they require advanced thermodynamics and particle dynamics knowledge beyond our reference textbook's scope.

C Human Evaluation Details

In the tool selection process, given a question, annotators are first asked to select at most one chapter from the given book, and none if no chapter is relevant to the question. If they select a chapter, they are then asked to select the one tool within the chapter that is the most useful, select two tools only if they are equally useful, and select none if none of the tools are useful. For each domain, we randomly sample 20 questions where most models (at least 3 out of 4) choose to use tools. All human annotators are fairly paid.

Consistency Metrics For chapter selection, the consistency is computed as:

Consistency_chapter = |{human and model select the same chapter}| / |{both human and model select a chapter}|

And for tool selection, the consistency is computed as:

Consistency_tool = |{overlap exists between tools selected by human and model}| / |{both human and model select tools within the same chapter}|

Figure 4: Example case of GPT-4o on a causal problem with (right) and without (left) REFTOOL. This is the detailed version of Figure 3.

Figure 5: Example case of Gemini-1.5-Pro on a physics problem with (right) and without (left) REFTOOL.

Figure 6: Example case of Llama-3.1-70B on a chemistry problem with (right) and without (left) REFTOOL.

D Case Study

Figure 4 provides the detailed causality case discussed in §4.4, while Figures 5 and 6 show physics and chemistry cases.
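The two consistency metrics above can be computed directly from paired annotations. A small sketch follows; the record format is an assumption for illustration, not the paper's annotation schema.

```python
def chapter_consistency(records):
    # records: list of (human_chapter, model_chapter); either may be None
    # when that annotator judged no chapter relevant.
    both = [(h, m) for h, m in records if h is not None and m is not None]
    if not both:
        return 0.0
    same = sum(1 for h, m in both if h == m)
    return 100.0 * same / len(both)

def tool_consistency(records):
    # records: list of (human_chapter, human_tools, model_chapter, model_tools).
    # Denominator: cases where both select tools within the same chapter;
    # numerator: cases where the selected tool sets overlap.
    eligible = [r for r in records
                if r[0] is not None and r[0] == r[2] and r[1] and r[3]]
    if not eligible:
        return 0.0
    overlap = sum(1 for _, ht, _, mt in eligible if set(ht) & set(mt))
    return 100.0 * overlap / len(eligible)
```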
These cases illustrate how REFTOOL helps LLMs solve problems when standard PoT fails. Notably, in Figure 6, while the selected tool does not directly solve the question, it provides relevant knowledge that lets the LLM solve the question in a roundabout way (through the ionization energy of hydrogen); this differs from the expert solution but also leads to the correct answer.

E Limitations

Task Scope REFTOOL is primarily designed for knowledge-intensive quantitative problem-solving tasks, such as those in causality, physics, and chemistry. It may not be applicable to general code generation or web-agent tasks, where knowledge is not the primary challenge.

Domain Knowledge Dependency REFTOOL requires the LLM to possess basic domain understanding to interpret reference materials and to generate and select tools effectively. If the model lacks foundational knowledge (for instance, it cannot recognize key concepts or follow technical explanations), the generated tools may be incorrect or unusable. Future work could explore integrating lightweight domain adaptation techniques, enabling REFTOOL to handle unfamiliar domains effectively.

F Prompts

Figures 7-11 show the prompts of REFTOOL. The prompts of the general-reasoning baselines, shown in Figures 12 and 13, are designed with reference to the QRData
and SciBench papers. Prompts for ReAct are the same as in the QRData paper.

Prompt for Initial Tool Generation

Please extract the skills from the following text. The text is a section from the chapter {chapter} of the book {book}. Each skill is a python function with comments of parameters and returns, accompanied by a description and a demonstration example of using the skill. Please limit the number of skills to 2, and organize the skills in a list of json objects.
Please implement the function, and *do not* leave it as a placeholder. Note the indent in code is 4 spaces. All packages used should be imported inside the function. The function should be self-contained.
If the text contains examples, you are encouraged to use the examples in the text, otherwise please design examples by yourself. The answer to the example question is encouraged to be numerical.
NOTE THAT THE SKILL PYTHON CODE SHOULD NOT BE SPECIFIC TO/ONLY APPLIED TO THE CHOSEN EXAMPLE! PLEASE GENERATE GENERAL SKILL CODE.
The output should be in *complete* json structure, starting with '[' and ending with ']'. Example output:

[{
  "description": "Compute the expected return using the Capital Asset Pricing Model (CAPM) formula.",
  "function": """def expected_return(rf, beta, rm):
    \"\"\"
    Parameters:
    - rf (float): The risk-free rate.
    - beta (float): The beta of the portfolio.
    - rm (float): The return on the market.

    Returns:
    - float: The expected return.
    \"\"\"
    return rf + beta * (rm - rf)""",
  "example": {
    "question": "Suppose a stock has the following information. It is listed on the London stock exchange and operates throughout Europe. The yield on a UK 10 year treasury is 2.8%. The stock in question will earn 8.6% as per historical data. The Beta for the stock is 1.4, i.e., it is 140% volatile to the changes in the general stock market. What is the expected rate of return?",
    "solution": """def solution():
    # Given values.
    rf = 0.028   # The yield on a UK 10 year treasury
    beta = 1.4   # The stock is 140% volatile to the changes in the general stock market
    rm = 0.086   # The stock in question will earn 8.6% as per historical data
    # Calculate the expected return.
    result = expected_return(rf, beta, rm)
    # Return the result.
    return result""",
    "answer": 0.109
  }}]

Text: {text}

Figure 7: Prompt template for initial tool generation.

Prompt for Tool Refinement

Please revise the skill according to the feedback. The skill is a python function with comments of parameters and returns, accompanied by a description and a demonstration example of using the skill. Please try to keep the original intent of the skill, and modify the description/function/example to address the feedback.
Note the indent in code is 4 spaces. All packages used should be imported inside the function. The function should be self-contained. The answer to the example question is encouraged to be numerical.
NOTE THAT THE SKILL PYTHON CODE SHOULD NOT BE SPECIFIC TO/ONLY APPLIED TO THE CHOSEN EXAMPLE! PLEASE GENERATE GENERAL SKILL CODE.
The output should be in *complete* json structure as the
original skill, starting with '{' and ending with '}'.

Original Skill: {skill}
Feedback: {feedback}

Figure 8: Prompt template for tool refinement.

Prompt for Chapter Selection

You are a data analyst and good at quantitative reasoning. You are required to respond to a quantitative question using the provided data. The question can be found below.
Given the table of content of the book {book}, please select the chapters that you find useful in solving the question. Please provide an explanation supporting your choice.
At the last line of your response, format the number of the chapters with a list, like '[0]'. Limit the number of chapters to at most 1. Output '[]' if none of the chapters are useful. The last line should start with '[' and end with ']'.

Question: {question}
Table of Content: {table_of_content}
Response:

Figure 9: Prompt template for chapter selection.

Prompt for Tool Selection within Chapter

You are a data analyst and good at quantitative reasoning. You are required to respond to a quantitative question. The question and the list of skills can be found below.
Please select the skills that you find useful in solving the question. Please provide an explanation supporting your choice.
At the last line of your response, format the number of the skills with a list, like '[0]'. Limit the number of skills to at most 2. Output '[]' if none of the skills are useful. The last line should start with '[' and end with ']'.

Question: {question}
List of skills: {tools}
Response:

Figure 10: Prompt template for tool selection within chapter.

Prompt for Solution Generation

You are a data analyst and good at quantitative reasoning. You are required to respond to a quantitative question below.
Please write python code to answer the question. Please encase the Python code within triple backticks. You can use any python library you imported. The returned value of the code is supposed to be the answer.
The format of the code should be

```python
def solution():
    # import libraries if needed
    # write code to get the answer
    # return answer
```

Question: {question}

Please note that we provide you several functions for the above question. If the functions are related to the question, you are encouraged to use the functions to solve the question. The functions will also be provided in execution, so just call them. *DO NOT* define the functions again or import the functions.

Functions: {tools}
Response:

Tool Template

Function Description: {description}
Function: {function}
Example Question: {example_question}
Example Solution: {example_solution}

Figure 11: Prompt template for solution generation. For evaluation of QRData, the data description and ten lines of the shuffled data are also added to the prompt along with the question.

Prompt for PoT

You are a data analyst and good at quantitative reasoning. You are required to respond to a quantitative question below.
Please write python code to answer the question. Please encase the Python code within triple backticks. You can use any python library you imported. The returned value of the code is supposed to be the answer. The format
of the code should be

```python
def solution():
    # import libraries if needed
    # write code to get the answer
    # return answer
```

Question: {question}
Response:

Figure 12: Prompt template for PoT. For evaluation of QRData, the data description and ten lines of the shuffled data are also added to the prompt along with the question.

Prompt for CoT

You are a data analyst and good at quantitative reasoning. You are required to respond to a quantitative question below.
Please provide a clear and step-by-step solution to answer the question. Do not write any code in your answer. Conclude the answer by stating "The answer is therefore \boxed{[ANSWER]}."

Question: {question}
Response:

Prompt for Direct Reasoning

You are a data analyst and good at quantitative reasoning. You are required to respond to a quantitative question below.
Directly answer by stating "The answer is therefore \boxed{[ANSWER]}."

Question: {question}
Response:

Figure 13: Prompt templates for CoT and direct reasoning. For evaluation of QRData, the content of the data (shuffled and truncated to the first 3500 tokens) is also added to the prompt along with the question. For evaluation of SciBench, the prompt also states "The question will specify the unit of measurement, which should not be included in the answer. Express the final answer as a decimal number with three digits after the decimal point."
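The solution-generation step can be sketched as a small harness that extracts the model's fenced code block and executes it with the selected tools pre-loaded in the namespace, as the prompt instructs. This is an illustrative sketch, not the paper's code; `extract_code` and `run_solution` are our names, and only the CAPM tool comes from the Figure 7 example:

```python
import re

def extract_code(response):
    """Pull the first triple-backtick (optionally ```python) block from an LLM response."""
    m = re.search(r"```(?:python)?\n(.*?)```", response, re.DOTALL)
    if m is None:
        raise ValueError("no code block in response")
    return m.group(1)

def run_solution(response, tool_source):
    """Exec the selected tool functions, then the model's solution(); return its value.
    The tools are provided in the execution namespace, so the generated code can
    call them without redefining or importing them."""
    ns = {}
    exec(tool_source, ns)             # inject the selected tools
    exec(extract_code(response), ns)  # define solution()
    return ns["solution"]()
```

With the Figure 7 CAPM tool as `tool_source` and a response whose `solution()` calls `expected_return(0.028, 1.4, 0.086)`, `run_solution` returns roughly 0.109, matching the example answer.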
arXiv:2505.21414v1 [cs.LG] 27 May 2025

A Framework for Adversarial Analysis of Decision Support Systems Prior to Deployment

Brett Bissey*1, Kyle Gatesman*1, Walker Dimon1, Mohammad Alam1, Luis Robaina1, Joseph Weissman1

*Equal contribution. 1AI & Autonomy Center, MITRE Labs, McLean, VA, United States. Correspondence to: Brett Bissey (bbissey@mitre.org), Kyle Gatesman (kjgatesman@mitre.org). Copyright ©2024 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for Public Release; Distribution Unlimited. Public Release Case Number 24-2499.

Abstract

This paper introduces a comprehensive framework designed to analyze and secure decision-support systems trained with Deep Reinforcement Learning (DRL), prior to deployment, by providing insights into learned behavior patterns and vulnerabilities discovered through simulation. The introduced framework aids in the development of precisely timed and targeted observation perturbations, enabling researchers to assess adversarial attack outcomes within a strategic decision-making context. We validate our framework, visualize agent behavior, and evaluate adversarial outcomes within the context of a custom-built strategic game, CyberStrike. Utilizing the proposed framework, we introduce a method for systematically discovering and ranking the impact of attacks on various observation indices and time-steps, and we conduct experiments to evaluate the transferability of adversarial attacks across agent architectures and DRL training algorithms. The findings underscore the critical need for robust adversarial defense mechanisms to protect decision-making policies in high-stakes environments.

1. Introduction

AI-enabled decision support systems trained in simulation are increasingly being deployed in safety-critical environments, making them vulnerable targets for adversarial attacks. Deep reinforcement learning (DRL) has been effective in training superhuman policies in strategic board games (Silver et al., 2017), video games like StarCraft (Vinyals et al., 2019), robotics tasks (Rajeswaran et al., 2018), and autonomous driving (Kiran et al., 2021). However, due to the reliance on deep neural networks (DNN) for decision-making, analyzing the strengths and vulnerabilities of DNN policies trained with DRL requires additional methodology. Adversarial attacks can manipulate the system's perception of the environment through difficult-to-detect observation perturbations, leading a policy to take sub-optimal or even harmful decisions with high confidence. To address this threat, it is essential to develop a framework that can assure the safety of decision-support systems prior to deployment, both by probing potential vulnerabilities and by offering operators insights into the learned behavior. In this paper, we explore methods to develop optimally timed and targeted attacks, as well as to measure attack impact and transferability within a classic reinforcement learning (RL) setting. Our methodology involves collecting attack data, designing attack strategies that produce realistic and feasible perturbations, and measuring the impact of these attacks on various properties of the RL environment. We employ a custom-built strategic game, CyberStrike, as our experimental environment to validate our framework and visualizations. Our contributions are threefold. First, we develop an analysis and visualization framework to help operators and researchers understand a policy's learned behavior and vulnerabilities. Second, we develop a method to programmatically discover and rank the property impacts of attacking various observation indexes at various steps of an episode. Third, we test the transferability of adversarial attacks across agents trained with different
https://arxiv.org/abs/2505.21414v1
algorithms and learning curricula.

2. Related Work

Adversarial attacks on neural network policies were first explored in (Huang et al., 2017), which extended previous work on adversarial attacks in the computer vision domain, such as the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) and Carlini-Wagner attacks (Carlini & Wagner, 2017). However, attacking action selection alone may be too shallow a target to propagate meaningful influence toward a desired environmental property outcome. Methods introduced in (Hasanbeig et al., 2020) and (Velasquez et al., 2021) suggest utilizing formal languages, such as Linear Temporal Logic (LTL), to define objectives and constraints for DRL policies, assist in reward function design, and explain behavior patterns of policies acting within a Markov Decision Process (MDP). More recent work (Gross et al., 2022) explores adversarial methods that impact atomic properties of the formalized environment, employing the aforementioned action-influencing adversarial attacks as building blocks to influence higher-level properties of the environment MDP and the LTL objective specification. In addition to considering the formal logic definitions of a policy and environment when formulating attacks, we also build upon analysis and visualization techniques that utilize the internal learned models of a policy, namely Semi-Aggregate Markov Decision Processes (SAMDP) (Baram et al., 2016). SAMDP analysis first aggregates observed agent behavior into meta-data sets, then clusters model-activation-layer embeddings within a two-dimensional space, and finally visualizes the behavioral patterns within this embedding space with respect to atomic properties of interest.
SAMDPs are used by (Tapley et al., 2023) to characterize policies and their vulnerabilities, and we supplement these methods by illustrating the impact of adversarial attacks on environment properties at various regions of the activation embedding space. While DRL algorithms train policies to act within an environment MDP, the policy's empirical action patterns within the environment are a proxy representation of some subset of the environment MDP itself, suggesting that the identification of vulnerable embedding-space regions and observation indexes of one policy may transfer to other policies acting within the same environment (Behzadan & Munir, 2017; Waseda et al., 2022), even if the policies were not trained with the same algorithm. We build upon this research to develop an analytical framework that determines optimal attack timing, attack targets, and observation perturbations to deliberately impact environment properties of interest, and visualizes this impact.

3. Methodology

3.1. Collecting Attack Data

The process for injecting adversarial attacks into the classical Reinforcement Learning (RL) loop is shown in Figure 1. Importantly, instead of directly altering the underlying state variable s_t that influences the next step of the environment dynamics, our adversary is only allowed to change the agent's perception of s_t by sending a perturbed adversarial observation o_t to the agent. As such, the adversary can only influence environment dynamics indirectly via the agent: the attacks engineer o_t in an attempt to control or alter
the agent's action a_t.

Figure 1. RL interaction loop with an attack injected at time step t. This time step ends with the environment dynamics using the agent's action a_t and the true state s_t to compute the next state s_{t+1} and the reward r_{t+1}. Time step t+1 may or may not have an attack.

Figure 2. Example set of attacked episode simulations stemming from an unattacked episode with 4 actions (top line). In this scenario, the attack algorithm ran several attacks on state s_1 (at time step 1), and two of these attacks induced adversarial actions a′_1 and a′′_1 that sufficiently differ from the original action a_1, meeting the criteria for simulating the rest of the episode. Taking the adversarial actions a′_1 and a′′_1 from state s_1 will produce states s′_2 and s′′_2, respectively, which may or may not differ from s_2.

To study the effects of adversarial attacks, we first obtain simulated rollouts, depicted in Figure 2. Given a state s_t and deterministic action a_t taken by the agent in state s_t, we call an adversarial attack on s_t sufficiently adversarial if the agent's adversarial observation o_t makes the agent take an action a′_t that sufficiently differs from a_t, according to some predefined distance metric over the action space and some pre-selected distance threshold. In an environment with discrete actions, for example, the sufficiently-adversarial condition is a simple inequality, a′_t ≠ a_t. However, in environments with continuous action components, we must define sufficiently-adversarial thresholds to compare a′_t and a_t. A simulated rollout from an attacked state s_t is only carried out if the adversarial action a′_t is sufficiently adversarial, so that data may be collected on the end-of-episode properties and compared to those of the unattacked trajectory, in an attempt to gauge the impact of the adversarial action.
Figure 2 illustrates a hypothetical example of the simulated rollout process from a single state of one observed episode; however, the full simulated rollout process ranges over all attacks performed on all non-terminal states of each episode in the collected data set of agent experiences.

3.1.1. Computational Costs and Sampling

In practice, a large proportion of all attempted attacks may be sufficiently adversarial, in which case running simulations to determine the impact of every sufficiently adversarial attack is computationally expensive. Specifically, under the "best-case" assumption that each environment step runs in Θ(1) time, the expected time complexity of running all of these simulated rollouts is Θ(LN), where N is the number of sufficiently adversarial attacks over the whole data set and L is the expected length (number of time-steps) from the attack point to the end of the episode. The experiments in this paper only explore the impacts of N single attack points rather than chains of multiple attacks, which would exponentially increase the time complexity. In many cases, L scales with the expected length of a full episode, often linearly. Therefore, for environments that tend to require a large number of time-steps per episode, we can expect simulated rollouts from attacks to be expensive, particularly for those attacks
that stem from early states in an episode. To combat these computational costs, stratified sampling was implemented to prune the set of attacks to simulate from, while still guaranteeing sufficient representation of desired sub-populations.

3.2. Attack Strategy Design

An attack strategy is an algorithm that decides how and when to attack the agent. In an effort to select attack strategies that produce "realistic" attacks, we propose the following rough criteria for assessing attack realism:

• Feasibility: For an attack at time t, the adversarial observation o_t must lie in the environment's state space.
• Realistic Perturbation: For an attack at time t, the perturbation of s_t should be restricted to a known (and ideally small) subset of components of the state vector, such that this perturbation could realistically model a sensor inaccuracy or malfunction in a real-world implementation of the RL environment.
• Low Severity: Across all time-steps in a given episode, the average "attack severity" (a rough measure of the attack's impact on the expected action and next state) should be low; roughly speaking, an expected action is one that an expert human operator would take if they were the agent. In other words, attacks should be sparse with respect to time, especially those known to have a severe impact on the expected action. "Benign" perturbations (those that ought to have little or no impact on the expected action) may be performed more frequently but will be filtered out of the simulated rollout process if the induced agent action is not sufficiently adversarial.

The attack strategies in our experiments satisfy the second and third conditions by limiting each perturbation to a single state component and limiting each simulated rollout to one attack (equivalently, after beginning a simulated rollout from a sufficiently adversarial attack, we do not attack further).
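The single-attack rollout procedure described above can be sketched as follows. The callables `step`, `act`, and `perturb` are hypothetical stand-ins for the environment dynamics, the frozen policy, and the perturbation algorithm; the paper does not specify these interfaces:

```python
def is_sufficiently_adversarial(a, a_adv, metric=None, threshold=0.0):
    """Discrete actions: any change counts. Continuous actions: require the
    action-space distance to exceed a pre-selected threshold."""
    if metric is None:
        return a_adv != a
    return metric(a, a_adv) > threshold

def attacked_rollout(step, act, perturb, state, horizon):
    """Attack one state; only the agent's observation is perturbed, never the
    true state. Simulate to the end of the episode only if the induced action
    is sufficiently adversarial, and do not attack again afterwards."""
    a = act(state)                # unattacked action
    a_adv = act(perturb(state))   # action induced by the adversarial observation
    if not is_sufficiently_adversarial(a, a_adv):
        return None               # benign perturbation: no rollout needed
    s = step(state, a_adv)        # environment uses the true state and a_adv
    for _ in range(horizon - 1):  # remaining steps are unattacked
        s = step(s, act(s))
    return s                      # end-of-episode properties are read from here
```

Collecting the returned terminal states over all sampled attack points yields the attacked-episode property logs compared against the unattacked trajectory in Section 3.3.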
Still, this simple attack method must use environment context to guarantee that the first condition holds. In general, additional environment context will be necessary to measure attack severity and to design more sophisticated, multi-index perturbation attack strategies. In Section 4.2, we illustrate an example of benign perturbations in a specific RL environment. In addition to constraining attacks to certain time-steps and certain state components, an attack strategy involves a perturbation algorithm that specifies how to perturb the state vector within realistic bounds. Our experiments are limited to targeted attacks, whose perturbation algorithms deliberately alter the observation in a way that encourages the agent to take a specific adversarial action a_adv. Whether such an attack ends up being sufficiently adversarial depends only on the normal action a and the attack-induced action a′, with no additional dependence on a_adv. However, our framework allows attack strategies to employ any perturbation algorithm, targeted or untargeted, as long as the three realism criteria are met. Among adversarial perturbation algorithms, the CW and FGSM attacks are particularly notable. CW attacks (Carlini & Wagner, 2017) are optimization-based methods that generate minimal perturbations capable of misleading models. Conversely, FGSM (Goodfellow et al., 2015) is a gradient-based attack that quickly creates adversarial examples
by leveraging the gradients of the loss function with respect to the input data. We default to using FGSM for our experiments, although the experimental framework is agnostic to the perturbation algorithm used.

3.3. Measuring Attack Impact

3.3.1. Defining a Property

Attack impact is measured with respect to a handful of properties of interest that are chosen in advance. Each property captures certain information about the agent's experience in the environment up to the point at which the property is measured; as such, a property value attributed to some time step of an episode should only depend on environment variables (observed and/or latent) and actions that were realized at or before that time step. All properties of interest should be computable at the very end of each episode. Certain properties, such as win/loss outcome, may only be known at the very end of the episode; other properties, such as the number of prior steps that incurred some kind of environment-based reward penalty, can be computed at any step during the episode. After a suite of properties of interest has been selected, these properties are logged during all data collection rollouts for both attacked and unattacked episodes. These logged properties, particularly those at the end of each episode, are used for downstream "property impact analysis" (Gross et al., 2022).

3.3.2.
Mathematically Modeling Properties

To describe so-called "property impact" from an attack at time step t, we start by modeling the end-of-episode value of each i-th property P_i in our suite as a random variable P_i(s_t, I_t, o_t) that is a function of three arguments:

• s_t is the state vector at time step t;
• I_t is a collection of other hidden environment information (including property logs) at time step t; and
• o_t is the (potentially adversarial) observation sent to the agent at time step t (when no attack is present, one has o_t = s_t).

When P_i can be expressed meaningfully as a scalar value, the expected value of the property becomes a relevant measure. One may estimate E(P_i(s_t, I_t, o_t)) by running repeated trials of simulations from the same (s_t, I_t, o_t) and uniformly averaging the observed values of P_i(s_t, I_t, o_t). Given an attack at time step t that replaces the true state s_t with an adversarial observation o_t, the attacked value of property P_i is defined to be P_i(s_t, I_t, o_t), and the unattacked value of property P_i is defined to be P_i(s_t, I_t, s_t) (in the latter, the agent's observation matches the true state).

3.3.3. Impact Metrics

To measure the impact of a given attack at time step t on property P_i, we feed the attacked and unattacked values of P_i into an impact metric function D(·,·) as the first and second arguments, respectively. Note that the resulting impact value D(P_i(s_t,I_t,o_t), P_i(s_t,I_t,s_t)) is a random variable. One simple impact metric for a scalar-valued property P_i is the difference P_i(s_t,I_t,o_t) − P_i(s_t,I_t,s_t), which conveys both the magnitude and direction of the observed change in the property induced by the attack at time step t. Another simple impact metric for any property P_i is

D = 1 if P_i(s_t,I_t,o_t) ≠ P_i(s_t,I_t,s_t), and D = 0 if P_i(s_t,I_t,o_t) = P_i(s_t,I_t,s_t).

While well-defined, this second impact metric
may lose saliency when P_i has any component that ranges over a continuous domain. To illustrate one way to combat this issue, if the property P_i resides in some metric space with a distance metric d(·,·), then one could construct an impact metric such as

D = 1 if d(P_i(s_t,I_t,o_t), P_i(s_t,I_t,s_t)) > d*, and D = 0 if d(P_i(s_t,I_t,o_t), P_i(s_t,I_t,s_t)) ≤ d*,

for some distance threshold d*. For each impact metric function D that is invoked on a given property P_i and a given attack s_t → o_t, one can estimate E(D(P_i(s_t,I_t,o_t), P_i(s_t,I_t,s_t))) using repeated trials. Specifically, if we let P′_i be the list of all observed values of P_i(s_t,I_t,o_t) and P_i be the list of all observed values of P_i(s_t,I_t,s_t), then E(D(P_i(s_t,I_t,o_t), P_i(s_t,I_t,s_t))) is estimated by the uniform average

(1 / (|P′_i| |P_i|)) Σ_{p′_i ∈ P′_i} Σ_{p_i ∈ P_i} D(p′_i, p_i),

where both summations range over all elements, including repeats, in the lists P′_i and P_i.

4. Experiments

The motivation behind our experiments is threefold: first, to develop an analysis and visualization framework to supplement the researcher's understanding of a policy's learned behavior and vulnerabilities; second, to analyze the property impact of attacking various observation indexes; and third, to test the transferability of adversarial attacks across agents of various training algorithms and learning curricula.

4.1. Experimental Environment

Deep reinforcement learning is increasingly being used to discover adversarial tactics, techniques, and procedures (TTPs) within the cybersecurity domain (Molina-Markham et al., 2021). The gym environment used for notional experiments is CyberStrike, a custom-built strategic network-defense game wherein an agent controlling blue nodes must determine information about the red network's tree structure and then hack into each red node's parent node recursively until reaching the target node.
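A minimal sketch of these impact metrics and the double-sum estimator, with our own function names and the property values assumed scalar (or at least comparable):

```python
def difference_impact(p_att, p_unatt):
    """Signed change in a scalar property induced by the attack."""
    return p_att - p_unatt

def indicator_impact(p_att, p_unatt, dist=None, threshold=0.0):
    """1 if the attacked value differs from the unattacked one; with a distance
    metric supplied, 1 only if the distance exceeds the threshold d*."""
    if dist is None:
        return 1 if p_att != p_unatt else 0
    return 1 if dist(p_att, p_unatt) > threshold else 0

def estimate_expected_impact(attacked, unattacked, D):
    """Uniform average of D over all (attacked, unattacked) pairs, matching the
    (1 / |P'_i||P_i|) double-sum estimator; repeats are kept in both lists."""
    total = sum(D(pa, pu) for pa in attacked for pu in unattacked)
    return total / (len(attacked) * len(unattacked))
```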
The CyberStrike environment is ripe for emergent, explainable learned strategy, in contrast to typical control-focused benchmarks such as LunarLander-v2 or Cartpole (G. Brockman, 2016). The action space is multi-discrete, made up of four blue "hackers" that can simultaneously "hack" or "eavesdrop" on a collection of red nodes.

Figure 3. A notional CyberStrike state. The blue agent controls nodes B0, B1, B2, B3. Blue chooses actions which control each blue node simultaneously, locating and disabling the target red node by peeling back the layers of the red defense network until the target node is undefended. In this example, the target node (R0) is defended by R1 and R2. R2 is defended by R3, which is defended by R4. R1 is defended by R5, R6, and R7. R6 is also defended by R7. Dashed lines denote a connection marked as unknown in the agent's observation, whereas solid lines represent a known connection. The agent begins with a fully unknown network, and must use its hackers to discover enough of the network topology to reveal the target node's (R0's) defenders and eventually hack into the target node.

If a blue hacker attempts to hack a defended red node, the red defender will counter and the blue hacker will be unavailable for the rest of the episode. The "eavesdrop" action is only available to one of the blue hackers (B3), and allows the agent to stealthily learn the defenders of a red node without risking a counter
from red. An example network structure from a mid-episode observation is displayed in Figure 3.

4.2. Experimental Setup

First, we train a suite of both Advantage Actor-Critic (A2C) (Mnih et al., 2016) and Deep Q-Network (DQN) (Mnih et al., 2013) agents within the CyberStrike environment. Following training, we collect 10,000 state-action-metadata tuples from the frozen policies acting within CyberStrike, collecting metadata such as a policy's hidden-layer activations, observation saliency, and step-wise environment properties. Due to the absence of ε-greedy or distribution sampling for exploration, we force a small percentage (5%) of random actions during data collection to inform potential adversarial targets, though the researcher may vary the percentage of random actions depending on the environment MDP and the optimality of the frozen policy. These collected data sets from the trained policies help to represent empirical policy behavior through activation clustering, SAMDP transition visualization, and other custom metadata visualizations. For example, Figure 4 shows how one can track the average change rate of a property at any attacked observation.

Figure 4. This latent space representation maps a policy's CyberStrike observations from initial time-steps in the northwest region to final time-steps in the southeast region, with an aggregation of various intermediate trajectories connecting the initial and final observations. Attacks within the denser, bluer northeast region of the space are unlikely to yield nonzero changes in final red counts, whereas attacks in the sparser and redder western regions are more likely to be successful (increase final red counts). The sparsity of activation embeddings in the western region of the latent space representation suggests the policy is less likely to have trained on observations in this region and is thus more vulnerable to adversarial attacks when acting within it.

For each collected observation, the policy's final latent activation layer is embedded in two dimensions and colored with a gradient across the aggregate change rates of a property of interest, the change rate being determined by the difference in the property value for the unperturbed versus perturbed observations. These behavioral visualizations help the researcher get a bird's-eye view of policy trajectories as they relate to environment properties, while also highlighting feasible, optimally timed, and low-severity attacks on the policy's learned strategy. We can also run adversarial attacks on (and simulated rollouts from) the observations collected in these data sets, which is necessary for both Property Impact and Attack Transferability Analysis.

4.2.1. Benign Perturbations in CyberStrike

In the CyberStrike environment, one kind of benign perturbation would consist of selecting an ordered pair (A, B) of distinct red assets, with at least one already compromised by a blue hack, and changing the agent's perception of whether or not A defends B. Such a perturbation on the defense A → B is benign because an expert human hacker would not target B if it were already compromised; and if A was already compromised, then A would not be able to counterattack a blue node following its hack on B, making the defense value for A → B irrelevant to
https://arxiv.org/abs/2505.21414v1
decision-making. Therefore, our attack-discovery framework would permit this kind of benign attack to be made frequently in a single episode, since an ideal policy ought not to behave differently under any of these attacks.

Figure 5. The average Final Red Count delta post-attack is aggregated per observation index, across all time-steps. Eight out of the ten most impactful attacked observation indexes are observed defense network nodes, suggesting that an attacker's best chance of increasing the final red count is to perturb the DQN agent's perception of the network structure at various adjacency nodes.

Figure 6. The Final Red Count delta is plotted at each step for all attacks on odn32's value in the DQN's observation. This plot suggests that attacking odn32 at the first two steps may have negative effects for the attacker, whereas attacks from step 2 onward correspond with an increase in red nodes (and thus a decrease in blue win percentage) compared to an unattacked trajectory.

4.3. Property Impact Attack Analysis

To measure and compare the aggregate property impact of attacking various observation indexes, we run adversarial attacks perturbing each observation index for each collected observation tuple. Simulated rollouts are performed only for adversarial attacks inducing an action a'_i that is sufficiently different from the original action a_i. Environment properties are measured at the terminal state of the simulated rollout and compared to the environment properties of the unattacked trajectory, to gauge the property impact of a given attack. We can aggregate these impact metrics across step numbers and observation indexes to answer questions about the ideal time-step or observation index to attack with respect to some environment property the attacker wishes to impact. In CyberStrike, we measure attack impact on properties such as win percentage, final red count, final blue count, and trajectory length.
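The per-index aggregation just described can be sketched with toy stand-ins for the policy, perturbation, and rollout functions (all hypothetical; the real framework operates on CyberStrike observations and simulated episodes):

```python
from collections import defaultdict

def property_impact(samples, act, perturb, rollout_prop):
    # samples: list of (obs, baseline_property) pairs from the collected
    # data set. A simulated rollout is only performed when the perturbed
    # observation induces an action different from the original one.
    deltas = defaultdict(list)
    for obs, baseline in samples:
        a0 = act(obs)
        for idx in range(len(obs)):
            a1 = act(perturb(obs, idx))
            if a1 == a0:          # attack did not change behavior; skip
                continue
            deltas[idx].append(rollout_prop(obs, a1) - baseline)
    # rank observation indexes by mean property delta, highest first
    ranking = {i: sum(v) / len(v) for i, v in deltas.items()}
    return sorted(ranking.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: the policy only reads index 0, so only attacks on that
# index change its action (and thus the measured terminal property).
flip = lambda obs, i: obs[:i] + (1 - obs[i],) + obs[i + 1:]
impact = property_impact(
    samples=[((0, 1, 0), 0.0), ((0, 0, 1), 0.0)],
    act=lambda obs: obs[0],
    perturb=flip,
    rollout_prop=lambda obs, a: 10.0 * a)
```

The resulting ranking plays the role of a Figure 5-style aggregation: indexes whose perturbation both changes the action and shifts the terminal property float to the top.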
Figure 5 displays a ranked aggregation of final red count deltas across attacked observation indexes for a DQN policy. The policy's most impactful observation-index attack with respect to final red count is a perturbation of the value of observed defense network node 3:2 (odn32), denoting whether R3 is defending R2 or whether this connection is unknown. In the notional example in Figure 3, R3 is defending R2, so a perturbation obscuring this information may cause the agent to take an action hacking the defended R2 node, whereas an optimal decision would be to hack R2's defender node, R3, first. Figure 6 displays the impact aggregation across time-steps for all attacks on the odn32 observation value, suggesting that attacks early in an episode, at time-steps 0 and 1, may have negative consequences for the attacker by decreasing their average final red count. Figure 6 also shows that time-steps 10 and 15 lead to the largest average final red count increase, suggesting an adversary may have the greatest impact on final red count in the middle of an episode, rather than at the beginning or end.

4.3.1. Property Impact Analysis Results

The Property Impact Analysis results indicate that adversarial attacks may exert both positive and negative influences on the attacker's desired outcome,
dependent on the time-step at which the attack is brokered. By strategically timing the manipulation of the most vulnerable observation components of a policy, we are able to observe significant variations in policy behavior, leading to notable changes in the environment properties and the game outcome, thus demonstrating the ability to deliberately impact an external environment property by choosing a specific adversarial attack target at a specific time-step.

Table 1. Attack transferability results from the experiments outlined in Section 4.4.2. The attacks are configured using the policy in the Attack Source column, targeting the No-Op (left) and max(loss-win) (right) action targets. The attacks are then run on the attack-source policy and transferred to the other four policies of interest. Three metrics are recorded per cell: transferability success rate (white sub-cell), target-transferability count out of one million (light gray sub-cell), and sub-action target-transferability success proportion (dark gray sub-cell). Self-attacks, where the attack source and target policy are the same, are also included in this table.

Figure 6 displays the "final red counts" property outcomes when a red attacker attacks the odn32 observation component at various time-steps. Specifically, we find that attacking the observation at the initial steps of the game may lead to an unexpected decrease in the final red count, which runs contrary to the attacker's objective. However, as the game progresses, attacks on specific observations can result in a more favorable increase in the final red count, aligning with the attacker's strategic intentions. This finding underscores the dynamic nature of learned policies, even within simple environments.
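The timing analysis behind a Figure 6-style aggregation reduces to grouping attack outcomes by time-step and ranking the means. A minimal sketch, with purely illustrative numbers rather than the paper's measurements:

```python
from statistics import mean

def best_attack_step(attack_results):
    # attack_results: (time_step, final_red_count_delta) pairs, one per
    # attacked trajectory. Returns the step whose attacks produced the
    # largest mean delta, i.e. the most favorable attack timing.
    by_step = {}
    for step, delta in attack_results:
        by_step.setdefault(step, []).append(delta)
    means = {s: mean(v) for s, v in by_step.items()}
    return max(means.items(), key=lambda kv: kv[1])

# Illustrative deltas: early attacks backfire, mid-game attacks pay off.
step, avg = best_attack_step(
    [(0, -1.0), (0, -2.0), (1, -0.5), (10, 3.0), (10, 5.0), (15, 2.0)])
```

Here the mid-game step wins, mirroring the qualitative finding that attacks are most effective in the middle of an episode.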
It highlights the delicate interplay between attack targets, how those action targets influence future behavior, and how that future behavior affects the environment properties and the ultimate objective outcome.

4.4. Attack Transferability Analysis

In addition to analyzing the property impact of various adversarial attacks on a single policy, we also analyze the transferability of an attack trained with one policy and deployed during another policy's execution.

4.4.1. Transferability Metrics

For an attack to be transferable, the attack (parameterized by policy π_i) must induce a sufficiently adversarial action when deployed on some other policy π_j. In CyberStrike, an attack parameterized by a policy π_i is counted as transferable if it induces an action different from the action taken by a policy π_j on the unattacked observation. A targeted attack, parameterized by π_i, is counted as target-transferable if it induces the attack's target action on the new policy π_j. Due to the multi-discrete nature of CyberStrike's action space (which has four sub-actions), we can also measure the proportion of induced sub-actions matching the target sub-actions, or the sub-action target-transferability.

4.4.2. Transferability Experimental Setup

We employ Automated Domain Randomization (ADR) and Curriculum Learning (CL) across the action and counter-action effectiveness dimensions, coined action stickiness by (Machado et al., 2017), to increase the variation in learned strategies, providing more heterogeneous policy targets for attack transfer. We analyze attack transferability across a suite of five
policies: A2c-ADR+CL (A), A2c-ADR (B), A2c-CL (C), DQN-CL (A), and DQN-deterministic (B). Training curriculum and hyperparameter details for the policies are available in the appendix. For each policy, we use two action targets for transferability analysis: the 0-action (No-Op) and the max(loss-win) action. The max(loss-win) action is computed by counting each collected action's usage within winning and losing trajectories; if an action was used U_L times in losing trajectories and U_W times in winning trajectories, then its (loss-win) value is U_L − U_W, and the max(loss-win) action target maximizes this value. After determining the max(loss-win) action target for each source policy, we run the transferred adversarial attacks. For each observation in each target policy's collected dataset, we run an adversarial attack for each action target and collect metrics regarding transferability, target-transferability, and sub-action target-transferability. We hypothesize that attacks may be more transferable between policies of the same DRL algorithm (A2c-X → A2c-Y, or DQN-X → DQN-Y); however, the target policy should be the biggest factor in transferability, regardless of target action or source policy. We also hypothesize that the max(loss-win) action target may be more easily induced than the No-Op action, because the No-Op action should not be taken by an optimal or near-optimal policy, whereas the max(loss-win) actions are empirically taken by the source policies during losses.

4.4.3. Transferability Results

The policy target is indeed the greatest factor in transferability success rates, especially for the A2c policies, where we see roughly the same transferability success rates per policy target across all attack sources and action targets. It is also worth noting that the max(loss-win) action target induces target-transfers most often, but still sparsely, for A2c policies.
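The max(loss-win) target defined in Section 4.4.2 can be computed in a few lines; the trajectories and action names below are invented for illustration only:

```python
from collections import Counter

def max_loss_win_action(trajectories):
    # Each trajectory is (actions, won). An action used U_L times in
    # losing trajectories and U_W times in winning ones scores
    # U_L - U_W; the target is the action maximizing that score.
    score = Counter()
    for actions, won in trajectories:
        for a in actions:
            score[a] += -1 if won else 1
    return max(score, key=score.get)

target = max_loss_win_action([
    (["hack_r2", "noop"], False),    # loss
    (["hack_r3", "hack_r2"], True),  # win
    (["noop", "noop"], False),       # loss
])
# "noop" occurs three times in losses and never in wins, so it has the
# highest (loss-win) value of the three actions.
```

Incrementing by +1 on losses and -1 on wins accumulates exactly U_L − U_W per action, so the max over the counter is the max(loss-win) target.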
DQN self-attacks induce the target action 45.8% (DQN-A) and 2.3% (DQN-B) of the time; however, attacks transferred to DQN policies never induce the target action. By contrast, A2c self-attacks induce the target action at roughly the same rate as attacks transferred to A2c policies. The variation in sub-action target-transferability per row in the max(loss-win) block can be attributed to the max(loss-win) action being different for each source policy.

5. Discussion & Conclusions

The results suggest that the ability to influence agent behavior, and thus future environment properties, is controllable through optimally timed, deliberately chosen observation perturbations. This capability, paired with the result showcasing varying levels of attack transferability across algorithm types, highlights the urgent need for robust defense mechanisms and adversarial evaluation schemes to safeguard decision-making policies from the threat of adversarial influence, especially in high-stakes environments. The results also suggest that policies trained with some algorithms, like A2c, may be more vulnerable to transferred attacks than others, such as DQN in this specific experimental setting; transferability must therefore be measured on a per-algorithm basis. The presence of observation-dependent and time-dependent vulnerabilities implies the existence of training and fine-tuning methods to guard against these vulnerabilities, though we have not explored such methods in this paper and leave them to future research. While this paper focuses on
using adversarial attacks to probe and analyze the behavior of policies trained through DRL algorithms, the same behavioral analysis may be conducted on LLM-based agentic architectures, albeit with language-based attacks and alternate metadata for t-SNE embeddings. We leave this to future adversarial analysis research.

Impact Statement

This paper presents work whose goal is to advance the field of machine learning, specifically regarding deep reinforcement learning explainability and adversarial analysis. As society continues to adopt DRL and AI solutions broadly, explainability and evaluation methods such as those presented in this paper will help provide frameworks to assure these systems and gain trust in them.

Acknowledgments

The authors thank Guido Zarrella and Dr. Chris Niessen for their advisory roles throughout the research and development process. This work was funded by the 2023 MITRE Independent Research and Development Program.

References

Baram, N., Zahavy, T., and Mannor, S. Deep reinforcement learning discovers internal models, 2016. URL https://arxiv.org/abs/1606.05174.

Behzadan, V. and Munir, A. Vulnerability of deep reinforcement learning to policy induction attacks. pp. 262–275, 07 2017. ISBN 978-3-319-62415-0. doi: 10.1007/978-3-319-62416-7_19.

Biemann, C. Chinese whispers: an efficient graph clustering algorithm and its application to natural language processing problems. In Proceedings of the First Workshop on Graph Based Methods for Natural Language Processing, TextGraphs-1, pp. 73–80, USA, 2006. Association for Computational Linguistics.

Carlini, N. and Wagner, D. Towards evaluating the robustness of neural networks, 2017. URL https://arxiv.org/abs/1608.04644.

Brockman, G., Cheung, V., et al. OpenAI Gym, 2016.

Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples, 2015. URL https://arxiv.org/abs/1412.6572.

Gross, D., Simao, T. D., Jansen, N., and Perez, G. A.
Targeted adversarial attacks on deep reinforcement learning policies via model checking, 2022.

Hasanbeig, M., Kroening, D., and Abate, A. Deep reinforcement learning with temporal logics. In Bertrand, N. and Jansen, N. (eds.), Formal Modeling and Analysis of Timed Systems, pp. 1–22, Cham, 2020. Springer International Publishing. ISBN 978-3-030-57628-8.

Huang, S., Papernot, N., Goodfellow, I., Duan, Y., and Abbeel, P. Adversarial attacks on neural network policies, 2017. URL https://arxiv.org/abs/1702.02284.

Kiran, B. R., Sobh, I., Talpaert, V., Mannion, P., Sallab, A. A. A., Yogamani, S., and Pérez, P. Deep reinforcement learning for autonomous driving: A survey, 2021. URL https://arxiv.org/abs/2002.00444.

Machado, M. C., Bellemare, M. G., Talvitie, E., Veness, J., Hausknecht, M. J., and Bowling, M. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. CoRR, abs/1709.06009, 2017. URL http://arxiv.org/abs/1709.06009.

Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. A. Playing Atari with deep reinforcement learning, 2013. URL http://arxiv.org/abs/1312.5602.

Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., Silver, D., and Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning. CoRR, abs/1602.01783, 2016. URL http://arxiv.org/abs/1602.01783.
Molina-Markham, A., Winder, R. K., and Ridley, A. Network defense is not a game, 2021. URL https://arxiv.org/abs/2104.10262.

Rajeswaran, A., Kumar, V., Gupta, A., Vezzani, G., Schulman, J., Todorov, E., and Levine, S. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations, 2018. URL https://arxiv.org/abs/1709.10087.

Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., and Hassabis, D. Mastering chess and shogi by self-play with a general reinforcement learning algorithm, 2017. URL https://arxiv.org/abs/1712.01815.

Tapley, A., Gatesman, K., Robaina, L., Bissey, B., and Weissman, J. Utilizing explainability techniques for reinforcement learning model assurance, 2023. URL https://arxiv.org/abs/2311.15838.

van der Maaten, L. and Hinton, G. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579–2605, 2008. URL http://jmlr.org/papers/v9/vandermaaten08a.html.

Velasquez, A., Bissey, B., Barak, L., Beckus, A., Alkhouri, I., Melcer, D., and Atia, G. Dynamic automaton-guided reward shaping for Monte Carlo tree search. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13):12015–12023, May 2021. doi: 10.1609/aaai.v35i13.17427. URL https://ojs.aaai.org/index.php/AAAI/article/view/17427.

Vinyals, O., Babuschkin, I., Chung, J., Mathieu, M., Jaderberg, M., Czarnecki, W., Dudzik, A., Huang, A., Georgiev, P., Powell, R., Ewalds, T., Horgan, D., Kroiss, M., Danihelka, I., Agapiou, J., Oh, J., Dalibard, V., Choi, D., Sifre, L., Sulsky, Y., Vezhnevets, S., Molloy, J., Cai, T., Budden, D., Paine, T., Gulcehre, C., Wang, Z., Pfaff, T., Pohlen, T., Yogatama, D., Cohen, J., McKinney, K., Smith, O., Schaul, T., Lillicrap, T., Apps, C., Kavukcuoglu, K., Hassabis, D., and Silver, D. AlphaStar: Mastering the Real-Time Strategy Game StarCraft II.
deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/, 2019.

Waseda, F., Nishikawa, S., Le, T.-N., Nguyen, H. H., and Echizen, I. Closer look at the transferability of adversarial examples: How they fool different models differently, 2022. URL https://arxiv.org/abs/2112.14337.

A. Appendix

A.1. Publicly Available Code

Code for the CyberStrike environment, DRL-SAT analysis repository, and training repository will be open sourced at: https://github.com/mitre/drlsat.

A.2. CyberStrike

CyberStrike is a highly customizable network defense environment, initialized with the following configuration parameters. The values listed were used for experiments, with the exception of the standard deviations of the ADR variables:

Listing 1. CyberStrike Configuration File

    adr_variables:
      - id: adr_0_v1
        type: adr_normal_range
        parameters:
          mean: 1.0
          # standard deviation varies with ADR & CL
          stdev: 1.0
          maximum: 1.0
          minimum: 0.1
      - id: adr_0_v2
        type: adr_normal_range
        parameters:
          mean: 1.0
          stdev: 1.0
          maximum: 1.0
          minimum: 0.1
      - id: adr_1_v0
        type: adr_normal_range
        parameters:
          mean: 1.0
          stdev: 1.0
          maximum: 1.0
          minimum: 0.1
      - id: adr_2_v0
        type: adr_normal_range
        parameters:
          mean: 1.0
          stdev: 1.0
          maximum: 1.0
          minimum: 0.1
    scenario:
      red:
        assets:
          - is_target: true   # 0
            type: 0
            is_alive: True
          - is_target: false  # 1
            type: 0
            is_alive: True
          - is_target: false  # 2
            type: 0
            is_alive: True
          - is_target: false  # 3
            type: 0
            is_alive: True
          - is_target: false  # 4
            type: 0
          - is_target: false  # 5
            type: 0
            is_alive: True
          - is_target: false  # 6
            type: 0
            is_alive: True
          - is_target: false  # 7
            type: 0
            is_alive: True
        defense_network:
          - [1, 2]     # red node 0 defended by [1, 2]
          - [5, 6, 7]  # red node 1 is defended by [5, 6 and 7]
          - [3]        # red node 2 defended by 3
          - [4]
          - []         # 4
          - []         # 5
          - []         # 6
          - [6]        # red node 7 is defended by 6
      blue:
        assets:
          - type: 1
            loss_cost: 20
            use_cost: 2
          - type: 2
            loss_cost: 20
            use_cost: 2
          - type: 2
            loss_cost: 20
            use_cost: 2
            is_alive: True
          - type: 3
            loss_cost: 10
            use_cost: 5
      effect_probability:
        # type {row_idx} effectiveness
        # hacking type {col_idx}
        - [0, adr_0_v1, adr_0_v2, 0]
        - [adr_1_v0, 0, 0, 0]
        - [adr_2_v0, 0, 0, 0]
        - [0, 0, 0, 0]

A.3. Observation Space

The observation space in CyberStrike consists of "alive" and "type" information for all blue assets, "alive", "type", and "is_target" information for red assets, and the observed defense network, from blue's perspective. This information is flattened into an array and passed to the agent as a flat tensor. The size of the flat tensor is formally

    3 * (num_blue + num_red) + num_red^2

Figure 7. Embedded policy activation vectors are colored by their cluster, determined by Chinese-Whispers, and marked with aggregate skill-transition arrows. The shading of the arrows represents the empirical likelihood of the policy transitioning from one cluster to another; thus the most-travelled trajectories are marked by the darkest-shaded arrow path.

A.4. Action Space

The action space in CyberStrike is multi-discrete, with each blue asset capable of being paired to some red asset (or no red asset) for any given multi-discrete action. This means the action space grows linearly as we increase the number of red or blue assets in the configuration. Formally, the action space is of size

    num_blue * (num_red + 1)

A.5.
Strategy and Optimality

The optimal strategy in CyberStrike requires using eavesdrop assets to discover the defense network nodes, and then utilizing hacking assets to infiltrate the defense network, hacking undefended assets first, until the
target is reached through recursive hacks. In the absence of adversarial attacks, DRL policies optimize towards this behavioral pattern.

A.6. Curriculum Learning and Automated Domain Randomization

We randomize the action effectiveness variables for Curriculum Learning (CL) and Automated Domain Randomization (ADR) policies. For the CL policies, the curriculum incrementally adds new variables to randomize as the level difficulty increases. We increase the environment level whenever the learning agent reaches a 90% win-rate on its current level. For instance, the CL agent starts training in a fully deterministic environment. Once a 90% win-rate is reached, the environment randomizes one of the four effect probabilities, sampling from a truncated normal distribution centered around 1, with a standard deviation of 1 and a minimum and maximum of 0 and 1. As the agent reaches a 90% win-rate on this second level, the environment randomizes yet another action-effectiveness dimension, until eventually all four variables are sampled with a standard deviation of 1.0 in the last level. During purely ADR training, we sample from this truncated distribution with a standard deviation of 1.0 for each of the four action effectiveness dimensions; this pure-ADR level is identical to the final, fully-randomized level of the CL-denoted policies. The policy denoted ADR+CL (A2c-A) is trained with a curriculum that increases the standard deviation of all effectiveness probabilities by 25% every level. Once a 90% win-rate is reached on the deterministic level 1, the agent begins training on level 2, where there is a 0.25 standard deviation for the action-stickiness sampling distribution centered around 1. This ADR+CL lesson plan increases the standard deviation from 0 → 0.25 → 0.5 → 0.75 → 1.0.

A.7.
Training hyperparameters

All DRL policies were trained with either DQN or A2c, utilizing the standard deep Q-learning algorithm (Mnih et al., 2013) and the standard advantage actor critic algorithm introduced in (Mnih et al., 2016). For the DQN policies, we use a discount factor of 0.99, a replay ratio of 4, a target update tau of 0.05 with an interval of 250, an Adam optimizer, a clip grad norm of 10, and a learning rate of 3e-4. The ε-greedy exploration module initializes at 1.0 and decays by a factor of 0.99 to a minimum epsilon of 0.01. For the A2c policies, we use a discount factor of 0.99, an actor learning rate of 1.5e-4, a critic learning rate of 3e-4, a value loss coefficient of 0.5, an entropy loss coefficient of 0.01, an Adam optimizer, and a clip grad norm of 10. The networks ingest a flat input layer of varied size, depending on the size of the CyberStrike configuration. In the CyberStrike configuration used for experiments, with 8 red nodes and 4 blue hackers, the input size is 100. The hidden dimension, and thus the size of the activations used for embedding and clustering, is set to 256 by default. Policies are trained through their curriculum until a 90% win-rate is reached on the final level. At this point, policies are frozen, evaluated, and collected for analysis. Both the actor network and the DQN are instantiated as follows:

Listing 2. DQN and Actor Network

    hidden_dim = 256
    self.fc = torch.nn.Sequential(
        nn.Linear(in_shape[0], hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, out_shape[0]),
    )

The critic network for A2c training is instantiated as follows:

Listing 3. Critic Network

    hidden_dim = 256
    self.fc = torch.nn.Sequential(
        nn.Linear(in_shape[0], hidden_dim),
        nn.Tanh(),
        nn.Linear(hidden_dim, hidden_dim),
        nn.Tanh(),
        # Single output neuron for value function.
        nn.Linear(hidden_dim, 1),
    )

A.8. Analysis hyperparameters

For SAMDP analysis, we utilize the policy network activation vectors to create t-Distributed Stochastic Neighbor Embeddings (t-SNE) (van der Maaten & Hinton, 2008) with a perplexity of 132. We utilize the Chinese-Whispers clustering algorithm (Biemann, 2006) with a critical distance of 15.0 to cluster the policy activation vectors, and color the associated 2D embedded points with a unique cluster color. In addition to coloring by cluster, as shown in Figure 7, we can also color by adversarial or atomic property attributes, as in Figure 4.
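The clustering step can be sketched in a few lines; this is a minimal Chinese-Whispers implementation for illustration, not the exact one used in the analysis pipeline:

```python
import math
import random

def chinese_whispers(vectors, critical_distance, iters=20, seed=0):
    # Connect vectors whose Euclidean distance is below the critical
    # distance (15.0 in the analysis above), then let every node
    # repeatedly adopt the most frequent label among its neighbors.
    n = len(vectors)
    neighbors = [[j for j in range(n) if j != i
                  and math.dist(vectors[i], vectors[j]) < critical_distance]
                 for i in range(n)]
    labels = list(range(n))  # every node starts in its own cluster
    rng = random.Random(seed)
    order = list(range(n))
    for _ in range(iters):
        rng.shuffle(order)
        for i in order:
            if neighbors[i]:
                counts = {}
                for j in neighbors[i]:
                    counts[labels[j]] = counts.get(labels[j], 0) + 1
                labels[i] = max(counts, key=counts.get)
    return labels

# Two well-separated blobs of toy activation vectors form two clusters.
vecs = [(0, 0), (1, 0), (0, 1), (100, 100), (101, 100), (100, 101)]
labels = chinese_whispers(vecs, critical_distance=15.0)
```

Because labels only propagate along edges, vectors farther apart than the critical distance can never share a cluster, which is what makes the critical-distance choice the main tuning knob.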
arXiv:2505.21419v2 [cs.AI] 28 May 2025

Diagnosing and Resolving Cloud Platform Instability with Multi-modal RAG LLMs

Yifan Wang
wangyifan@cs.cornell.edu
Computer Science Department, Cornell University
Ithaca, NY, USA

Kenneth P. Birman
ken@cs.cornell.edu
Computer Science Department, Cornell University
Ithaca, NY, USA

ABSTRACT

Today's cloud-hosted applications and services are complex systems, and a performance or functional instability can have dozens or hundreds of potential root causes. Our hypothesis is that by combining the pattern-matching capabilities of modern AI tools with a natural multi-modal RAG LLM interface, problem identification and resolution can be simplified. ARCA is a new multi-modal RAG LLM system that targets this domain. Step-wise evaluations show that ARCA outperforms state-of-the-art alternatives.

CCS CONCEPTS

• Software and its engineering → System administration; • Information systems → Information retrieval; • Computing methodologies → Knowledge representation and reasoning.

KEYWORDS

Root cause analysis, RAG LLM, AI-Ops

ACM Reference Format:
Yifan Wang and Kenneth P. Birman. 2025. Diagnosing and Resolving Cloud Platform Instability with Multi-modal RAG LLMs. In The 5th Workshop on Machine Learning and Systems (EuroMLSys '25), March 30–April 3, 2025, Rotterdam, Netherlands. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3721146.3721958

1 INTRODUCTION

Incident response in complex systems entails four steps: (1) Detection, which includes the detection or prediction of an impending problem; (2) Triage: categorizing severity and assigning the task to a Site Reliability Engineering (SRE) team; (3) Diagnosis: collecting more data and pinpointing the root cause; (4) Mitigation: formulating and carrying out a response and disabling any extra instrumentation that was activated.
Decades of work have given us a remarkable range of AI-assisted IT-Operations (AI-Ops) tools covering each step, such as prediction-based anomaly alarming, classification-based internal support-ticket assignment tools for triage, root-cause analysis tools using language models for summarization, and many more. These AI tools work on a variety of data modalities, including user-provided bug reports in natural language, system logs in a semi-structured language, and numerical performance metrics.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. EuroMLSys '25, March 30–April 3, 2025, Rotterdam, Netherlands. © 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-1538-9/25/03. $15.00. https://doi.org/10.1145/3721146.3721958

Our work takes the next step by offering an AI-Ops solution that can carry out cross-modality reasoning. The task is challenging for several reasons: multi-modal language models are still at a very early stage, and there is a significant lack of high-quality training data sets for our setting. To the extent that one can identify public data sets for AI-Ops and IT-Ops, they generally offer just a single data mode, as is the case
for the two most widely cited sets, HPC4 [15] and COM2 [18]. But this issue is also seen with less widely used data sets.

Even if we limit ourselves to a single data mode, existing AI-Ops solutions turn out to have limitations (such as weak support for events characterized by the evolution of a problem over time, and hence recognizable only from a series of log records), and they also struggle to adapt to changes in their operating environment. If the underlying data distribution shifts, for example after a hardware upgrade, the performance of a threshold-based incident detection tool is often found to degrade. Upgrades often result in logging new information, yet small modifications in log formatting can defeat log analytics implemented with regular expressions. As a result, users of today's solutions complain about frequent forced code changes and the need for periodic model retraining. Beyond these technical limitations, today's AI-Ops tools are often proprietary and forbiddingly expensive. DevOps teams at cloud computing companies with vast GPU deployments can train new models, but this is out of the question for smaller companies.

ARCA is an AI for Root Cause Analysis based on a multimodal RAG (Retrieval-Augmented Generation) approach, in which an LLM is augmented by a database. Many RAG systems are limited to approximate search in document or image collections, but ARCA also supports data in structured (tabular) collections and logs. The basic idea is to focus on recurrent incidents: looking for similar past problems, summarizing prior findings, and recommending mitigation strategies that succeeded in the past. A complicating factor is that users often report incidents in fuzzy ways, which limits label quality: a particular problem given that many AI-Ops tools are trained using labeled data.
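Because report labels are unreliable, retrieval by similarity is a natural alternative to classification. The core retrieval step can be sketched as a nearest-neighbor search over incident signatures; the vectors and incident names below are invented for illustration, and real signatures would come from per-modality encoders:

```python
import math

def cosine(a, b):
    # Cosine similarity between two signature vectors.
    num = sum(x * y for x, y in zip(a, b))
    den = math.hypot(*a) * math.hypot(*b)
    return num / den if den else 0.0

def nearest_incidents(query_sig, past_sigs, k=3):
    # Rank stored incident signatures by similarity to the query
    # signature and return the k closest incident ids.
    ranked = sorted(past_sigs,
                    key=lambda name: cosine(query_sig, past_sigs[name]),
                    reverse=True)
    return ranked[:k]

past = {  # toy signatures for three past incidents
    "memory-leak":   (0.9, 0.1, 0.0),
    "timeout":       (0.1, 0.9, 0.1),
    "network-delay": (0.0, 0.1, 0.9),
}
matches = nearest_incidents((0.8, 0.2, 0.1), past, k=2)
```

The retrieved incident ids would then be used to fetch prior findings and mitigations for the generative step; no labels are required for the matching itself.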
Rather than battling this reality, our work focuses on approximate match (an idea familiar in text-based contexts), but generalizes the mechanism to encompass data modalities other than text. The idea, though, is similar: RAG LLMs that search document collections treat each query as a vector-database search for documents "similar" to the query. ARCA treats the multimodal signature of the incident as a kind of query and performs approximate match against precomputed signatures from past incidents.

Here we report on a proof-of-concept that supports three data modes: (1) incident descriptions, in natural language; (2) logs of semi-structured text generated by automated reporting components; and (3) multivariate performance-counter time-series. ARCA is an end-to-end tool created from off-the-shelf ML models, and designed to cover incident response steps from triaging new cloud incidents to generating mitigation plans for the SREs. The ARCA multimodal RAG search mechanism (Sec. 3) is an original contribution of our effort. Future versions of ARCA will expand these data modes and enlarge ARCA's multimodal pattern-matching capabilities.

To test the end-to-end effectiveness of ARCA, we created a data set of 800 bug reports collected from microservice systems in a controlled environment. The bug reports are typical Bugzilla incident reports of the kind users employ to request issue resolution. Each contains three
components: 1) the user’s incident description; 2) a log file collected from the Docker container of the faulty service; and 3) a time sequence of performance metrics collected from the same container during the fault. Although the bugs have very different features, all trace to root causes associated with three widely recognized cloud computing issues: computations that exceed time limits, memory leaks, and network delays. In the evaluation, ARCA achieves 92% accuracy in triage and 72% accuracy in finding the correct mitigation plan. We have also tested the efficacy of individual components of ARCA using established data sets.

2 RELATED WORK

Before we dive into details, we review related work that shapes our thinking.

2.1 Retrieval Augmented Generation

The RAG paradigm is in widespread use [5, 11]. In this approach, a query is first transformed into a vector representation, and an approximate nearest-neighbor search is then used to fetch relevant documents from a knowledge base. The retrieved content is then provided as auxiliary input to a generative model, typically an LLM. This extra “context” allows the model to ground its outputs in factual, up-to-date, or domain-specific information, reducing hallucinations and offering a way to continuously update the knowledge base without retraining models. RAG is effective for question answering [9], summarization [10], and code generation [16], and has been shown to significantly improve LLM performance and interpretability. Prior work on multimodal RAG has focused on the visual domain (text used to describe images). In ARCA, however, we need a RAG system specialized for IT-Ops/AI-Ops. To the best of our knowledge, our work is the first to explore this form of multimodality.

2.2 Prompting and Reasoning

Prompt engineering is central to RAG LLM design.
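The retrieval step of the RAG paradigm described in Sec. 2.1 can be sketched minimally as follows. This is a toy illustration only: the bag-of-words "embedding" stands in for a real learned encoder, and the knowledge base, query, and mitigation texts are invented, not drawn from ARCA's data.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a sparse term-frequency vector (stand-in for a real encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented knowledge base of resolved past incidents.
knowledge_base = [
    "memory leak in cache service mitigated by restart and heap limit",
    "network delay between pods mitigated by retry with backoff",
    "computation exceeded time limit mitigated by raising timeout",
]

def retrieve(query, k=1):
    """Approximate nearest-neighbor search: the k most similar past incidents."""
    q = embed(query)
    return sorted(knowledge_base, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

query = "pods see growing network delay"
context = retrieve(query)[0]
# The retrieved context is then prepended to the generative model's prompt:
prompt = f"Past incident:\n{context}\n\nNew incident: {query}\nSuggest a mitigation."
print(context)
```

A production system would replace the toy encoder with a learned embedding model and the linear scan with an approximate nearest-neighbor index, but the query-embed-retrieve-prompt flow is the same.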
One prompting technique, few-shot learning [1], leverages the in-context learning capabilities of LLMs, guiding the model via structure and examples in the prompt (without updates to model weights). A second, Chain of Thought [21], takes a further step by structuring the prompt in a way that encourages step-by-step reasoning. This has been shown to improve LLM performance on tasks requiring multi-step logical inference, arithmetic, or complex decision-making. In combination, these two techniques achieve state-of-the-art performance across various domains, including mathematics, common-sense reasoning, and question answering. We adopt both in ARCA.

2.3 AI-Ops

ARCA is also inspired by prior work in AI-Ops [2], notably for processing logs and telemetric data. LogCluster [13] introduced techniques for clustering log records to assist in bug detection using a weighted encoding, and subsequent work used LLMs to summarize abnormalities in logs [20, 24]. We used labeled log records from one of these efforts, LogHub [23], for our evaluation. We noted our interest in combining application instrumentation with text records from logs. Prior studies have explored aspects of this question, notably by using deep neural networks for anomaly detection in multivariate time-series data. For example, Microsoft has proposed an anomaly detector based on a Convolutional Neural Network (CNN) [17], while Alibaba describes an encoder-decoder architecture in RobustTAD [4] and Tencent used a VAE
network [7] for the same purpose. Detecting anomalies in cloud platforms using telemetric performance data requires handling potentially noisy, high-dimensional data. Li et al. (2024) have explored this problem and proposed a methodology for noise-tolerant self-supervised learning [14] that combines tensor decomposition with self-supervised learning to capture relevant features and identify anomalies in time-series data. For tabular data, the anomaly detection technique described in [12] shows that LLMs can detect anomalies by converting data into text and directing the models to find outliers. That effort went on to optimize performance by fine-tuning open-source LLMs using synthetic data. In contrast, existing AI-Ops tools (including those we cited) have generally been limited to a single data modality.

3 HOW DOES ARCA WORK?

ARCA runs in two phases (Fig. 1): building the multimodal knowledge base of historical bugs, and then querying it. Below we focus on a bug-tracking use case, but the idea generalizes to other incident-analysis scenarios.

3.1 Building Phase

To deploy ARCA, we first collect and process data from existing solved bugs retrieved from bug-tracking tools and then use the collected data to form a knowledge base. After creating the knowledge base, users can query ARCA for new and ongoing incidents, and the system will automatically generate a mitigation plan for each SRE.

3.1.1 Data Sources. We assume that software incidents are reported through tickets in a bug-tracking system such as Bugzilla. Each bug ticket contains multiple data modalities, e.g., bug descriptions (natural language), performance metrics (time sequences of numerical multivariate data), logs (semi-structured machine-generated event reports), etc. In ARCA, we strive to find a mitigation plan by reasoning across the different modes of data. A bug tracking system works like an online bulletin board, similar to Reddit.
Progress towards resolving a bug is tracked as follow-up posts to the original post initiated by the staff member who found the incident.

Diagnosing and Resolving Cloud Platform Instability with Multi-modal RAG LLMs

Figure 1: ARCA workflow in its building and query phases.

To collect data to form a knowledge base, we confine our attention to the following steps within the life cycle of a bug ticket:

(1) The first post, which includes a textual description of the problem.
(2) The ticket-assignment post, which reflects the judgment of a human triage specialist and has a fixed format.
(3) Data-collection posts with attachments: these often contain data collected by the SRE team using tools they found relevant, and this is the step at which ARCA can learn from data modalities other than natural language.
(4) The last post: the last post of a closed ticket is usually the diagnosis of the issue and the ensuing mitigation.

Notice that each category of posts and data hints at its own similarity metric: rather than a single metric for all types of data, we need a unified metric spanning multiple modalities and robust against missing data (some reports may cite data that other related reports