elements on the current mobile screen. The output should include four parts:

1. Sub-Instruction: Identify the interactive elements and generate natural language instructions for interacting with these elements. The instructions should be concise, clear, and executable, and must include critical details such as filenames, times, or other content as they appear on the screen. For example: "Scroll left to open the app drawer, displaying all installed applications on the device", "Click the chat interface, allowing the user to view and participate in conversation", "Type the username 'Agent', preparing for the next step in logging into the account".

2. Analysis: Analyze possible subsequent operations based on the current interface and action instructions. This analysis should involve step-by-step reasoning, considering potential changes on the screen and actions that can be taken after these changes. For example: "After clicking the plus button, a dropdown menu appears with an option to create a document. I can select this option to create a new document. First, I need to name the document, then enter content into the document, and finally save the document and exit".

3. High-Level Instruction: Based on the analysis results, envision a high-level task that can be completed within the current interface. There are two types of High-Level Instructions. Task-Oriented: completing a series of operations to achieve a specific goal. Question-Oriented: performing a series of operations and deriving an answer to a specific question. For example: Share my favorite book "The Queen's Gambit" to my friend Natalie Larson over her Gmail address - natalie.larson1998@gmail.com - from the PocketBook app. Ensure that the High-Level Instruction is executable by including all critical specifics, such as filenames, relevant timings, or required details.

4. UI item: Based on the current page parsed result and action instructions, identify the corresponding UI item and provide the specific number.

You only need to return a dictionary formatted as follows: { "Sub-Instruction": "xxx", "Analysis": "xxx", "High-Level-Instruction": "xxx", "UI item": x }
Current screen analysis:

Prompt for reasoning the correct golden action.

You are a GUI task expert. I will provide you with a low-level instruction and a golden UI with its corresponding ID.
Low-level instruction:
UI ID:
Please generate the action for the next step.
Candidate Actions:
"action_type": "click", "ui": <ui_idx>
"action_type": "long_press", "ui": <ui_idx>
"action_type": "type", "text": <text_input>
"action_type": "scroll", "direction": <up, down, left, or right>
"action_type": "navigate_home"
"action_type": "navigate_back"
"action_type": "open_app", "app_name": <app_name>
"action_type": "wait"
"action_type": "status", "goal_status": <"successful", "infeasible">
You need to generate a script in the form: actions: {ACTION}. Make sure to consider the details in the screenshot and the task requirements to create an accurate and functional script.

Prompt for evaluating whether actions correctly execute low-level instructions. It is used to filter out high-quality data in the width dimension.

You are a mobile expert who excels at interacting with elements on mobile screens to complete tasks. I have a task for you, and I hope you can use your extensive knowledge to identify interactive elements on mobile screens. I will provide you with the following information: 1. A low-level instruction, which we will follow to perform actions on the current
https://arxiv.org/abs/2505.21279v1
screen. 2. The type of action currently being executed, which can be one of two types: CLICK or LONG_PRESS. You need to determine whether this action can fulfill the current low-level instruction. 3. The current screen environment, with the position where the action (click or long_press) needs to be executed marked by a red box. I will provide you with a screenshot, along with the low-level instruction and the action to be executed. Your task is to determine whether the current action brings us closer to achieving the low-level instruction. If the current action contributes to the realization of the low-level instruction, answer "Yes"; otherwise, answer "No". You only need to return a dictionary formatted as follows: { "Analysis": "xxx", "Correct": Yes/No }

Prompt for evaluating whether low-level instructions solve high-level instructions. It is used to filter out high-quality data in the depth dimension.

You are a mobile expert who excels at interacting with elements on mobile screens to complete tasks. I have a task for you, and I hope you can use your extensive knowledge to identify interactive elements on mobile screens. I will provide you with the following information: 1. A high-level instruction, which is our ultimate goal to be executed. 2. A low-level instruction, which we will follow to perform actions on the current screen. 3. The current screen environment, with the position where the action needs to be executed marked by a red dot. I will provide you with a screenshot, along with the high-level and low-level instructions to be executed. Your task is to determine whether the current low-level instruction brings us closer to achieving the high-level instruction. If the current low-level instruction contributes to the realization of the high-level instruction, answer "Yes"; otherwise, answer "No".
You only need to return a dictionary formatted as follows: { "Analysis": "xxx", "Correct": Yes/No }

Prompt for classifying trajectories into specific app tasks.

You are a GUI agent. I'll give you a total goal, a screenshot, and some categories of apps, and I'll ask you to choose the closest to the general goal among those categories.
## Output Format
You only need to return a dictionary formatted as follows: { "Analysis": "xxx", "Categories": "xxx" }
## APP Categories
1. Shopping
2. Productivity & Office
3. Other
4. Files
5. Transportation
6. Health & Fitness
7. Recipes
8. Flights
9. Clock & Alarms
10. Reminders
11. Voice recording
12. Education
13. Books
14. Email
15. Calendar
16. Notes & Todos
17. Maps
18. Videos
19. News
20. Meditation
21. Weather
22. Finance
23. Art & crafts
24. Gardening
25. Contacts
26. Drawing
27. Music
28. Real estate
29. Messaging
## Total Goal
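Each of the prompts above constrains the model to return a small JSON dictionary, so the filtering pipeline can check responses mechanically. A minimal sketch of such a validator is below; the key names come from the first prompt, but the helper itself is ours, not code from the paper:

```python
import json

# Keys the annotation prompt requires in the returned dictionary.
REQUIRED_KEYS = {"Sub-Instruction", "Analysis", "High-Level-Instruction", "UI item"}

def parse_annotation(raw: str) -> dict:
    """Parse a model response and check it matches the required dictionary shape."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not isinstance(data["UI item"], int):
        raise TypeError("'UI item' must be an integer index into the parsed UI list")
    return data

raw = ('{"Sub-Instruction": "Click the chat interface", "Analysis": "...", '
       '"High-Level-Instruction": "...", "UI item": 3}')
ann = parse_annotation(raw)  # a well-formed annotation passes through unchanged
```

Responses that fail to parse or lack a required key would simply be discarded during filtering.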
RLJP: Legal Judgment Prediction via First-Order Logic Rule-enhanced with Large Language Models

Yue Zhang1, Zhiliang Tian1, Shicheng Zhou2, Haiyang Wang1, Wenqing Hou1, Yuying Liu1, Xuechen Zhao3, Minlie Huang4, Ye Wang1, Bin Zhou1
1College of Computer Science and Technology, National University of Defense Technology, No. 137 Yanwachi Street, Changsha, Hunan, 410073, P. R. China
2College of Electronic Engineering, National University of Defense Technology, No. 460 Huangshan Road, Shushan District, Hefei, 230037, P. R. China
3School of Data and Computer Science, Shandong Women's University, No. 2399 Daxue Road, Ji'nan, Shandong, 250300, P. R. China
4Institute for Artificial Intelligence, Dept. of Computer Science, Tsinghua University, Beijing 100084, China
Correspondence: binzhou@nudt.edu.cn

Abstract

Legal Judgment Prediction (LJP) is a pivotal task in legal AI. Existing semantic-enhanced LJP models integrate judicial precedents and legal knowledge for high performance. But they neglect legal reasoning logic, a critical component of legal judgments requiring rigorous logical analysis. Although some approaches utilize legal reasoning logic for high-quality predictions, their logic rigidity hinders adaptation to case-specific logical frameworks, particularly in complex cases that are lengthy and detailed. This paper proposes a rule-enhanced legal judgment prediction framework based on first-order logic (FOL) formalism and contrastive learning (CL) to develop an adaptive adjustment mechanism for legal judgment logic and further enhance performance in LJP.
Inspired by the process of human exam preparation, our method follows a three-stage approach: first, we initialize judgment rules using the FOL formalism to capture complex reasoning logic accurately; next, we propose Confusion-aware Contrastive Learning (CACL) to dynamically optimize the judgment rules through a quiz consisting of confusable cases; finally, we utilize the optimized judgment rules to predict legal judgments. Experimental results on two public datasets show superior performance across all metrics. The code is publicly available (https://anonymous.4open.science/r/RLJP-FDF1).

1 Introduction

The application of AI in law is growing annually, demonstrating capabilities in both assisting judicial professionals and providing legal consultation services to the public. Legal Judgment Prediction (LJP) aims to predict the outcome of a case based on its facts, typically including the applicable legal provisions, accusation, and prison terms. As legal judgments are highly professionalized, legal knowledge is essential for LJP predictions. Researchers often employ deep learning models enhanced with legal knowledge or judicial precedents to achieve LJP. The current methods of LJP mainly consist of two approaches: modeling legal knowledge and retrieving judicial precedents or legal knowledge. (1) Methods modeling legal knowledge typically use neural networks to model legal provisions and extract semantic associations between case facts and legal knowledge (Xu et al., 2020, 2024). While these models can efficiently extract relevant legal knowledge for LJP relying on semantic similarity, they overlook the intrinsic logic of cases. (2) Other methods leverage semantic techniques to search for related judicial precedents or legal knowledge (Wu et al., 2023a; He et al., 2024).
Judicial precedents and legal knowledge serve as references to enhance the model's understanding of the case circumstances, but these methods tend to study similar cases or knowledge instead of modeling the logical reasoning process. In summary, the above two
https://arxiv.org/abs/2505.21281v1
arXiv:2505.21281v1 [cs.AI] 27 May 2025

types of methods rely on textual and semantic matching but ignore capturing the logic of judging legal cases, where logic in reasoning is crucial in legal judgment. To address the aforementioned issues, researchers applied legal judgment logic to enhance reasoning in traditional deep learning models or large language models (LLMs) for LJP. Some researchers employ deep learning models to extract logical premises from complex cases (Yue et al., 2021, 2024) or handle the legal judgment using rules defined by experts (Gan et al., 2021). The integration of legal judgment logic has improved the reasoning capabilities of deep learning models. However, the ability of deep learning models is still limited by their model capacity and training data scale. As LLMs have remarkable abilities in text comprehension and logical reasoning, researchers combined judgment logic with LLMs for LJP. Researchers (Deng et al., 2023) use LLMs to retrieve relevant legal provisions and extract four types of criminal elements from case facts for judgment based on the classical legal logic syllogism. Prompts constructed by keyword structure from legal experts' determination guide LLMs to classify legal cases (Izzidien et al., 2024). In summary, while existing methods using legal logic effectively analyze cases through established rules, their logic rigidity limits adaptation to case-specific contexts in complex scenarios. In this paper, we propose a dynamically optimized judgment rule method with adaptive adjustment mechanisms, formalizing rules (§3.3) through First-Order Logic (FOL), a symbolic language for complex legal reasoning. Our framework integrates tree-splitting operations with contrastive learning, where confusable cases systematically generate correct/erroneous judgment pairs for Confusion-Aware Contrastive Learning (CACL) (§3.4).
Iterative optimization produces enhanced rules with superior discriminative power in complex cases, enabling LLMs to achieve precise judgment predictions. This culminates in RLJP (Rule-enhanced Legal Judgment Prediction), a logic-enhanced system operationalizing FOL rules for legal judgment prediction (§3.5). Our contributions are: (1) We propose a dynamic rule optimization method: we pioneer the modeling of judgment rule optimization as tree splitting and use the CACL mechanism for self-adaptive rules, overcoming the limitations of fixed rules in handling complex cases; (2) We propose RLJP, a method that novelly integrates FOL judgment rules to enhance reasoning ability for LJP; (3) Comprehensive evaluations on two public datasets demonstrate state-of-the-art performance across all metrics compared to baseline methods. To facilitate future research, we make our code publicly available.

2 Related Work

LJP aims to predict judicial outcomes by employing traditional models or LLMs to analyze legal cases (He et al., 2023; Deng et al., 2024; Zhang et al., 2024). Broadly, LJP can be divided into two main approaches: one based on semantic similarity (Liu et al., 2024a) and the other guided by judgment logic (Cheng et al., 2024).

2.1 LJP Assisted by Semantic Similarity

The semantic similarity-assisted method for LJP involves modeling domain knowledge and applying retrieval techniques (Zhang and Dou, 2023). The former mainly utilizes neural networks to model legal knowledge (Yue et al., 2021; Yao et al., 2024) for feature extraction and
to establish semantic associations among the knowledge. To improve the accuracy of crime prediction, the prevailing methods primarily focus on refining the attention mechanism (He et al., 2023; Yu and Qiu, 2023; He et al., 2025). Additionally, some methods infuse domain-specific legal knowledge into LLMs (Wang et al., 2024; Li et al., 2025), thereby enhancing the models' capability to identify and leverage critical information. Regarding the confusion issue in criminal charges, Zhang et al. (2023) and Gan et al. (2023) utilized contrastive learning to capture fine-grained distinctions. Using retrieval techniques refers to utilizing search engines (Wu et al., 2023a; Yue et al., 2023; Liu et al., 2024b) to obtain external knowledge from semantically similar judicial precedents. He et al. (2024) introduced the retrieval technology into the SimCourt multi-agent framework. Qin et al. (2024) constructed a hierarchical semantic ID for the retrieved candidate documents. Legal judgment fundamentally relies on rigorous, logic-bound reasoning demanding systematic analysis. This process necessitates precise legal interpretation, detailed case fact scrutiny, and accurate application of legal principles, whereas solely relying on textual semantic similarity fails to ensure the accuracy and logical rigor required in LJP.

2.2 LJP Assisted by Case Judgment Logic

The legal judgment logic-enhanced method for LJP leverages models to summarize the knowledge embedded in case details (Chen et al., 2020; Yang et al., 2022) and extract the underlying reasoning logic. Existing methods focus on using the entire factual description (Chai et al., 2020; Hong and Chang, 2023) to generate judgment results, overlooking the actual judicial scenario where judges consider various criminal circumstances (Wu et al., 2022) to determine the verdict and sentencing. Consequently, Yue et al.
(2021) proposed a legal judgment framework, NeurJudge, which can divide facts and the prediction results of intermediate subtasks into different scenarios. Yue et al. (2024) further proposed NeurJudge+, which integrates the semantic labels of accusations and legal provisions into facts. Gan et al. (2021) injected legal knowledge as an additional logical rule into the neural network. Deng et al. (2023) use LLMs to retrieve relevant legal provisions and extract four types of criminal elements from case facts for judgment based on the classical legal syllogism. Prompts constructed by keyword structure from legal experts' determination guide LLMs to classify legal cases (Izzidien et al., 2024). In summary, the inherent structural rigidity of previous methods limits their ability to dynamically adapt to diverse, case-specific logical frameworks. Thus, when confronted with complex legal cases, these inflexible frameworks struggle to reconcile conflicting evidence or interpretative contradictions. In light of this, this paper aims to develop an adaptive adjustment mechanism for legal judgment logic, thereby further enhancing LJP performance in complex and contradictory situations.

3 Method

3.1 Task Definition

In this paper, we focus on the problem of legal judgment prediction. First, we clarify the definition of the terms as follows:

• Judgment is the final decision made by a judge in a legal case based on the facts and the judgment rule. It typically consists of
the law article, the charge, and the prison term. We represent the judgment of a case as j = (article, charge, prison_term), where (article, charge, prison_term) refers to the labels of provisions, accusation, and prison term, respectively.

• Precedent is a previous case with a similar fact. For a given judgment label, there can be several precedents, denoted as S = {s1, s2, ..., sn}, where n is the number of precedents.

• Rule refers to formalized patterns derived from summarizing recurrent case-development trajectories across similar judicial decisions in our study.

Problem 1 (Induction Rule). Extract causal factors from case facts for the content of the rule, and construct the confusable case facts to optimize the rule.

Problem 2 (Legal Judgment Prediction). Given the case fact f, our task is to analyze f based on the rule, then predict the judgment j = (article, charge, prison_term).

3.2 Model Overview

During the learning process of human students, they typically first acquire examples and knowledge from textbooks, then summarize problem-solving logic or methods. These methods are later tested through quizzes to assess their accuracy, and the final learning outcomes are evaluated in the final exam. Inspired by this, we propose the RLJP framework (Figure 1) to enhance reasoning ability for the LJP task. The framework contains three modules.

• Rules Initialization Module (§3.3). This module aims to utilize an LLM agent to generate crude judgment rules in the FOL formalism, leveraging legal provisions and precedents. This process mirrors how students acquire knowledge and summarize problem-solving logic.

• Rules Optimization Module (§3.4). This module aims to use the confusable case set to optimize the FOL judgment rules through Confusion-Aware Contrastive Learning (CACL), which is similar to how students optimize their problem-solving logic through quiz experience.

• Examination Module (§3.5).
This module uses an LLM agent to predict legal judgments by applying optimized judgment rules in combination with candidate labels generated by a lightweight model, which mirrors the final exam of the students.

3.3 Rules Initialization

Inspired by the cognitive process where students first encounter a specific problem type and subsequently extract solution logic, we develop reasoning rules for determining categories of legal judgments through context learning. To precisely

[Figure 1 schematic omitted: it depicts the Rules Initialization Module, the Rules Optimization Module (optimization tree splitting, legal quiz, and Confusion-Aware Contrastive Learning steps), and the Examination Module.]

Figure 1: RLJP framework. The blue box is the Rule
Initialization Module on the upper left (§3.3), the gray box is the Rule Optimization Module (§3.4) with Confusion-Aware Contrastive Learning, and the green box is the Examination Module (§3.5) for the completion of LJP tasks based on the optimized rules.

describe the reasoning rules, we utilize the FOL formalism, which can accurately express complex logic. Each FOL judgment rule Rule: A → C comprises an antecedent A and a consequent C. The content of A is determined by the circumstance logic, and C is constructed by one or two legal judgment labels: article, (article, charge), or (article, prison_term). Since legal provisions offer a clear legal basis for accusation determination, we select precedents violating the same legal provisions and receiving the same accusation to generate accusation judgment rules. As different legal provisions set different sentencing standards, we extract precedents violating the same legal provisions and receiving the same prison term to help generate prison-term judgment rules. This combination reminds the LLM agent that the content of judgment rules should strictly adhere to established legal provisions when initializing rules. To systematically summarize the underlying logical patterns in case developments of similar legal cases, we employ FOL symbols to formalize legal judgment rules through a three-step process. (1) Summarize circumstance logic. This step aims to analyze causal factors from similar legal cases. Specifically, we use the LLM agent to summarize causal factors causing the legal judgment from precedents with the same legal judgment, including the category of the criminal subject, the category of the victim, the time and location of the crime, the criminal behavior, the objective consequences of the crime, and the criminal's subjective mental state. These factors are crucial for constructing the antecedent A of the FOL judgment rules. (2) Define FOL symbols.
The objective of this step is to establish the logical relationship between the causal factors and the legal judgment in the case. For this purpose, we use the LLM agent to define causal factors with FOL symbols, including variables, predicate symbols, and quantifiers. Variables can represent the entities, time, location, events, consequences, and mental state in the factors. Predicate symbols describe specific attributes of the criminal subject, the victim, and the criminal behavior. The universal quantifier ∀ and the existential quantifier ∃ express the rule's generality or specificity. (3) Construct FOL judgment rules. The purpose of this step is to standardize the format of rules to facilitate subsequent optimization and reasoning processes. The judgment rules are constructed using the antecedent A and the consequent C: Rule: A → C in this step. The antecedent A comprises multiple FOL symbols and logical connectives (∨, ∧, and ¬). The consequent C is the same as the LJP labels: legal provisions article, (legal provisions article, accusation charge), or (legal provisions article, prison terms prison_term).

3.4 Confusion-Aware Rule Optimization Engine

In RLJP, we propose a Confusion-Aware Rule Optimization Module to eliminate ambiguous boundaries among rules, which can learn from correct and incorrect experiences in confusable-case reasoning by iteratively optimizing rules
with CACL. The full procedure of this optimization is described in Algorithm 1. Specifically, this module works in two stages: construction of the confusable case set (§3.4.1) and rule optimization (§3.4.2) based on Confusion-Aware Contrastive Learning (CACL).

Algorithm 1: Confusion-Aware Rule Optimization
Require: Precedents: precedent set;
Require: target: target label which is the consequent of the rules;
Require: RULE: initial rule set;
Require: DefinedScore: a predefined score threshold serving as the stopping criterion for the iterative updating process;
Require: Num: the required number of confusable cases.
1: Spositive ← Precedents[target]
2: Sother ← Precedents[other]
3: Snegative ← RankSimilarity(Spositive, Sother, Num)
4: Confusable case set Vtarget ← Snegative ∪ Spositive
5: r0 ← RULE[target]
6: MaxScore ← 0, MaxPointer ← Null
7: Rtree ← [r0]
8: for r in Rtree do
9:   if Is_Evaluate(r) then
10:    (score, reasoning records) ← history(r)
11:  else
12:    (score, reasoning records) ← Evaluate(Vtarget, r)
13:  end if
14:  if score > MaxScore then
15:    MaxScore ← score
16:    MaxPointer ← r
17:    R* ← MaxPointer
18:  end if
19:  if MaxScore >= DefinedScore then
20:    break
21:  end if
22:  R' ← CACL(R*, reasoning records)
23:  Rtree ← Add_ChildNode(R', R*)
24: end for
25: return R*

3.4.1 Construction of the Confusable Cases Validation Set

The confusable case set is defined in our paper as cases characterized by highly similar factual circumstances but differing legal outcomes. In the real world, confusable cases often pose significant challenges to professionals tasked with making judgment decisions. Thus, we construct the confusable case set as a benchmark to evaluate the quality of the legal rules generated in RLJP. Specifically, to measure the similarity between cases, we use the Bi-directional Generative Encoder (BGE) model to generate vector representations of case fact texts and calculate the cosine distance between vectors.
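The similarity measurement just described can be sketched as follows. This is a minimal illustration of the RankSimilarity step in Algorithm 1: random vectors stand in for BGE embeddings, and the function name is ours, not from the paper's code:

```python
import numpy as np

def confusable_negatives(E_pos, E_other):
    """For each S_positive case, pick the most similar S_other case by cosine
    similarity (the 'first column' after sorting each row of A descending)."""
    # Normalize rows so plain dot products equal cosine similarity.
    E_pos = E_pos / np.linalg.norm(E_pos, axis=1, keepdims=True)
    E_other = E_other / np.linalg.norm(E_other, axis=1, keepdims=True)
    A = E_pos @ E_other.T                 # A[i, j] = cos(E_pos[i], E_other[j])
    return np.unique(A.argmax(axis=1))    # indices of confusable cases in S_other

rng = np.random.default_rng(0)
E_pos = rng.normal(size=(4, 8))     # embeddings of S_positive (BGE stand-ins)
E_other = rng.normal(size=(6, 8))   # embeddings of S_other
neg_idx = confusable_negatives(E_pos, E_other)
```

The selected indices identify S_negative; taking the union with S_positive yields the confusable case set V_target described below.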
The construction process of this set is as follows. To find confusable cases for legal judgment target, we first find the case set whose legal judgment label is target, denoted as Spositive, and the case set for other legal judgment labels, denoted as Sother. Then, to assess the similarity between Spositive and Sother, we convert each case fact set into fixed-dimensional embedding vectors using the BGE model: Et = {E_t^1, E_t^2, ..., E_t^m} for Spositive and Eo = {E_o^1, E_o^2, ..., E_o^n} for Sother, with E_t^i, E_o^j ∈ R^d. Then, we compute the cosine similarity between Et and Eo. The results are organized into a similarity matrix A, where A(i, j) = (E_t^i · E_o^j) / (||E_t^i|| ||E_o^j||) represents the similarity score of the two cases S_positive^i and S_other^j. A higher similarity score between cases with different judgment labels means that they contain similar crime circumstances and are therefore more likely to be confusable for legal judgment. For this reason, we sort the similarity matrix A in descending order based on the similarity scores of each row, and the first column is extracted to obtain a set of negative sample cases for judgment target: Snegative ⊆ Sother. Finally, the confusable case set Vtarget can be constructed: Vtarget = Snegative ∪ Spositive.

3.4.2 Optimization Tree Splitting

To iteratively optimize legal judgment rules, we formulate the process of rule optimization as
the process of tree splitting. The nodes of the optimization tree are different versions of the rule during optimization. In one iteration, we first create reasoning quizzes constructed from the confusable case set and calculate the score of the judgment rules for best-first splitting. We then collect correct and incorrect reasoning experience from the quizzes for CACL, which autonomously optimizes rules to keep effective logic parts and update ineffective logic parts by analyzing the experience.

Optimization Tree Definition. We formalize judgment rule optimization as a weighted tree structure T_target = (N, E, W), where N denotes the rule in its different versions, E ⊆ N × N encodes rule optimization relationships, and W: N → [0, 1] quantifies each rule's performance via its quiz score on the confusable case set. The root node n0 ∈ N corresponds to the initial rule formulation from §3.3, with directed edges (np, nc) ∈ E representing optimization paths where the child rule nc evolves from the parent np through experience-based refinements. Node weights W(n) establish a partial ordering over rule versions, enabling performance-guided choice in the optimization cycle.

Optimization Tree Splitting. Legal judgment rule optimization contains three iterative steps. The first step evaluates and chooses the node: to select the basis node for optimization, we evaluate all rules in the current tree using the confusable case set and choose the highest-accuracy node n* (rule R*) to mark with the MAX pointer. The second step applies the CACL method to optimize R*: CACL finds optimization directions based on evaluation experience and generates optimized rules. Finally, the third step integrates the node: the new rule is attached as a child node to n*, and these three phases repeat until either a predefined accuracy threshold is achieved by the optimized rule or the maximum iteration count is reached. Detailed explanations of each step are as follows.
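Before the step-by-step details, the weighted-tree bookkeeping just defined can be sketched in a toy form. Here `node_weight` anticipates the quiz accuracy of Equation 1, and all names are illustrative, not from the paper's code:

```python
def node_weight(tp, tn, fp, fn):
    """W(n): quiz accuracy of a rule version on the confusable case set."""
    return (tp + tn) / (tp + tn + fp + fn)

def choose_node(tree):
    """n* = argmax over nodes of W(n), i.e. where the MAX pointer lands."""
    return max(tree, key=lambda n: n["score"])

# Two rule versions: the root R0 and one optimized child R1.
tree = [
    {"rule": "R0", "parent": None, "score": node_weight(6, 5, 5, 4)},  # 11/20
    {"rule": "R1", "parent": "R0", "score": node_weight(8, 7, 3, 2)},  # 15/20
]
best = choose_node(tree)  # R1 becomes the basis node for the next split
```

Each CACL iteration would append another child dictionary under the current best node, so the partial ordering by score always identifies the next rule to refine.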
Step 1: Evaluate and Choose Node. Inspired by staged assessments in educational settings, we design a quiz using confusable cases to evaluate the rules. First, we format case facts from Vtarget into single-choice quiz questions. The template for the construction of single-choice questions is provided in Figure 4. Then the LLM agent using Rn chooses a predicted option and generates a reasoning process for each question. The capability of Rn is assessed based on the accuracy of the predicted options. The experience of choosing correctly consists of True Positives (TP) and True Negatives (TN); the experience of choosing incorrectly consists of False Positives (FP) and False Negatives (FN). The weight of a node W(n) is computed as formalized in Equation 1:

W(n) = (TP + TN) / (TP + TN + FP + FN)   (1)

The selected node n* = argmax_{n ∈ N} W(n) is the node with the highest weight in the tree, corresponding to the rule R* that achieves the highest accuracy on the validation set.

Algorithm 2: Confusion-Aware Contrastive Learning
Require: R*: current rule (anchor);
Require: Correct(R*): set of positive samples (valid reasoning records);
Require: Incorrect(R*): set of negative samples (invalid reasoning records).
1: ∇keep ← LLM(R*, Correct(R*))
2: ∇imp ← LLM(R*, Incorrect(R*))
3: ∇R ← LLM(∇keep, ∇imp)
4: R' ← LLM(∇R, R*)
5: return R'

Step 2: CACL Method. To optimize the current optimal rule R*, we propose CACL to generate an optimized new
rule, which simulates students reflecting on the correct and incorrect problem-solving process. The full procedure of the CACL method is described in Algorithm 2, which consists of three core steps as follows.

Step 2-1: Construct triplets. CACL constructs the experience of rule evaluation as contrast triplets. The key triplet in CACL is (Anchor R*, Positive Samples Correct(R*), Negative Samples Incorrect(R*)). The anchor is the current rule R*. Positive samples are the correct reasoning records Correct(R*) (TP and TN) in the evaluation, which contain the quiz question, the reasoning process, the correct option, and the predicted option. Negative samples are composed of the same quadruples and use the wrong reasoning records Incorrect(R*) (FP and FN) in the evaluation. The quadruple construction is provided in Figure 2.

Step 2-2: Generate optimization direction. CACL analyzes the contrast triplets for optimization: the LLM agent generates the optimization direction ∇R of R* based on the positive and negative triplets, including effective logic parts ∇keep and ineffective logic parts ∇imp, with ∇R = (∇keep, ∇imp). The prompt template for generating the optimization direction is shown in Figure 5.

Step 2-3: Optimize rules. The optimization direction generated by CACL guides the LLM agent to maintain the effective logic parts and improve the ineffective logic parts. The prompt template for the rule optimization is detailed in Figure 3.

Step 3: Integrate Node. The optimized R' is added to the tree as a child node of R*. We continue to optimize rules until the optimized rule achieves the predefined accuracy or the maximum iteration count is reached.

3.5 Examination

Inspired by the final examination process after many quizzes in educational settings, this module executes the LJP task with the optimized rules. In this module, we first use a lightweight BERT model to output the top 10 probable legal labels.
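The candidate-generation step can be sketched as follows; this is a minimal illustration that assumes the lightweight model's label scores are already computed, and the names and label values are hypothetical:

```python
import numpy as np

def top_k_candidates(scores, labels, k=10):
    """Keep the k most probable legal labels from the lightweight model,
    ordered from most to least probable."""
    order = np.argsort(-np.asarray(scores))[:k]
    return [labels[i] for i in order]

labels = ["theft", "robbery", "fraud", "arson"]
cands = top_k_candidates([0.10, 0.55, 0.30, 0.05], labels, k=3)
```

The resulting ordered shortlist is what the FOL judgment rules are then checked against, label by label.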
Then we apply the FOL judgment rules to each candidate label with the Chain-of-Thought method to predict the legal judgment. The prompt template for completing the LJP task, shown in Figure 2, is the same as the quiz template. If none of the candidates meet the logical constraints, we initiate a stochastic traversal of the remaining labels based on their rules. Furthermore, we activate the abstract template, shown in Figure 6, to generate an abstract of the case fact when the length of the fact exceeds a defined threshold. The abstract template aims to generate condensed case abstracts that retain legally relevant features while eliminating redundant details.
4 Experiments
4.1 Settings
Datasets. Following previous LJP works (Zhong et al., 2018; Xu et al., 2020; Yue et al., 2021; Wu et al., 2023b), we conduct experiments on the CAIL2018^2 and CJO22 (Wu et al., 2023b) datasets. For the CAIL2018 dataset, we randomly divide it into training, validation, and test sets with a ratio of 8:1:1 (Wu et al., 2023b). For the CJO22 dataset, following Wu et al. (2023b), we use it solely as an additional test set. The table in Appendix F presents the details of the two datasets.
Evaluation Metrics. We use Accuracy (Acc), Macro-Precision (Ma-P), Macro-Recall (Ma-R), and Macro-F1 (Ma-F) as the evaluation metrics, following previous LJP works (Zhong et al.,
2018; Xu et al., 2020; Yue et al., 2021; Wu et al., 2023b).
Baselines. We compare with: (1) CNN (LeCun et al., 1989), which uses convolutional operations with different kernels to extract text features for classification; (2) BERT (Devlin et al., 2019), which can be easily fine-tuned on downstream tasks such as LJP; (3) TopJudge (Zhong et al., 2018), which employs multi-task learning and captures the dependencies among the three sub-tasks in LJP; (4) R-Former (Dong and Niu, 2021), which formalizes LJP as a node classification problem over a global consistency graph; (5) LADAN (Xu et al., 2020), which uses graph distillation to extract discriminative features of the fact; (6) NeurJudge (Yue et al., 2021), which splits the fact description into different parts for making predictions; (7) EPM (Feng et al., 2022), which locates event-related information essential for judgment; (8) CTM (Liu et al., 2022), which uses case triple modeling from contrastive case relations; (9) PLJP (Wu et al., 2023b), which uses the collaboration between LLMs and domain-specific models for LJP; (10) Llama3 (Wu et al., 2023b), Meta's advanced open-source language model; (11) D-LADAN (Xu et al., 2024), which solves LJP confusion via graph and memory mechanisms.
^2 https://github.com/china-ai-law-challenge
Implementation Details. We directly adopted Bert-base-chinese^3 for candidate labels, GPT-4o for rule optimization, and Llama3-chinese (Zhichen Zhang, 2024) for the other steps, because the language of the cases in the dataset is Chinese. For the length limit, we use three precedents in Section 3.3. For the parameter settings of the baselines, we strictly follow the original papers. More details are in Appendix G.
4.2 Main Results
Table 1 and Table 2 show the performance comparison of RLJP and the baseline models on the CAIL2018 and CJO22 datasets, respectively; the reported values are averages over five test rounds.
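Looking back at the rule optimization described in Section 3.4, the node evaluation and selection of Step 1 (Equation 1) can be sketched as follows. The `Node` class and the confusion counts are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of Step 1 (Evaluate and Choose Node): each node's weight is
# its quiz accuracy, and the highest-weighted rule in the tree is selected.
from dataclasses import dataclass

@dataclass
class Node:
    rule: str
    tp: int  # quiz answers: correctly chosen positives
    tn: int  # correctly chosen negatives
    fp: int  # incorrectly chosen positives
    fn: int  # incorrectly chosen negatives

    def weight(self) -> float:
        # W(n) = (TP + TN) / (TP + TN + FP + FN), as in Equation 1
        total = self.tp + self.tn + self.fp + self.fn
        return (self.tp + self.tn) / total if total else 0.0

def choose_node(tree):
    # n* = argmax_{n in N} W(n): the rule with the best validation accuracy
    return max(tree, key=Node.weight)

nodes = [Node("rule_v1", 40, 30, 20, 10), Node("rule_v2", 45, 35, 12, 8)]
best = choose_node(nodes)
```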
The experimental results show that RLJP achieves optimal performance on all metrics, verifying the enhancing effect of FOL judgment rules on LJP. Specifically, compared to the suboptimal model, RLJP achieves an average improvement of 1.43% in Acc and 14.98% in Ma-F across the CAIL2018 and CJO22 datasets. These experimental results validate the effectiveness and superiority of our method in LJP, indicating its significant advantages in legal case judgment.
It is worth noting that, compared to the tasks of predicting legal provisions and criminal charges, the task of prison term prediction still presents significant challenges, mainly reflected in its relatively small performance improvement and lower overall prediction accuracy compared to the other two tasks. This phenomenon may be related to the more complex sentencing factors and subjective judgments of judges involved in sentence prediction, and is worth further exploration and improvement in subsequent research.
^3 https://huggingface.co/google-bert/bert-base-chinese

Method | Law Article (Acc Ma-P Ma-R Ma-F) | Charge (Acc Ma-P Ma-R Ma-F) | Prison Term (Acc Ma-P Ma-R Ma-F)
CNN (LeCun et al., 1989) | 80.50 40.10 38.33 38.49 | 87.52 88.23 88.31 88.17 | 34.42 32.22 30.53 31.05
BERT (Devlin et al., 2019) | 82.77 36.82 35.94 35.82 | 89.10 90.10 89.48 89.63 | 40.00 37.53 33.66 33.58
Roberta (Zhong et al., 2018) | 83.08 48.09 44.25 44.87 | 90.30 91.02 90.97 90.94 | 40.84 38.62 38.55 38.50
TopJudge (Zhong et al., 2018) | 80.46 40.96 40.96 38.24 | 87.31 88.68 87.84 88.20 | 35.54 33.55 31.08 32.00
R-Former (Dong and Niu, 2021) | 87.82 56.13 56.57 55.81 | 91.54 91.61 91.96 91.58 | 40.70 36.09 36.76 35.04
LADAN (Xu et al., 2020) | 82.82 42.57 39.00 40.71 | 88.09 90.12 88.82 89.47 | 38.03 33.66 30.08 31.77
Neurjudge (Yue et al., 2021) | 76.91 55.95 52.92 53.56 | 82.13 82.71 82.30 82.36 | 33.53 36.46 37.26 36.53
EPM (Feng et al., 2022) | 85.80 49.08 45.76 47.32 | 91.20 90.81 89.99 90.46 | 40.25 37.96 37.00 37.34
CTM (Liu et al., 2022) | 84.72 46.46 44.83 45.10 | 90.28 90.34 88.08 86.30 | 39.56 38.66 38.02 37.84
Llama3-chinese (Zhichen Zhang, 2024) | 2.62 4.41 1.66 1.11 | 22.63 22.53 9.58 10.04 | 12.00 14.91 19.04 10.53
PLJP(Bert) (Wu et al., 2023b) | 87.07 58.81 57.29 56.63 | 94.99 92.12 91.10 91.33 | 48.72 42.64 36.80 35.43
D-LADAN (Xu et al., 2024) | 91.05 56.37 54.91 58.38 | 90.82 66.26 62.51 62.08 | 38.31 27.48 26.01 25.00
RLJP (ours) | 91.27 85.72 91.27 88.32 | 96.00 96.53 96.00 96.10 | 54.72 48.01 54.73 48.45
Table 1: Experiment results of LJP on the CAIL2018 dataset. "Bold" indicates optimal results, and "underline" indicates sub-optimal results. The experimental results represent the average values obtained from five test rounds.

4.3 Ablation Experiment
To systematically evaluate the contributions of different components to the performance of LJP, we designed ablation experiments. From the results of the ablation experiments in Table 3 and Table 4, we can conclude that: (1) "w/o R" removes the judgment rules (§3.3) and causes a decrease in all metrics, which proves that judgment rules greatly help the LLM agent in reasoning about legal judgments. (2) Removing the optimization module ("w/o Optimize") causes a drop in metrics, proving that the dynamic optimization module (§3.4) is beneficial for improving the rules used in the process of judgment prediction. (3) In row 3 ("w/o CACL"), results show that some metrics drop and some rise when we optimize the rule based on the latest version rather than with CACL (§3.4.2).
The rise shows the validity of optimization; the drop, caused by the content of the rule overfitting in some cases, shows that the performance-guided optimization method is important. (4) "w/o Candidate" removes the candidate labels in the Examination module (§3.5) and performs worse than RLJP, indicating the importance of the lightweight model.
4.4 Analytical Experiment
To validate the performance advantages of RLJP in handling complex case facts, we selected the top 5% of cases by case length as test cases, since they contain more details and more complex circumstances. The table in Appendix H presents the statistics of these two subsets. We compared the performance of the second-ranked PLJP (§4.2) with our RLJP. The experimental results in Table 5 and Table 6 demonstrate that the proposed RLJP method outperforms PLJP in judging complex facts. We can conclude that FOL judgment rules effectively capture the key elements in case facts, reducing interference from redundant information and focusing on the critical facts that are decisive for judgment prediction. In contrast, PLJP, which uses fixed three-type reorganized case facts, may ignore some important details. FOL judgment rules help the model better understand the
logical structure and legal terminology in complex facts, allowing it to focus on important logical details and reduce incorrect judgments caused by excessively long texts.

Method | Law Article (Acc Ma-P Ma-R Ma-F) | Charge (Acc Ma-P Ma-R Ma-F) | Prison Term (Acc Ma-P Ma-R Ma-F)
CNN (LeCun et al., 1989) | 76.14 35.48 38.55 35.39 | 74.91 74.00 78.12 73.97 | 27.38 18.48 17.51 17.44
BERT (Devlin et al., 2019) | 82.62 45.89 47.91 45.83 | 80.50 80.34 81.09 78.36 | 36.80 29.83 27.50 27.03
Roberta (Zhong et al., 2018) | 80.32 42.36 44.22 41.80 | 79.26 78.93 81.25 78.18 | 29.74 24.73 24.76 23.22
TopJudge (Zhong et al., 2018) | 78.73 40.38 41.47 40.09 | 76.67 74.00 77.40 74.62 | 27.14 19.76 17.69 17.94
R-Former (Dong and Niu, 2021) | 87.69 53.03 49.35 50.23 | 90.71 93.06 88.66 89.82 | 38.63 32.63 32.76 29.51
LADAN (Xu et al., 2020) | 79.44 48.43 44.13 46.18 | 79.64 48.43 44.13 46.18 | 33.69 26.40 22.94 24.55
Neurjudge (Yue et al., 2021) | 71.38 52.86 53.52 52.62 | 71.85 69.37 71.09 68.66 | 26.80 26.81 26.85 25.97
EPM (Feng et al., 2022) | 84.19 47.21 43.79 44.39 | 83.49 80.36 83.29 81.87 | 36.91 30.65 31.61 30.20
CTM (Liu et al., 2022) | 79.44 47.83 42.25 43.43 | 79.33 82.39 83.12 82.81 | 36.81 27.10 25.96 26.46
Llama3-chinese (Zhichen Zhang, 2024) | 3.01 2.21 1.09 1.36 | 27.72 37.12 27.72 28.28 | 15.66 19.18 15.66 15.56
PLJP(Bert) (Wu et al., 2023b) | 94.18 74.65 76.23 74.84 | 94.18 90.25 88.67 89.05 | 43.52 33.37 35.67 31.98
D-LADAN (Xu et al., 2024) | 90.65 44.95 48.45 44.49 | 88.95 29.41 27.64 28.48 | 46.18 24.80 23.78 22.73
RLJP (ours) | 94.55 94.30 90.94 91.28 | 96.12 97.60 96.76 96.83 | 48.50 49.44 50.19 49.18
Table 2: Experiment results of LJP on the CJO22 dataset. "Bold" indicates optimal results, and "underline" indicates sub-optimal results. The experimental results represent the average values obtained from five test rounds.

Table 3: Results of ablation experiments on CAIL2018. "w/o R", "w/o Optimize", "w/o CACL", and "w/o Candidate" denote removing the judgment rules, optimization module, CACL method, and candidate labels, respectively.
Method | Law Article (Acc Ma-F) | Charge (Acc Ma-F) | Prison Term (Acc Ma-F)
w/o R | 18.92 26.53 | 82.43 81.25 | 28.05 16.89
w/o Optimize | 85.9 82.98 | 83.13 84.28 | 39.74 41.16
w/o CACL | 86.58 82.31 | 87.91 86.2 | 20.81 24.28
w/o Candidate | 37.25 30.17 | 45.64 35.74 | 17.45 19.00
ours (RLJP) | 91.27 88.32 | 96.00 96.10 | 54.72 48.45

Table 4: Results of ablation experiments on CJO22. "w/o R", "w/o Optimize", "w/o CACL", and "w/o Candidate" denote removing the judgment rules, optimization module, CACL method, and candidate labels, respectively.
Method | Law Article (Acc Ma-F) | Charge (Acc Ma-F) | Prison Term (Acc Ma-F)
w/o R | 14.31 15.59 | 71.08 68.71 | 20.12 18.17
w/o Optimize | 85.44 83.98 | 89.7 89.00 | 37.35 37.52
w/o CACL | 84.98 84.08 | 91.8 90.36 | 31.52 32.63
w/o Candidate | 21.05 15.06 | 71.58 68.86 | 23.53 23.17
ours (RLJP) | 94.55 91.28 | 96.40 96.58 | 48.50 49.18

Table 5: Results of analytical experiments on CAIL2018_long. The experimental results represent the average values obtained from five test rounds.
Method | Law Article (Acc Ma-F) | Charge (Acc Ma-F) | Prison Term (Acc Ma-F)
PLJP(Bert) | 28.06 36.17 | 40.94 44.74 | 12.32 13.92
ours (RLJP) | 79.27 78.27 | 91.67 91.81 | 35.14 36.23

Table 6: Results of analytical experiments on CJO22_long. The experimental results represent the average values obtained from five test rounds.
Method | Law Article (Acc Ma-F) | Charge (Acc Ma-F) | Prison Term (Acc Ma-F)
PLJP(Bert) | 38.93 58.87 | 42.64 44.71 | 23.26 22.29
ours (RLJP) | 41.67 47.62 | 97.70 91.95 | 31.70 35.96

5 Conclusion
In summary, our proposed RLJP framework introduces three key innovations for LJP: (1) a dynamic rule optimization method that formulates judgment rule optimization with CACL as a tree-splitting process, effectively addressing the adaptability limitations of fixed rules in complex legal cases; (2) a logic-semantic co-reasoning architecture combining lightweight semantic prescreening with FOL judgment rules to strengthen judgment reasoning capabilities; (3) experimental validation demonstrating superior performance over all baseline methods in LJP tasks, particularly in complex, detailed cases.
Limitations
Although our approach produces promising results on two public datasets, there are certain limitations, which we will continue to investigate in the future. First, we only evaluate RLJP on two Chinese LJP datasets; we do not conduct experiments on LJP datasets in other languages to evaluate its validity. Second, the interpretability of LJP is crucial, and users need to understand how the model arrives at its judgment prediction results. In the examination module, RLJP combines CoT with FOL rules and outputs an explanation for the result, but it lacks a sufficient interpretability analysis of the model's judgment process and results.
Ethical Consideration
While our RLJP framework enhances legal judgment prediction accuracy, its deployment requires rigorous ethical safeguards: (1) human judges must retain final decision-making authority to mitigate risks from data biases or model errors; (2) clear accountability protocols should ensure legal responsibility remains with human practitioners, not AI systems; (3) ongoing fairness audits are necessary to prevent discriminatory outcomes against demographic minorities.
References
Duo Chai, Wei Wu, Qinghong Han, Fei Wu, and Jiwei Li. 2020.
Description based text classification with reinforcement learning. In International Conference on Machine Learning, pages 1371–1382. PMLR.
Long Chen, Nuo Xu, and Yue Wang. 2020. Legal judgment prediction with label dependencies. In 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), pages 361–365. IEEE.
Yunfen Cheng, Chengyuan Chen, and Yuqi Wang. 2024. Research on legal judgment prediction with multi-task learning and causal logic. In 2024 6th International Conference on Electronic Engineering and Informatics (EEI), pages 399–402. IEEE.
Chenlong Deng, Kelong Mao, Yuyao Zhang, and Zhicheng Dou. 2024. Enabling discriminative reasoning in llms for legal judgment prediction. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 784–796. Association for Computational Linguistics.
Wentao Deng, Jiahuan Pei, Keyi Kong, Zhe Chen, Furu Wei, Yujun Li, Zhaochun Ren, Zhumin Chen, and Pengjie Ren. 2023. Syllogistic reasoning for legal judgment analysis. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13997–14009.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the
North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics.
Qian Dong and Shuzi Niu. 2021. Legal judgment prediction via relational learning. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 983–992. ACM.
Yi Feng, Chuanyi Li, and Vincent Ng. 2022. Legal judgment prediction: A survey of the state of the art. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 5461–5469. ijcai.org.
Leilei Gan, Kun Kuang, Yi Yang, and Fei Wu. 2021. Judgment prediction via injecting legal knowledge into neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12866–12874.
Leilei Gan, Baokui Li, Kun Kuang, Yating Zhang, Lei Wang, Anh Luu, Yi Yang, and Fei Wu. 2023. Exploiting contrastive learning and numerical evidence for confusing legal judgment prediction. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12174–12185, Singapore. Association for Computational Linguistics.
Congqing He, Tien-Ping Tan, Sheng Xue, and Yanyu Tan. 2025. Simulating judicial trial logic: Dual residual cross-attention learning for predicting legal judgment in long documents. Expert Systems with Applications, 261:125462.
Congqing He, Tien-Ping Tan, Xiaobo Zhang, and Sheng Xue. 2023. Knowledge-enriched multi-cross attention network for legal judgment prediction. IEEE Access, 11:87571–87582.
Zhitao He, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Jiexin Xu, Huaijun Li, Xiaojian Jiang, Kang Liu, and Jun Zhao. 2024. Simucourt: Building judicial decision-making agents with real-world judgement documents.
arXiv preprint arXiv:2403.02959.
Yu-Xiang Hong and Chia-Hui Chang. 2023. Improving colloquial case legal judgment prediction via abstractive text summarization. Computer Law & Security Review, 51:105863.
Ahmed Izzidien, Holli Sargeant, and Felix Steffek. 2024. Llm vs. lawyers: Identifying a subset of summary judgments in a large uk case law dataset. arXiv preprint arXiv:2403.04791.
Yann LeCun, Bernhard E. Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne E. Hubbard, and Lawrence D. Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1(4):541–551.
Shangyuan Li, Shiman Zhao, Zhuoran Zhang, Zihao Fang, Wei Chen, and Tengjiao Wang. 2025. Basis is also explanation: Interpretable legal judgment reasoning prompted by multi-source knowledge. Information Processing & Management, 62(3):103996.
Dugang Liu, Weihao Du, Lei Li, Weike Pan, and Zhong Ming. 2022. Augmenting legal judgment prediction with contrastive case relations. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 2658–2667. International Committee on Computational Linguistics.
Pengjie Liu, Wang Zhang, Yulong Ding, Xuefeng Zhang, and Shuang-Hua Yang. 2024a. Semdr: A semantic-aware dual encoder model for legal judgment prediction with legal clue tracing. In 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 3447–3453. IEEE.
Pengjie Liu, Wang Zhang, Yulong Ding, Xuefeng Zhang, and Shuang-Hua
Yang. 2024b. Semdr: A semantic-aware dual encoder model for legal judgment prediction with legal clue tracing. In 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 3447–3453. IEEE.
Weicong Qin, Zelin Cao, Weijie Yu, Zihua Si, Sirui Chen, and Jun Xu. 2024. Explicitly integrating judgment prediction with legal document retrieval: A law-guided generative approach. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2210–2220.
Xuran Wang, Xinguang Zhang, Vanessa Hoo, Zhouhang Shao, and Xuguang Zhang. 2024. Legalreasoner: A multi-stage framework for legal judgment prediction via large language models and knowledge integration. IEEE Access.
Yiquan Wu, Yifei Liu, Weiming Lu, Yating Zhang, Jun Feng, Changlong Sun, Fei Wu, and Kun Kuang. 2022. Towards interactivity and interpretability: A rationale-based legal judgment prediction framework. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4787–4799.
Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, and Kun Kuang. 2023a. Precedent-enhanced legal judgment prediction with llm and domain-model collaboration. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12060–12075. Association for Computational Linguistics.
Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, and Kun Kuang. 2023b. Precedent-enhanced legal judgment prediction with LLM and domain-model collaboration. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 12060–12075. Association for Computational Linguistics.
Nuo Xu, Pinghui Wang, Long Chen, Li Pan, Xiaoyan Wang, and Junzhou Zhao. 2020. Distinguish confusing law articles for legal judgment prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 3086–3095. Association for Computational Linguistics.
Nuo Xu, Pinghui Wang, Junzhou Zhao, Feiyang Sun, Lin Lan, Jing Tao, Li Pan, and Xiaohong Guan. 2024. Distinguish confusion in legal judgment prediction via revised relation knowledge. ACM Trans. Inf. Syst., 43(1):Article 6.
Shuxin Yang, Suxin Tong, Guixiang Zhu, Jie Cao, Youquan Wang, Zhengfa Xue, Hongliang Sun, and Yu Wen. 2022. Mve-flk: A multi-task legal judgment prediction via multi-view encoder fusing legal keywords. Knowledge-Based Systems, 239:107960.
Shunyu Yao, Qingqing Ke, Qiwei Wang, Kangtong Li, and Jie Hu. 2024. Lawyer gpt: A legal large language model with enhanced domain knowledge and reasoning capabilities. In Proceedings of the 2024 3rd International Symposium on Robotics, Artificial Intelligence and Information Engineering, pages 108–112.
Yaoyao Yu and Yihui Qiu. 2023. Enhancing legal judgment prediction with attentional networks utilizing legal event types. In International Conference on Neural Information Processing, pages 393–404. Springer.
Linan Yue, Qi Liu, Binbin Jin, Han Wu, and Yanqing An. 2024. A circumstance-aware neural framework for explainable legal judgment prediction. IEEE Transactions on Knowledge and Data Engineering.
Linan Yue, Qi Liu, Binbin Jin, Han Wu, Kai Zhang,
Yanqing An, Mingyue Cheng, Biao Yin, and Dayong Wu. 2021. Neurjudge: A circumstance-aware neural framework for legal judgment prediction. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 973–982. ACM.
Shengbin Yue, Wei Chen, Siyuan Wang, Bingxuan Li, Chenchen Shen, Shujun Liu, Yuxuan Zhou, Yao Xiao, Song Yun, and Xuanjing Huang. 2023. Disc-lawllm: Fine-tuning large language models for intelligent legal services. arXiv preprint arXiv:2309.11325.
Han Zhang and Zhicheng Dou. 2023. Case retrieval for legal judgment prediction in legal artificial intelligence. In China National Conference on Chinese Computational Linguistics, pages 434–448. Springer.
Han Zhang, Zhicheng Dou, Yutao Zhu, and Ji-Rong Wen. 2023. Contrastive learning for legal judgment prediction. ACM Transactions on Information Systems, 41(4):1–25.
Yunong Zhang, Xiao Wei, and Hang Yu. 2024. Hd-ljp: A hierarchical dependency-based legal judgment prediction framework for multi-task learning. Knowledge-Based Systems, page 112033.
Zhichen Zhang, Xin Lu, and Long Chen. 2024. Llama3-chinese. https://github.com/seanzhang-zhichen/llama3-chinese.
Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Chaojun Xiao, Zhiyuan Liu, and Maosong Sun. 2018. Legal judgment prediction via topological learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3540–3549. Association for Computational Linguistics.
A The construction of quiz experience
Figure 2: The construction of quiz experience.
B The prompt template for rule optimization
Figure 3: The prompt template for rule optimization.
C The prompt template for the quiz in the confusable case set
Figure 4: The prompt template for the quiz in the confusable case set.
D The prompt template for optimization direction generation
Figure 5: The prompt template for optimization direction generation.
E The prompt template for generating a case abstract
Figure 6: The prompt template for generating a case abstract.
F The Statistics of LJP Datasets
Table 7 presents the statistics of the two realistic datasets, CAIL2018 and CJO22, which are widely used in the LJP task. These datasets have been rigorously preprocessed to ensure compliance with ethical and legal standards. All personally identifying information (PII), including names, addresses, identification numbers, and other sensitive identifiers, has been anonymized or removed. Additionally, these datasets contain no offensive, discriminatory, or harmful content, as such material was systematically filtered during data curation. These measures align with established guidelines for ethical AI research and protect the privacy and dignity of the individuals involved in the legal cases.

Type | CAIL2018 | CJO22
Law Articles | 164 | 164
Charges | 42 | 42
Prison Terms | 10 | 10
Samples | 82138 | 1698
Avg. # Fact words | 288.6 | 461.7
Table 7: Dataset statistics.

G The Experimental Setting
Our computing infrastructure was robust, featuring two A800 GPUs, each equipped with 80GB of memory, providing the necessary computational power to handle the large-scale data and complex algorithms involved in our experiments. Additionally, we utilized a high-performance computing cluster with a multi-core CPU architecture, high-speed NVMe storage for rapid data access, and a high-bandwidth network interconnect to
ensure efficient data transfer and processing. The operating environment was Linux-based, and we employed the CUDA and cuDNN libraries to optimize GPU performance. All these details are crucial for understanding the computational resources that underpinned our experimental results.
H The Statistics of Two Extracted Datasets

Type | CAIL2018_long | CJO22_long
Law Articles | 43 | 29
Charges | 41 | 28
Prison Terms | 10 | 10
Samples | 7499 | 83
Avg. # Fact words | 1147.5 | 10815.9
Max. # Fact words | 20397 | 43030
Table 8: Test sets in the analytical experiment, which contain the top 5% of cases by length from CAIL2018 and CJO22.

Table 8 shows the statistics of the two subsets of the realistic datasets CAIL2018 and CJO22 used as the test sets in the analytical experiment. The two subsets contain the top 5% of cases sorted by case length from CAIL2018 and CJO22.
arXiv:2505.21288v1 [cs.LG] 27 May 2025
GSAT: Graph Structure Attention Networks
Farshad Noravesh1, Reza Haffari2, Layki Soon3, Arghya Pal4
1 Monash University, Malaysia, Farshad.Noravesh@monash.edu
2 Monash University, Australia, Gholamreza.Haffari@monash.edu
3 Monash University, Malaysia, soon.layki@monash.edu
4 Monash University, Malaysia, arghya.pal@monash.edu
Abstract. Graph Neural Networks (GNNs) have emerged as a powerful tool for processing data represented in graph structures, achieving remarkable success across a wide range of applications. However, to further improve performance on graph classification benchmarks, the structural representation of each node, which encodes rich local topological information about the node's neighbourhood, is an important type of feature that is often overlooked in modeling. Neglecting this structural information has led to a high number of layers being needed to connect messages from distant nodes, which in turn produces other problems such as oversmoothing. In the present paper, we leverage structural information modeled by anonymous random walks (ARWs) and introduce the graph structure attention network (GSAT), a generalization of the graph attention network (GAT), which integrates the original attributes and the structural representation to enforce the model to automatically find patterns for attending to different edges in the node neighbourhood, enriching the graph representation. Our experiments show GSAT slightly improves SOTA on some graph classification benchmarks.
Keywords: graph attention networks · anonymous random walk · structural information · graph classification
1 Introduction
In graph neural networks (GNNs), message passing is a fundamental mechanism for aggregating information from neighboring nodes, enabling effective learning on graph-structured data.
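The message-passing mechanism just described can be sketched in a few lines. This is a generic mean-aggregation round on a toy graph, not the GSAT update itself; the mixing weight `alpha` and the adjacency structure are illustrative assumptions:

```python
# One round of message passing: each node averages its neighbors' feature
# vectors and blends the result with its own features.
def message_passing_step(features, adj, alpha=0.5):
    new_features = {}
    for v, nbrs in adj.items():
        if nbrs:
            # aggregate: dimension-wise mean of the neighbors' features
            agg = [sum(features[u][d] for u in nbrs) / len(nbrs)
                   for d in range(len(features[v]))]
        else:
            agg = features[v]
        # update: blend self features with the aggregated message
        new_features[v] = [alpha * s + (1 - alpha) * m
                           for s, m in zip(features[v], agg)]
    return new_features

adj = {0: [1, 2], 1: [0], 2: [0]}                     # tiny undirected star
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [2.0, 2.0]}
out = message_passing_step(feats, adj)
```

Stacking such rounds widens each node's receptive field, which is exactly where the oversmoothing issues discussed next arise.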
However, traditional message-passing schemes often suffer from limitations such as oversmoothing and limited receptive fields [23]. Combining random walk (RW) techniques with message passing offers a powerful approach to enhance GNN performance by capturing long-range dependencies while preserving local structural information [2]. RWs enable nodes to sample diverse neighborhoods beyond immediate neighbors, facilitating more expressive feature propagation [14].
Structural embedding is essential in message passing for GNNs because it provides a way to encode the topological properties of nodes within the graph. In standard message-passing frameworks, a node aggregates information from its neighbors to update its representation. However, without structural embeddings, this process may fail to capture higher-order connectivity patterns, leading to suboptimal representations, especially in graphs with complex structures or heterophily [34]. Structural embeddings, such as positional or walk-based embeddings, encode information about a node's role and position within the graph, enhancing the expressiveness of message passing. This allows GNNs to generalize better across different graph structures and improve performance in tasks like graph classification, node classification and link prediction. The present work focuses only on graph classification. The closest approach to our work for combining random walks (RW) and message passing is [2], which aggregates the RW embeddings, sends messages from these aggregated embeddings, and finally updates the node representation. We take a different
approach and generalize graph attention to enforce the graph to attend to different structural patterns of neighbour nodes. Moreover, our approach uses a preprocessing step based on Word2Vec training that provides the embedding of ARWs.
In Graph Convolutional Networks (GCN) [17], the effect of walk length (hops) is primarily tied to the receptive field of nodes. A longer walk length allows information to be aggregated from farther nodes, effectively increasing the neighborhood size considered during message passing. However, due to the inherent smoothing property of GCN, excessively long walks (hops) through multiple layers may lead to over-smoothing [23], where node representations become indistinguishable. This effect is especially pronounced in homophilic graphs, where nodes of the same class are closely connected. In contrast, shorter walks restrict the receptive field, limiting the model's ability to capture long-range dependencies but reducing the risk of over-smoothing. Thus, in GCNs, tuning the walk length is crucial to balance locality and over-smoothing effects.
In the graph attention network (GAT) [30], walk length influences the adaptive weighting of neighbors through the attention mechanism. Unlike GCN, which uniformly aggregates features, GAT assigns different importance to nodes in the neighborhood, mitigating the over-smoothing problem to some extent. Longer walks in GAT allow the model to capture more distant relationships, but attention scores decay as the distance increases, reducing their impact. Shorter walks, on the other hand, focus on local node interactions, leveraging the attention mechanism to refine feature aggregation within a limited neighborhood. While GAT is generally more robust to over-smoothing than GCN, excessive walk lengths can still introduce noise and increase computational complexity.
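The adaptive neighbor weighting attributed to GAT above follows the standard single-head attention coefficients, e_ij = LeakyReLU(a^T [W h_i || W h_j]) normalized by a softmax over the neighborhood. A minimal pure-Python sketch, with toy sizes and random parameters as illustrative assumptions:

```python
# Single-head GAT-style attention coefficients for one node's neighborhood.
# W (shared linear map) and a (attention vector) are random toy parameters.
import math
import random

random.seed(0)
F_in, F_out = 4, 3
W = [[random.gauss(0, 1) for _ in range(F_in)] for _ in range(F_out)]
a = [random.gauss(0, 1) for _ in range(2 * F_out)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def attention_coeffs(h_i, h_neighbors):
    z_i = matvec(W, h_i)                      # transformed center features
    scores = []
    for h_j in h_neighbors:
        z_j = matvec(W, h_j)
        # e_ij = LeakyReLU(a^T [z_i || z_j])
        scores.append(leaky_relu(sum(ak * zk for ak, zk in zip(a, z_i + z_j))))
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]          # alpha_ij over the neighborhood

alpha = attention_coeffs([1, 0, 0, 0],
                         [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
```

The coefficients `alpha` sum to one over the neighborhood, which is what lets the model down-weight distant or noisy neighbors rather than averaging them uniformly as GCN does.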
Therefore, finding an optimal walk length remains essential for effectively utilizing GAT in graph learning tasks.
1.1 Main Contributions
The contributions of our work are as follows:
1. We generalized GAT to the case that combines structural information with node attributes to guide the message passing via appropriately learned edge attention weights.
2. We outperformed state-of-the-art (SOTA) baselines for graph classification on some benchmarks with only one layer of GSAT, which is an indication that higher layers are not necessary if structural information can be captured adequately using ARWs.
3. Sensitivity analysis is performed to investigate the effect of ARW hyperparameters, such as walk length and the size of structural features, on graph classification performance.
2 Related Work
2.1 Encoding Structural Similarity
Graph Kernels. [4] uses graph kernels, which compute an inner product on graphs, to extend the standard convolution operator to the graph domain, and provides structural masks that are learned during the training process. Similarly, [8] introduced KerGNNs, which utilizes trainable hidden graphs as graph filters that are combined with subgraphs centered at each node to update node embeddings using graph kernels. The first drawback of these types of kernels is the limited assumption on the
https://arxiv.org/abs/2505.21288v1
number of learnable structures. Even a big number does not resolve the issue, since many of the structures would then have high correlation with each other. The second drawback of [4] is the lack of modeling for node neighbour structures based on label information, since [4] has focused on graph classification tasks only. Many methods, such as [16], [8] and [4], that use graph kernels to model the structural similarity of two nodes ignore the node labels as a way to model local structure and therefore cannot fully capture heterogeneous graphs or tasks like node classification.

Anonymous Random Walk. Anonymous random walk (ARW) was originally introduced in [21]. [13] designed task-independent algorithms for learning graph representations, in both explicit and distributed form, based on ARW.

Definition 1 ([13]). Let $s = (u_1, u_2, \dots, u_k)$ be an ordered list of elements with $u_i \in V$. We define the positional function $pos(s, u_i) \to q$ such that, for any ordered list $s = (u_1, u_2, \dots, u_k)$ and an element $u_i \in V$, it returns a list $q = (p_1, p_2, \dots, p_l)$ of all positions $p_j \in \mathbb{N}$ at which $u_i$ occurs in the list $s$.

ARWs play a vital role in structural encoding because they allow the exploration of graph topology without being influenced by node-specific features, ensuring that the learned representations focus solely on the underlying structure of the graph. In graph-based machine learning tasks, structural encoding refers to the process of capturing patterns of connectivity, such as motifs, community structures, or graph symmetries. While original random walks rely on traversing the graph and potentially incorporating node identities or features, they may inadvertently bias the exploration toward nodes with certain attributes or higher connectivity.
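To make Definition 1 concrete, here is a minimal sketch (our own illustration, not code from [13]) of the positional function and the anonymous-walk mapping built on it in Definition 2 below:

```python
def pos(s, u):
    """Positions (1-indexed) at which element u occurs in the ordered list s."""
    return [i + 1 for i, x in enumerate(s) if x == u]

def anonymous_walk(w):
    """Replace each node of a walk by the minimum position at which it
    occurs in the walk, i.e. f(v_i) = min pos(w, v_i)."""
    return [min(pos(w, v)) for v in w]

# Two walks over different nodes share the same anonymous walk:
anonymous_walk(["a", "b", "c", "b", "c"])  # [1, 2, 3, 2, 3]
anonymous_walk(["x", "y", "z", "y", "z"])  # [1, 2, 3, 2, 3]
```

This anonymity is exactly what makes the representation node-identity-free: only the pattern of revisits survives.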
This means that original random walks may overemphasize the role of specific nodes, leading to representations that are skewed by node identities rather than capturing purely structural properties of the graph. On the other hand, ARW eliminates the dependence on node identities, ensuring that each step in the walk is treated equally, regardless of the node's attributes. This method allows for the discovery of structural patterns that are independent of any specific node characteristics, enabling a more generalizable and accurate encoding of the graph's topology. Without this anonymity, traditional random walks may fail to properly generalize in tasks like graph classification, where the goal is to understand the overall structure rather than individual nodes. Thus, ARW is essential for capturing true structural information, leading to more robust and unbiased graph representations, and is therefore used in the present paper.

Farshad Noravesh, Reza Haffari, Layki Soon, Arghya Pal

Definition 2 ([13]). If $w = (v_1, v_2, \dots, v_k)$ is a random walk, then its corresponding anonymous walk is the sequence of integers $a = (f(v_1), f(v_2), \dots, f(v_k))$, where the integer $f(v_i) = \min pos(w, v_i)$.

The aim is to maximize the following average log probability:

$$\frac{1}{T} \sum_{t=\delta}^{T-\delta} \log p(w_t \mid w_{t-\delta}, \dots, w_{t+\delta}, d) \quad (1)$$

where the graph corresponds to $d$ and $\delta$ is the window size, i.e. the number of context words for each target word. The above probability is defined via the following softmax function:

$$p(w_t \mid w_{t-\delta}, \dots, w_{t+\delta}, d) = \frac{e^{y(w_t)}}{\sum_{i=1}^{\eta} e^{y(w_i)}} \quad (2)$$

where $\eta$ is the number of ARWs of length $l$.

[31] combines breadth-first search (BFS) and anonymous walk (AW) to define the topological AW, which provides a bijective mapping between the embedding and the local structure of a node. To consider diverse heterostructures, as opposed to homogeneous graphs, [11] proposed a theoretically guaranteed technique called heterogeneous anonymous walk (HAW). [20] combines the traditional RW with ARW, using q anonymous walk landmarks to provide a q-dimensional subspace. Similarly, [15] introduced an attention module to model the varying importance of neighbors shown by their structural patterns.

Generalized Graph Diffusion. [7] introduced pathGCN, which learns the coefficients of generalized graph diffusion (GGD) to generalize the conventional graph convolution operator to a convolution operator over paths. Learning the coefficients makes the modeling very sensitive to the domain, and in the case of out-of-domain generalization, a big domain shift would produce poor performance. This motivates us to avoid using GGD, since it only gives an expectation and also mixes the information from different random walks of varying lengths into one conservative variable. Although the usage of GGD is very convenient in modeling, it is just a mean, and the variance could be high for different graph datasets. Another drawback of GGD is the memory limitation of storing these values, as well as the computational complexity of calculating them for big graphs.

Graph Random Features. [26] proposed general graph random features (g-GRF), which include many of the most popular graph kernels. g-GRF has subquadratic time complexity with respect to the number of nodes and can be distributed across machines. It has a modulation function which upweights and downweights the contribution from different random walks depending on their lengths.
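As a toy illustration of the modulation idea (our own sketch, not the actual g-GRF estimator of [26]): each sampled walk contributes to a node's feature with a weight that decays in the walk's length, so long walks are downweighted.

```python
import random

def modulated_walk_feature(adj, start, n_walks, max_len, beta=0.5, seed=0):
    """Toy node feature: endpoint counts of random walks from `start`, where a
    walk of length k contributes with modulation weight beta**k (decreasing
    in k, so longer walks matter less)."""
    rng = random.Random(seed)
    feat = {v: 0.0 for v in adj}
    for _ in range(n_walks):
        v = start
        k = rng.randrange(1, max_len + 1)   # walk length drawn at random
        for _ in range(k):
            v = rng.choice(adj[v])
        feat[v] += beta ** k / n_walks
    return feat

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
feat = modulated_walk_feature(adj, 0, n_walks=100, max_len=4)
```

With a decreasing modulation (here `beta**k`), the mass assigned to a walk shrinks geometrically with its length, which is the behaviour the text attributes to the modulation function of [26].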
Consider the matrices $K_\alpha(W) \in \mathbb{R}^{N \times N}$, where $\alpha = (\alpha_k)_{k=0}^{\infty}$ and $\alpha_k \in \mathbb{R}$. We define the following matrix:

$$K_\alpha(W) = \sum_{k=0}^{\infty} \alpha_k W^k \quad (3)$$

where $W$ is the weighted adjacency matrix. $K_\alpha$ can be considered as a mapping from a pair of graph nodes to a real number. A special case of $K_\alpha(W)$ is the p-step RW, which has the following form:

$$(\alpha I_N - L)^p = \sum_{k=0}^{p} \binom{p}{k} (\alpha - 1)^{p-k} W^k \quad (4)$$

Diffusion is another popular case, which has the following form:

$$\exp(-\sigma^2 L / 2) = \sum_{k=0}^{\infty} \frac{1}{k!} \left(-\frac{\sigma^2}{2}\right)^k L^k \quad (5)$$

The major goal of g-GRF is to construct a random feature map $\phi(i): V \to \mathbb{R}^l$ with $l \in \mathbb{N}$ that provides an unbiased approximation of $K_\alpha(W)$ in (3). [26] uses a modulation function $f$, which is a function of walk length, and a load value, which depends on the degree of the current node. The g-GRF of node $i$ is the average over $m$ random walks of arbitrary length starting from node $i$, producing a random feature vector $\phi_f(i) \in \mathbb{R}^n$. The novelty of this algorithm is that the random length gives an estimate over all kinds of RW, which reduces the time complexity significantly. Another important property of this algorithm is that the feature is an unbiased approximation of all behaviours. The walk length follows a uniform distribution, which makes no distinction between length one and a very high length. This means that the random feature becomes a very conservative variable. This is the motivation behind the modulation function, which diminishes the effect of long walks. The choice of decaying speed in the modulation function is still not based on theoretical backgrounds. [27] introduces quasi-Monte Carlo GRFs (q-GRFs) and proves that they yield lower-variance estimators of the 2-regularised Laplacian kernel under mild conditions. The idea of q-GRFs is to correlate different random walks to more efficiently explore the graph [3], and is motivated by orthogonal random features (ORF) [19]. The probability of choosing a neighbour $w$ of $v$ can be defined as:

$$p(v, w) = \frac{f(N(v, w))}{\sum_{z \in N_v} f(N(v, z))} \quad (6)$$

where $N(x, y)$ stands for the number of times that the edge $(x, y)$ has already been used; since we want to deprioritize edges that were already frequently visited, $f$ should be a decreasing function. This inductive bias is related to the self-repellent random walk in [6]. Note that g-GRF is not used in the present paper as a proxy of structural representation, since the size of the vector is equal to the number of vertices of the graph, which is very high for datasets like CORA that have a big number of nodes in their connected components. It also uses the degree of the node neighbour to update the g-GRF, which makes the RW very biased, since there are many examples where a node has low degree but high centrality.

2.2 Structural Information

[12] unified many graph representation learning methods, such as DeepWalk, Node2Vec and GraphSAGE, in a framework that implements encoder, decoder, similarity measures and loss functions distinctly. [29] leverages kernels instead of the encoder-decoder architecture of [12] and implements the kernel between two nodes using the feature smoothing method of the Nadaraya-Watson kernel weighted average.
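The neighbour-selection rule of Eq. (6) above can be sketched as follows (a minimal illustration under our own choice of decreasing function $f$, not the implementation of [27] or [6]):

```python
import math
import random
from collections import defaultdict

def self_repellent_walk(adj, start, length, f=lambda n: math.exp(-n), seed=0):
    """Walk in which a neighbour w of the current node v is chosen with
    probability proportional to f(N(v, w)), where N(v, w) counts how often
    the edge (v, w) has already been traversed (Eq. 6). Since f is
    decreasing, frequently used edges are deprioritized."""
    rng = random.Random(seed)
    used = defaultdict(int)            # N(x, y): edge usage counts
    walk, v = [start], start
    for _ in range(length):
        nbrs = adj[v]
        weights = [f(used[(v, w)]) for w in nbrs]
        w = rng.choices(nbrs, weights=weights)[0]
        used[(v, w)] += 1
        walk.append(w)
        v = w
    return walk

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
walk = self_repellent_walk(adj, 0, 12)
```

Any strictly decreasing $f$ (here $e^{-n}$) yields the self-repellent behaviour; the text notes that the choice of decay speed is not theoretically grounded.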
Methods in [29] and [12] ignore the local structure of two nodes and optimize node embeddings so that nearby nodes in the graph have similar embeddings. In many applications, two nodes that are far from each other in the global positioning may have very similar local structures, such as having a similar number of triangle structures. [28] resolved this research gap by introducing struc2vec, which generates structural context for nodes. The core of struc2vec is a variable that measures the ordered degree sequence of a particular set; the set is the ring of nodes at distance k. The structural distance between any two nodes can then be obtained recursively by measuring the distance between the two ordered degree sequences corresponding to the two nodes. Another type of structure arises in heterogeneous graphs. [11] proposes the heterogeneous anonymous walk (HAW) for representation learning on heterostructures. HAW can be seen as a generalization of ARW: two sequences that map to the same ARW in the original formulation can be distinguished by concatenating them with node types.

Methods so far do not integrate the rich RW representations with message passing methods. To address this research gap, [2] put forward a novel framework that integrates them by aggregating RW embeddings and learning the encoding of RWs end-to-end. However, they neglect the usage of ARW, which would make their modeling more generalisable. Another drawback of [2] is a limitation in the walk embedding: the entries in the vector are limited to two sequential node embeddings, which neglects the richness of the whole sequence representation and cuts off the nonlocal information in the sequence, since each sequence embedding can be seen as analogous to sentence embedding in natural language processing (NLP).

3 Proposed Method

3.1 Preliminaries

Given a graph $G = (V, E)$, we use $V$ and $E$ to denote its nodes and edges, respectively. The nodes are indexed by $v$ and $u$ such that $v, u \in V$, and an edge connecting nodes $v$ and $u$ is denoted by $(v, u) \in E$. The connectivity is encoded in the adjacency matrix $A \in \mathbb{R}^{n \times n}$, where $n$ is the number of nodes. $p$ denotes the width (hidden dimension size), while $l$ is the number of layers. The feature of node $v$ at layer $l+1$ is written as $h^{l+1}_v$.

3.2 Problem Formulation

Given a graph and its node attributes, the problem is to find a message passing algorithm that encodes the structural representation (SR) to guide the message passing of the original attributes (OA). We define the latent structure representation (LSR) as the representation of each node such that implicit local structures can be represented as a vector; GSAT provides such a message passing algorithm. Both skip-gram and ARW are used in graph embedding methods to learn meaningful node representations. Skip-gram, originally used in word embeddings (e.g., Word2Vec [22]), learns vector representations by maximizing the likelihood of predicting context nodes given a target node.
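To make the preprocessing concrete, here is a corpus-construction sketch (our own illustration; the function name is hypothetical): each sampled walk becomes one "sentence" whose "words" are anonymous-walk positions, ready to be fed to a skip-gram trainer.

```python
import random

def arw_corpus(adj, walks_per_node, walk_len, seed=0):
    """One 'sentence' per sampled random walk; its 'words' are the
    anonymous-walk tokens (first-occurrence positions, as strings)."""
    rng = random.Random(seed)
    corpus = []
    for v in adj:
        for _ in range(walks_per_node):
            walk, cur = [v], v
            for _ in range(walk_len):
                cur = rng.choice(adj[cur])
                walk.append(cur)
            # walk.index(u) + 1 is min pos(w, u), as in Definition 2
            corpus.append([str(walk.index(u) + 1) for u in walk])
    return corpus

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
corpus = arw_corpus(adj, walks_per_node=2, walk_len=3)
# `corpus` could then be passed to a skip-gram trainer,
# e.g. gensim's Word2Vec(corpus, sg=1) — usage not run here.
```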
In the context of graph embedding, anonymous random walks generate sequences of node visits that capture structural properties of the graph without considering specific node identities. These sequences serve as input to the skip-gram model, treating nodes in a walk similarly to words in a sentence. By optimizing the skip-gram objective on these walks, the model learns embeddings that capture local and global structural relationships in the graph.

3.3 Preprocessing

RWs (such as those used in Node2Vec [10] or DeepWalk [24]) can be interpreted similarly to skip-gram in the sense that nodes in the graph are treated as words, and the RW serves as the context that skip-gram attempts to predict. Just as skip-gram learns the relationship between a target word and its context words, graph-based random walk methods learn the relationships between nodes and their neighbors, encoding the local graph structure into meaningful node embeddings. Thus, RWs in graph learning methods serve a role analogous to context windows in skip-gram, both helping to capture local structure for effective representation learning. We use skipGram, which is a fast Word2Vec algorithm [22]. The ARW can be seen as a sentence which includes repeated words. Before combining the SR with the OA, we do a preprocessing step to calculate word embeddings through the skipGram algorithm. The sampling of RWs is distinct from the pretrained model, which is obtained in the preprocessing step.

3.4 Sampling Random Walks

The length of the walk can vary, and multiple walks are often sampled to capture diverse structural patterns within the graph. By sampling a series of RWs, it is possible to explore the local neighborhoods of nodes, which can then be used to learn meaningful embeddings or representations of the graph's structure. To obtain the LSR of each node, an ARW is drawn randomly. The mean vector of all word embeddings of an ARW started at node $v$ is the LSR of that node in GSAT, and we call it $h^{(s)}_v$.

3.5 Latent Structural Attention

In GSAT, RW embeddings are used to inform the attention mechanism to automatically discern which neighbors are more likely to contribute meaningfully to a node's updated representation, based on their structural proximity in the graph or any other implicit guidance of structural embeddings. Note that GSAT uses ARW and not the original RW; thus, the structural encoding is more effective. Nodes that share many random walk paths are considered structurally important to each other, and they can be identified as bottlenecks of the graph. Since each graph classification dataset, such as PROTEINS, has a different distribution of graph properties, different datasets respond differently to different walk lengths. The walk length could even be personalized per node, but in the modeling of GSAT we assume that there is no partiality among nodes: the same number of random walks as well as the same walk length is used for all nodes in the numerical experiments.
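Putting Sections 3.4 and 3.5 together, here is a minimal NumPy sketch of one GSAT message-passing step (our own reading of Eqs. (7)–(9); all weights are random stand-ins, whereas the real model learns W, a, V and b): attention scores are computed from the structural embeddings h^(s) only, while the messages carry the original attributes h^(orig).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gsat_layer(adj, h_s, h_orig, W, a, V, b):
    """One GSAT step: attention from structural embeddings (Eq. 8),
    attention-weighted aggregation of original attributes (Eq. 7),
    then the combine rule (Eq. 9)."""
    h_new = np.zeros((len(adj), V.shape[0]))
    for u in adj:
        nbrs = adj[u]
        # Eq. (8): scores from the concatenated structural embeddings only
        scores = np.array([relu(a @ np.concatenate([W @ h_s[u], W @ h_s[v]]))
                           for v in nbrs])
        alpha = np.exp(scores) / np.exp(scores).sum()
        # Eq. (7): aggregate the *original* attributes with those weights
        m = relu(sum(alpha[i] * (W @ h_orig[v]) for i, v in enumerate(nbrs)))
        # Eq. (9): combine
        h_new[u] = relu(V @ m + b)
    return h_new

rng = np.random.default_rng(0)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
h_s, h_orig = rng.normal(size=(4, 5)), rng.normal(size=(4, 5))
W, a = rng.normal(size=(6, 5)), rng.normal(size=12)
V, b = rng.normal(size=(3, 6)), rng.normal(size=3)
h1 = gsat_layer(adj, h_s, h_orig, W, a, V, b)   # shape (4, 3), entries >= 0
```

For simplicity the sketch shares one projection W between the structural and original attributes, as the shared symbol in Eqs. (7) and (8) suggests; a multi-headed version would average several such layers.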
In GSAT, these random walk embeddings are used to create the attention mechanism, which automatically figures out which neighbors are more likely to contribute meaningfully to a node's updated representation based on their structural proximity or any other implicit justification in the graph. We decouple the feature vector into two different parts, namely the structural attributes $h^{(s)}_u$ and the original attributes $h^{(orig)}_u$. g-GRF and ARW are examples of structural attributes, which carry the superscript $s$ in our terminology. Like other GNNs, deep GATs suffer from over-smoothing, where repeated message passing causes node representations to become indistinguishable, reducing the model's expressiveness. One cornerstone of the success of GAT is the fact that, unlike GCNs, which use fixed-weight averaging, GATs assign different importance (attention scores) to each neighbor, allowing more influential nodes to contribute more to the final representation. So, we draw inspiration from the graph attention network (GAT) [30], but the attention weights are not based on the original attributes and are calculated only from $h^{(s)}_v$, the ARW embeddings of all the neighbours. Thus, the aggregated messages at layer $k$ are:

$$m^{(k)}_{N(u)} = \sigma\left(\sum_{v \in N(u)} \alpha_{u,v} W h^{(orig)}_v\right) \quad (7)$$

where the attention weights are as follows:

$$\alpha_{u,v} = \frac{\exp\left(\mathrm{ReLU}\left(a^T [W h^{(s)}_u \| W h^{(s)}_v]\right)\right)}{\sum_{v' \in N_u} \exp\left(\mathrm{ReLU}\left(a^T [W h^{(s)}_u \| W h^{(s)}_{v'}]\right)\right)} \quad (8)$$

Finally, the nodes are updated using the following combine rule:

$$h^{(k+1)}_u = \mathrm{ReLU}\left(V^{(k)} m^{(k)}_{N(u)} + b^{(k)}\right) \quad (9)$$

where $V^{(k)}$ denotes a trainable weight matrix and $b^{(k)}$ is a bias term. Note that the present work only uses ARW to calculate the attention weights. A multi-headed formulation could easily be developed to include other types of structural features, such as ARW, personalized PageRank or GGD, to obtain a more expressive representation of structural attributes. Our modeling is a type of PNA, since each node has a different local structure, and attending to the neighbours of each node is personalized and unique to that point. Attention mechanisms can be noisy or overly focused on certain parts of the input. The outputs of multiple heads are averaged, which leads to a more robust representation. Thus, GSAT is implemented based on multi-headed attention, in a similar spirit to the original multi-headed GAT.

3.6 GSAT as Generalization of GAT

Theorem 1. Let $G = (V, E, X)$ be a graph with node set $V$, edge set $E$, and node feature matrix $X \in \mathbb{R}^{|V| \times d}$. A Graph Attention Network (GAT) with global pooling is used to generate a graph-level representation $h_G$ for classification. If the GAT does not incorporate structural graph information (ARW embedding), then there exist non-isomorphic graphs $G_1$ and $G_2$ such that:

$$G_1 \not\simeq G_2 \quad \text{but} \quad h_{G_1} = h_{G_2} \quad (10)$$

leading to misclassification and poor generalization.

Proof. Consider a GAT layer that updates node embeddings using self-attention. Let $h^{(l)}_i$ be the hidden representation of node $i$ at layer $l$.
The update rule for GAT is given by:

$$h^{(l+1)}_i = \sigma\left(\sum_{j \in N(i)} \alpha^{(l)}_{ij} W h^{(l)}_j\right) \quad (11)$$

where $W$ is a trainable weight matrix, $\sigma$ is a nonlinearity, and $\alpha^{(l)}_{ij}$ is the learned attention coefficient:

$$\alpha^{(l)}_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(a^\top [W h^{(l)}_i \| W h^{(l)}_j]\right)\right)}{\sum_{k \in N(i)} \exp\left(\mathrm{LeakyReLU}\left(a^\top [W h^{(l)}_i \| W h^{(l)}_k]\right)\right)} \quad (12)$$

The attention mechanism allows nodes to weigh their neighbors differently but does not inherently incorporate global graph structure unless it is explicitly encoded. After $L$ layers of GAT, a global pooling function $P$ aggregates node embeddings into a single graph-level representation:

$$h_G = P(\{h^{(L)}_i \mid i \in V\}) \quad (13)$$

where $P$ is typically a sum, mean, or max function. Since $P$ is permutation-invariant, it treats graphs with the same multiset of node embeddings as identical. We now exhibit the structural ambiguity that arises without structural information. Consider two non-isomorphic graphs $G_1$ and $G_2$ with the same node features but different structures. Since GAT message passing is purely feature-driven without explicit structural encoding, it follows that:

$$h^{(L)}_i(G_1) = h^{(L)}_i(G_2) \quad \forall i \in V \quad (14)$$

leading to the same global representation:

$$h_{G_1} = P(\{h^{(L)}_i(G_1)\}) = P(\{h^{(L)}_i(G_2)\}) = h_{G_2} \quad (15)$$

Since $G_1 \not\simeq G_2$ but $h_{G_1} = h_{G_2}$, the classifier cannot distinguish them, causing misclassification and poor generalization. To ensure that $G_1$ and $G_2$ are mapped to distinct embeddings, structural encodings (ARW embeddings) must be included in the node representations:

$$X' = [X \| \mathrm{ARW}] \quad (16)$$

Incorporating ARW alters the attention coefficients $\alpha_{ij}$ and the final embeddings $h_G$, ensuring that $h_{G_1} \neq h_{G_2}$, which improves generalization. ⊓⊔

3.7 Computational Complexity

To calculate the computational complexity of GSAT, we break it into two parts, namely the attention calculation for message passing and the hierarchical pooling based on edgePool [5]. Assume the structural size has dimension $F$, $H$ is the number of attention heads, $E$ is the number of edges, and $N$ is the number of nodes. Then GSAT has a computational complexity of $O(HEF) + O(N \log N)$.

4 Experiments

Note that all experiments in the present work do not concatenate the structural features with the original features. However, (16) could be considered as a more general formulation, which provides a framework for future works and experiments with more hyperparameters. The hyperparameters used in our experiments are optimized over the values in Table 1. The learning rate starts at $10^{-3}$ and is gradually reduced by 90 percent every 20 epochs. Five heads are used, since single-headed attention produced very noisy results with high variance in the performance. Other hyperparameters are mentioned in Table 7.

Table 1. Hyperparameters

hyperparameter          value
batch                   32
num pooling layers      14
heads                   5
epochs                  100
hidden                  64
dynamic learning rate   1e-3
optimizer               Adam

4.1 Dataset

MUTAG, PROTEINS, DD, and NCI1 are widely used datasets for graph classification, particularly in bioinformatics and cheminformatics [32]. MUTAG consists of molecular graphs where nodes represent atoms and edges represent chemical bonds, with the task of classifying compounds based on their mutagenic properties. PROTEINS contains protein structure graphs, where nodes correspond to secondary structure elements and edges represent interactions, with the aim of classifying proteins into functional categories. DD is a larger and more complex protein dataset, making it useful for evaluating models on diverse biological structures. NCI1, derived from the National Cancer Institute, consists of molecular graphs used to predict anti-cancer activity. Their statistics are shown in Table 2.

Table 2.
Bioinformatics dataset statistics

Dataset        MUTAG  Proteins  DD     NCI1
Graphs         188    1,113     1,178  4,110
Classes        2      2         2      2
Average Nodes  17.9   39.1      284.3  29.8

4.2 Comparison With Baselines

For a fair comparison, we developed two versions of GSAT, namely GSAT-hp and GSAT-gp, which correspond to hierarchical and global pooling respectively. As Table 4 shows, the hierarchical pooling version (GSAT-hp) produced better results than the global pooling version (GSAT-gp), as expected, since mean pooling simply eliminates the information provided by the graph topology, which is essential for efficient graph classification. Note that edgePool [5] is used to model hierarchical graph pooling in GSAT-hp. One advantage of using edgePool is the fact that there is no requirement to set the number of clusters in advance; this allows the model to naturally find an appropriate number of clusters in each pooling layer and respects the distribution of the dataset. Table 3 shows that GSAT slightly outperforms the baselines on the NCI1 dataset. It also shows that the performance on the MUTAG dataset is enhanced by 6 percent in comparison with MinCutPool, which is a well-recognized approach to hierarchical graph pooling [1]. Some other important hierarchical pooling methods are [9], [25], [18], [33].

Table 3. Graph classification accuracies on four benchmarks (percentage). The shown accuracies are mean and standard deviation over 10 different runs. We use bold to highlight wins and underline to highlight the second best.

Model            MUTAG       Proteins    DD          NCI1
TopKPool [9]     67.61±3.36  70.48±1.01  73.63±0.55  67.02±2.25
ASAP [25]        77.83±1.49  73.92±0.63  76.58±1.04  71.48±0.42
SAGPool [18]     73.67±4.28  71.56±1.49  74.72±0.82  67.45±1.11
DiffPool [33]    79.22±1.02  73.03±1.00  77.56±0.41  62.32±1.90
MinCutPool [1]   79.17±1.64  74.72±0.48  78.22±0.54  74.25±0.86
GSAT-hp (ours)   86.33±0.55  74.29±0.76  77.35±1.52  75.12±1.17

Table 4. Comparative study of four models (percentage). The shown accuracies are mean and standard deviation over 10 different runs. We use bold to highlight wins and underline to highlight the second best.

Model           MUTAG       Proteins    DD          NCI1
GCN-gp          80.61±3.36  72.48±1.01  73.63±0.55  67.02±2.25
GAT-gp          81.83±1.49  73.13±0.63  76.58±1.04  71.48±0.42
GIN-gp          81.67±4.28  72.56±1.49  71.72±0.82  67.45±1.11
GSAT-gp (ours)  82.29±2.72  73.92±0.41  76.92±0.39  72.21±0.43

4.3 Ablation Study for Walk Length

Designing an optimal walk length for each graph dataset distribution is crucial, particularly in protein graph datasets, where capturing motifs and high-order structures significantly impacts model performance. A carefully chosen walk length helps to effectively capture these motifs: short walks may primarily encode local residue interactions, while longer walks can reveal higher-order structural patterns. If the walk length is too short, the model may fail to recognize essential long-range dependencies critical for functional characterization. Conversely, excessively long walks may introduce noise by aggregating distant, functionally irrelevant nodes, diluting meaningful structural signals.
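For intuition, the receptive field that a walk of length k can at best cover is the k-hop neighbourhood, which can be computed with a breadth-first search (a generic sketch, not code from the paper):

```python
from collections import deque

def k_hop_neighborhood(adj, v, k):
    """All nodes reachable from v in at most k hops: the best-case
    receptive field of a walk of length k starting at v."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

# On a path graph 0-1-2-3-4, two hops from node 0 reach {0, 1, 2}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

A short walk is confined to a small such neighbourhood (local motifs), while a long walk can in principle span the whole graph (global topology), which is the trade-off the ablation below examines.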
Therefore, designing the walk length in alignment with the inherent structural properties of the dataset ensures that graph learning models can accurately capture biologically relevant patterns while minimizing unnecessary information propagation. We can interpret the results of our ablation study as follows. The walk length in random walks influences graph classification in multiple ways, particularly for the PROTEINS and MUTAG datasets, as shown in Table 5 and Table 6 respectively.

Table 5. Ablation study for the effect of walk length on the PROTEINS dataset (percentage). The shown accuracies are mean and standard deviation over 10 different runs. We use bold to highlight wins and underline to highlight the second best.

model   walk_length=10  walk_length=20  walk_length=40
GIN-gp  71.61±1.06      71.43±1.26      71.32±1.46
GCN-gp  72.61±3.36      72.43±1.36      72.52±3.36
GAT-gp  72.83±1.59      72.71±1.14      72.61±2.12

Table 6. Ablation study for the effect of walk length on the MUTAG dataset (percentage). The shown accuracies are mean and standard deviation over 10 different runs. We use bold to highlight wins and underline to highlight the second best.

model   walk_length=10  walk_length=20  walk_length=40
GIN-gp  82.11±1.04      82.43±1.27      82.12±1.45
GCN-gp  82.28±3.25      82.11±1.24      82.35±1.70
GAT-gp  82.61±1.49      82.31±3.18      82.37±2.12

Here is how different walk lengths impact classification performance. A small walk length (short walks) captures the local neighborhood structure, which preserves fine-grained structural details and is useful for distinguishing proteins based on small functional motifs, but it fails to capture global graph connectivity and global topology. This is the reason we experimented with medium walks, which balance local and global information, provide more context about neighborhood connectivity, and help capture structural variations at a mesoscopic scale. Thus, they work well for graphs where medium-scale topology is important, as is the case for graph classification on the PROTEINS dataset. The third extreme case is the long walk, which approximates the global graph structure. The drawback of a large walk length is that it may introduce noise if the RW drifts too far from meaningful substructures. On the other hand, it captures the overall graph connectivity and large-scale properties, but it dilutes the importance of local motifs, which are critical for graph classification.

4.4 Sensitivity Analysis

The effect of ARW is rooted in three main hyperparameters. The first one is structural_size, which is the size of the structural features that skipGram has been trained on. The second parameter is walk_length, which is the length of the walks starting from each node. Finally, num_RW_per_node is the number of RWs that are performed per node; note that this parameter is directly related to the corpus size when training the skipGram model. Here we study the sensitivity of these parameters on the final graph classification performance. From a qualitative point of view, when structural_size increases, more structural information around each node is represented, and therefore we expect that the performance would increase. However, this increase in performance is limited by computational resource limitations, since the attention weights should be calculated from these high-dimensional features for all nodes. Similarly, increasing walk_length can capture the local neighbourhood at a higher radius and may include bottlenecks in the graph that are responsible for oversquashing. Increasing num_RW_per_node may reduce the noise of the structural modeling and produces a robust representation of structure, since nodes with high centrality will be implicitly captured by increasing this hyperparameter.

Table 7. Effect of hyperparameters on the performance of PROTEINS graph classification (percentage). The shown accuracies are mean and standard deviation over 10 different runs. We use bold to highlight wins and underline to highlight the second best.

config  structural_size  walk_length  num_RW_per_node  performance
1       10               10           30               71.41±0.91
2       10               20           30               71.15±0.69
3       50               10           30               74.29±0.57
4       50               20           30               74.92±0.59
5       100              20           30               73.74±0.36
6       100              20           60               73.51±0.84
7       200              20           60               71.27±1.13
8       400              20           60               70.83±0.74

5 Conclusion

We have introduced a novel method called GSAT, which can be seen as a generalization of GAT. GSAT leverages the structural embeddings of nodes to guide the attention in message passing, learning to automatically manage the edge strengths in the message passing guided by the structural information of individual nodes. The effect of walk length and walk embedding size is also analyzed.

References

1. Bianchi, F.M., Grattarola, D., Alippi, C.: Spectral clustering with graph neural networks for graph pooling. In: Proceedings of the 37th International Conference on Machine Learning. ICML'20, JMLR.org (2020)
2. Chen, D., Schulz, T., Borgwardt, K.: Learning long range dependencies on graphs via random walks (06 2024)
3. Choromanski, K.: Taming graph kernels with random features (04 2023)
4. Cosmo, L., Minello, G., Bicciato, A., Bronstein, M.M., Rodolà, E., Rossi, L., Torsello, A.: Graph kernel neural networks. IEEE Transactions on Neural Networks and Learning Systems pp. 1–14 (2024)
5. Diehl, F.: Edge contraction pooling for graph neural networks. arXiv preprint arXiv:1905.10990 (2019)
6. Doshi, V., Hu, J., Eun, D.: Self-repellent random walks on general graphs - achieving minimal sampling variance via nonlinear markov chains (extended abstract). pp. 8394–8398 (08 2024). https://doi.org/10.24963/ijcai.2024/929
7. Eliasof, M., Haber, E., Treister, E.: pathGCN: Learning general graph spatial operators from paths (07 2022)
8. Feng, A., You, C., Wang, S., Tassiulas, L.: KerGNNs: Interpretable graph neural networks with graph kernels. Proceedings of the AAAI Conference on Artificial Intelligence 36, 6614–6622 (06 2022)
9. Gao, H., Ji, S.: Graph representation learning via hard and soft assignment. In: International Conference on Machine Learning (ICML). pp. 1823–1832. PMLR (2019)
10. Grover, A., Leskovec, J.: node2vec: Scalable feature learning for networks (2016)
11. Guo, X., Jiao, P., Zhang, W., Pan, T., Jia, M., Shi, D., Wang, W.: Representation learning on heterostructures via heterogeneous anonymous walks. IEEE Transactions on Neural Networks and Learning Systems 35(7), 9538–9552 (2024)
12. Hamilton, W.L., Ying, R., Leskovec, J.: Representation learning on graphs: Methods and applications. IEEE Data Eng. Bull. 40, 52–74 (2017)
13. Ivanov, S., Burnaev, E.: Anonymous walk embeddings. In: Dy, J., Krause, A. (eds.) Proceedings of the 35th International Conference on Machine Learning.
Proceedings of Machine Learning Research, vol. 80, pp. 2186–2195. PMLR (10–15 Jul 2018)
14. Jin, D., Wang, R., Ge, M., He, D., Li, X., Lin, W., Zhang, W.: RAW-GNN: Random walk aggregation based graph neural network. ArXiv abs/2206.13953 (2022)
15. Jin, Y., Song, G., Shi, C.: GraLSP: Graph neural networks with local structural patterns. ArXiv abs/1911.07675 (2019)
16. Kalofolias, J., Welke, P., Vreeken, J.: SUSAN: The structural similarity random walk kernel. pp. 298–306 (04 2021)
17. Kipf, T., Welling, M.: Semi-supervised classification with graph convolutional networks. ArXiv (2016)
18. Lee, J., Lee, I., Kang, J.: Self-attention graph pooling. In: International Conference on Machine Learning (ICML). pp. 3734–3743. PMLR (2019)
19. Likhosherstov, V., Choromanski, K., Dubey, K.A., Liu, F., Sarlós, T., Weller, A.: Chefs' random tables: Non-trigonometric random features. ArXiv abs/2205.15317 (2022)
20. Long, Q., Jin, Y., Wu, Y., Song, G.: Theoretically improving graph neural networks via anonymous walk graph kernels. In: Proceedings of the Web Conference 2021. pp. 1204–1214. WWW '21, Association for Computing Machinery, New York, NY, USA (2021)
21. Micali, S., Zhu, Z.A.: Reconstructing markov processes from independent and anonymous experiments. Discrete Applied Mathematics 200, 108–122 (2016)
22. Mikolov, T., Chen, K., Corrado, G.S., Dean, J.: Efficient estimation of word representations in vector space. In: International Conference on Learning Representations (2013)
23. Nguyen, K., Nguyen, T., Ho, N., Nguyen, K., Nong, H., Nguyen, V.: Revisiting over-smoothing and over-squashing using ollivier's ricci curvature (11 2022)
24. Perozzi, B., Al-Rfou, R., Skiena, S.: DeepWalk: online learning of social representations. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 701–710. KDD '14, Association for Computing Machinery, New York, NY, USA (2014)
25. Ranjan, E., Sanyal, S., Liu, C., Peng, J., Ester, M.: ASAP: Adaptive structure aware pooling for learning hierarchical graph representations. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). vol. 34, pp. 5470–5477 (2020)
26. Reid, I., Choromanski, K., Berger, E., Weller, A.: General graph random features. In: International Conference on Learning Representations (2023)
27. Reid, I., Choromanski, K., Weller, A.: Quasi-Monte Carlo graph random features. In: Proceedings of the 37th International Conference on Neural Information Processing Systems. NIPS '23, Curran Associates Inc., Red Hook, NY, USA (2024)
28. Ribeiro, L.F., Saverese, P.H., Figueiredo, D.R.: struc2vec: Learning node representations from structural identity. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 385–394. KDD '17, Association for Computing Machinery, New York, NY, USA (2017)
29. Tian, Y., Zhao, L., Peng, X., Metaxas, D.: Rethinking kernel methods for node representation learning on graphs (10 2019). https://doi.org/10.48550/arXiv.1910.02548
30. Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., Bengio, Y.: Graph attention networks (2018)
31. Yan, Y., Hu, Y., Zhou, Q., Wu, S., Wang, D., Tong, H.: Topological anonymous walk embedding: A new structural node embedding approach. In: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. pp. 2796–2806. CIKM '24, Association for Computing Machinery, New York, NY, USA (2024)
32. Yanardag, P., Vishwanathan, S.: Deep graph kernels. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1365–1374. KDD '15, Association for Computing Machinery, New York, NY, USA (2015)
33. Ying, R., You, J., Morris, C., Ren, X., Hamilton, W.L., Leskovec, J.: Hierarchical graph representation learning with differentiable pooling. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. pp. 4805–4815. NIPS'18, Curran Associates Inc., Red Hook, NY, USA (2018)
34. Zheng, X., Liu, Y., Pan, S., Zhang, M., Jin, D., Yu, P.S.: Graph neural networks for graphs with heterophily: A survey. ArXiv (2022)
https://arxiv.org/abs/2505.21288v1
Complex System Diagnostics Using a Knowledge Graph-Informed and Large Language Model-Enhanced Framework

Saman Marandi1, Yu-Shu Hu2, Mohammad Modarres1
1Center for Risk and Reliability, University of Maryland, MD, USA; 2DML Inc., Hsinchu, Taiwan
Corresponding Author: smarandi@umd.edu
https://arxiv.org/abs/2505.21291v1

Abstract

In this paper, we present a novel diagnostic framework that integrates Knowledge Graphs (KGs) and Large Language Models (LLMs) to support system diagnostics in high-reliability systems such as nuclear power plants. Traditional diagnostic modeling struggles when systems become too complex, making functional modeling a more attractive approach. Our approach introduces a diagnostic framework grounded in the functional modeling principles of the Dynamic Master Logic (DML) model. It incorporates two coordinated LLM components, including an LLM-based workflow for automated construction of DML logic from system documentation and an LLM agent that facilitates interactive diagnostics. The generated logic is encoded into a structured KG, referred to as KG-DML, which supports hierarchical fault reasoning. Expert knowledge or operational data can also be incorporated to refine the model's precision and diagnostic depth. In the interaction phase, users submit natural language queries, which are interpreted by the LLM agent. The agent selects appropriate tools for structured reasoning, including upward and downward propagation across the KG-DML. Rather than embedding KG content into every prompt, the LLM agent distinguishes between diagnostic and interpretive tasks. For diagnostics, the agent selects and executes external tools that perform structured KG reasoning. For general queries, a Graph-based Retrieval-Augmented Generation (Graph-RAG) approach is used, retrieving relevant KG segments and embedding them into the prompt to generate natural explanations. A case study on an auxiliary feedwater system demonstrated the framework's effectiveness, with over 90% accuracy in key elements and consistent tool and argument extraction, supporting its use in safety-critical diagnostics.

Keywords: Large Language Models, Knowledge Graphs, Diagnostics, Dynamic Master Logic (DML)

1. Introduction

The analysis of complex engineered systems, particularly those in high-reliability and safety-critical industries such as nuclear power plants, requires systematic approaches to assess system integrity, reliability, and performance. Traditional diagnostic tools have predominantly relied on event-based modeling, where the outcomes of specific failure pathways, faults, or abnormal initiating events are analyzed. Although effective in some domains, event-based approaches become extremely complex, incomplete, and inadequate when applied to complex, interconnected systems with numerous interdependencies. As the scale and complexity of systems increase, functional modeling approaches are considered more suitable, where the emphasis is placed on understanding the roles, dependencies, and contributions of system components toward achieving primary system objectives. Functional modeling involves the development of system models based on the actions and relationships of their constituent parts [1], [2]. These models are inherently hierarchical, enabling a systematic decomposition of the system into goals, functions, and subfunctions. One important framework within this category is the Dynamic Master Logic (DML) model, introduced by Hu and Modarres [3], which represents system behavior through a structured hierarchy linking functional objectives to underlying structures. By organizing systems into functional and structural layers, DML enables critical causal pathways, system interdependencies, and fault propagation mechanisms to be systematically analyzed. This hierarchical framework supports the tracing of failures from system-level objectives down to elemental components, offering a powerful tool for diagnostic analysis.

Although DML models provide a robust framework for system diagnostics, their construction, maintenance, and interpretation require significant manual effort and extensive domain expertise. As system complexity grows, the burden associated with constructing and interacting with large DML models increases accordingly. In response to these challenges, powerful Artificial Intelligence (AI) tools such as Large Language Models (LLMs) [4] have been recognized as offering opportunities to enhance the development, interaction, and usability of functional models for diagnosis of faults and failures in complex engineering systems. LLMs have demonstrated strong capabilities in natural language understanding, summarization, and reasoning across a wide range of domains. However, certain limitations such as hallucination and restricted domain-specific reasoning have been observed. To address these challenges and improve diagnostic reliability, structured knowledge representations such as Knowledge Graphs (KGs) [5] can be used alongside LLMs to enable more consistent and interpretable reasoning by guiding diagnostic logic through external tools rather than unconstrained language generation. The integration of LLMs with KGs provides a mechanism to enhance the accessibility, organization, and interpretability of complex functional models. By grounding LLM interactions in verified domain knowledge, the risks associated with hallucination can be reduced and the transparency of diagnostic outputs can be improved.
Given the increasing complexity of engineered systems and the need for scalable, interpretable diagnostic tools, there is a strong motivation to explore AI-driven frameworks that combine domain knowledge with language-based reasoning. In this paper, an approach is proposed that leverages LLMs and KGs to support and enhance interaction with DML models, to reduce manual effort, improve transparency, and facilitate more efficient fault analysis in safety-critical applications.

The remainder of this paper is organized as follows. Section 2 reviews related work on DML modeling, large language models, and the integration of KGs in diagnostic systems. Section 3 provides an overview of the research approach. Section 4 presents the proposed diagnostic framework, detailing both the model construction and interaction phases. Section 5 describes a case study involving an auxiliary feedwater system in a nuclear power plant, illustrating the application of the framework. Section 6 reports the evaluation results from both the model construction and interaction components, based on repeated runs using the case study. Finally, Section 7 concludes the paper with key findings and discusses directions for future work.

2. Background

2.1. From Traditional Diagnostics to Functional Modeling

Traditional diagnostic approaches often rely on modeling discrete events, failure modes, or symptom triggers. These include methods like Fault Tree Analysis (FTA) [6], [7], Event Trees (ET) [8], [9], and rule-based [10], [11] expert systems that map observed events to likely faults. Such models focus on specific failure events and their consequences. While these event-driven models can effectively represent known failure sequences, they typically require an exhaustive list of fault-event combinations, invariably making them incomplete.
If a particular sequence is not anticipated during model development, the system may fail to diagnose it. Moreover, as systems grow in complexity, tracing failure pathways becomes increasingly infeasible using event-based logic alone.

Functional modeling reflects a designer's intent by capturing system goals and functions rather than enumerating events. These models encode the functional expectations and logical dependencies of the system, enabling the detection of faults based on deviations from expected behavior. These models are hierarchical, representing goals, functions, subfunctions, and supporting structures. They emphasize how components contribute to overall functions, enabling reasoning about failures in terms of lost or abnormal functionality, such as a pump failing to provide flow or a sensor failing to deliver information, rather than focusing solely on specific failure events [12].

Functional approaches address several limitations of event-based models. First, they support completeness by enabling a structured analysis of the system's functional architecture. This allows for the identification of faults that may cause the loss of required functions, even during the design or concept stage before any failure data is available. Second, functional models naturally handle complex, multi-fault scenarios. Since they capture interactions through functional dependencies, they can reason about multiple simultaneous failures or cascading effects without relying on every combination explicitly. Third, they promote generality. The same functional model can be applied throughout the system's life cycle and across various analysis tasks. These include design-level Failure Mode and Effects Analysis (FMEA), runtime diagnosis, and what-if analysis. This flexibility is possible because the model abstracts away from specific event sequences and instead focuses on invariant functional relationships [13].

2.2. DML Model Applications

For system-level modeling, DML models are success logics in the form of powerful hierarchies to represent system knowledge [3], [14], [15], [16]. This hierarchical representation becomes particularly valuable in safety-critical domains. System safety focuses on applying engineering principles, standards, and techniques to minimize risk while ensuring a system remains effective, reliable, and cost-efficient throughout its life cycle. Using a DML, the degree of success (or failure), approximate full-scale system physical values, and transition effects amongst components can be analyzed. DML provides an effective model for describing the causal effects of failures or disturbances in complex systems [17].

Figure 1 conceptually illustrates the DML framework, highlighting the hierarchical decomposition from objectives to basic elements and the interdependencies between the functional and structural hierarchies. The functional hierarchy (top-down) moves from objectives to functions and sub-functions, capturing the system's purpose and behavior. The structural hierarchy decomposes the system into elements and basic components, reflecting its physical or logical composition. Arrows indicate causal ("Why–How") and compositional ("Part-of") relationships, highlighting interdependencies. This structure supports systematic analysis of complex systems by linking high-level goals to low-level elements.

Figure 1. Conceptual DML Model [15]

Two key causal relationships can be extracted from DML. The first involves determining the ultimate effect of a failure, and the second involves identifying the paths through which a function can
be achieved or a subsystem can successfully operate. It applies reductionist principles, where qualities represent functions and goals, while objects and relationships are structured through success trees and logical modeling including Boolean, physical, and fuzzy logic [14]. This integration enhances DML's ability to model time-dependent behaviors and fault propagation within complex systems.

This model has been applied across a variety of modeling applications. In nuclear power plants, DML has been used to model Direct Containment Heating in Pressurized Water Reactors [16]. In renewable energy, it has supported reliability analysis of geared wind turbines [18]. DML has also proven effective in analyzing interactions between hardware, software, and human elements in cyber-physical systems [19], [20]. In the aerospace sector, it has been used to identify critical points in system reliability [21]. Additionally, DML has supported quality assurance across the software development life cycle [22].

It is important to note that several naming conventions have been used to describe what is fundamentally a single family of DML models with common underlying principles. The earliest form of this modeling approach was the Goal Tree Success Tree (GTST) [12], which laid the foundation for functional reasoning in complex systems. This was followed by the development of the Master Plant Logic Diagram (MPLD) [23], [24], originally created for use in nuclear power plants to represent plant logic and support reliability assessment. Over time, MPLD evolved into several extensions and refinements. A prominent variant is the GTST with Master Logic Diagram (MLD), referred to as GTST-MLD [17], [22]. This version combines a functional hierarchy, represented by the GTST, with a structural model, represented by the MLD, that captures component relationships and dependencies, supporting both dynamic and static system representations.
Another well-established form is the Dynamic Master Logic Diagram (DMLD) [14], which emphasizes the modeling of uncertain, evolving, and time-dependent behaviors in complex systems. While terminology may differ depending on modeling emphasis, these approaches are functionally equivalent and share the common goal of representing system logic for diagnostic reasoning and reliability analysis. Throughout this paper, this family of models is collectively referred to as DML.

2.3. LLMs and KGs in Fault Diagnostics

Recent advancements in LLMs have accelerated their use in fault diagnostics, particularly in industrial and safety-critical domains. Traditional fault diagnosis methods rely heavily on rule-based models or historical fault logs, which require extensive domain expertise and are often inefficient in handling complex, multi-layered system interactions. Integrating LLMs with diagnostic methods makes timely detection of faults more practical and easier to scale by enabling an understanding of fault causes and providing decision support through NLP. As demonstrated in [25], a diagnostic model was used to detect faults in a nuclear power plant, while an LLM acted as an interface to explain fault conditions and their potential causes to human operators, enhancing situational awareness and response. A study introduced FD-LLM [26], which framed machine fault diagnosis as a text classification task by converting vibration signals into textual token
sequences or summaries. Fine-tuned LLaMA models were employed, achieving superior performance compared to traditional deep learning models and demonstrating that LLMs can effectively process non-textual diagnostic information when appropriately structured. Improvements were reported in accurately identifying fault root causes from symptom descriptions, highlighting the potential of LLMs for natural language-based symptom interpretation and fault retrieval.

To enhance the reliability and reasoning capabilities of LLM-based diagnostics, several studies have proposed integration with KGs. One approach, Root-KGD [27], combined industrial process data with a structured KG representing system topology and causal dependencies, enabling more accurate root cause failure identification by guiding LLM reasoning through formalized domain knowledge. In the context of CNC machine fault diagnosis [28], a KG-embedded LLM architecture was proposed. A machining-process KG was automatically constructed from maintenance records, and its structured information was embedded into the LLM to support fault classification and provide explainable fault identification through natural language outputs.

Integration of KGs into fault diagnostics has been explored to address the need for structured causal reasoning. A multi-level KG was developed to represent rotating machinery faults [29], and Bayesian inference was applied to trace symptom-cause pathways within the graph, achieving a 91.1% diagnostic accuracy under missing data conditions. This demonstrated that the structured representation of symptom-cause relationships within a KG can enhance diagnostic robustness. The combination of structured knowledge retrieval and LLM reasoning enabled interpretable and accurate fault diagnosis based on free-form symptom descriptions. The combination of LLMs with KGs has also demonstrated enhanced fault reasoning capabilities.
A hybrid model for aviation assembly diagnostics achieved 98.5% accuracy in fault localization and troubleshooting through subgraph-based reasoning [30]. In vehicle fault diagnostics, a KG-driven analysis system was developed that maps error codes and system alerts to potential failure causes [31]. This system leveraged LLMs to process unstructured diagnostic data, such as error logs and maintenance reports, transforming it into structured representations within a KG. A reasoning framework was introduced to infer root causes by linking symptoms to failure mechanisms stored in the KG. In [32], a KG-based in-context learning approach was proposed to enhance fault diagnosis in industrial sensor networks. A domain-specific KG was constructed to encode expert knowledge, and a long-length entity similarity retrieval mechanism was used to select relevant knowledge, which was then supplied to a large language model for causal reasoning over fault symptom text. The method demonstrated improved fault localization accuracy and enhanced the interpretability of diagnostic outputs compared to traditional LLM approaches.

These studies demonstrated that incorporating LLM-driven fault reasoning improved diagnostic accuracy and reduced troubleshooting time compared to traditional rule-based or statistical models. Structuring fault data within a KG provided traceable and explainable reasoning paths, assisting engineers and technicians in understanding failure causes. Collectively, these studies illustrate the growing role of LLMs and KGs in fault diagnostics. LLMs enable flexible interpretation of symptom descriptions and support natural language interaction, while KGs structure domain-specific knowledge to guide
reasoning processes. Their integration shows promise for enhancing fault retrieval and root cause analysis across various technical domains.

3. Research Overview

As discussed in Section 2, although LLMs and KGs show promise for assisting with system diagnostics from textual documentation, their role in structured functional decomposition is not fully developed. Similarly, KGs are primarily used for retrieving fault information rather than representing system logic or dependencies, and few studies have explored integrating computational reasoning layers to enable real-time diagnostic analysis. To address these gaps, this paper introduces an LLM-informed, KG-based diagnostic framework for constructing and using the DML model derived from system design and operational documents. The framework leverages LLMs to support diagnostics, reliability assessment, and decision-making for a specific engineering system. It has three main objectives:

a. Automate the generation of scalable functional models by extracting structured relationships from system documentation, including system descriptions and specifications.

b. Enable interactive, hierarchical fault analysis through natural language-driven upward and downward reasoning. Upward reasoning evaluates how individual component failures affect higher-level system functions, while downward reasoning explores which functional paths and component conditions must be satisfied to maintain or restore system objectives.

c. Support interpretive analysis of system goals, functions, and dependencies by leveraging Graph-based Retrieval-Augmented Generation (Graph-RAG) to retrieve and contextualize relevant segments of the KG in response to user queries.

4. Proposed Approach

Figure 2 presents the LLM-Informed Diagnostic Framework, which consists of two main stages: model construction and model interaction.
In the model construction stage, system descriptions are processed by an LLM-based workflow to extract DML logic, which organizes the system into hierarchical functions, components, and their relationships. This representation is then used to build a KG, which serves as the core system model. The KG can be further enhanced by incorporating expert knowledge and real-time operational data processed through Machine Learning (ML) or Deep Learning (DL) models, allowing it to infer and reflect system states and conditions dynamically. In the model interaction stage, an LLM agent determines the intention of the user and, based on the query, invokes one of the tools available to it. These tools are used to trace the KG to generate diagnostic insights. The results are then communicated to the user by the LLM.

Figure 2. LLM-Informed Diagnostic Framework

4.1. Model Construction

The development of the diagnostic model begins with processing textual system information, including manuals, system descriptions, and technical documentation. Preprocessing techniques such as text summarization and LLM-based Named Entity Recognition (NER) are applied to extract key details about system components, functions, and dependencies. These extracted elements are then used to derive a DML hierarchical structure that organizes the system into goals, functions, subfunctions, and components. This structured logic is subsequently translated into Cypher code, the query language used to construct the KG that represents the system's components and functional relationships. The resulting KG provides
a structured repository to support querying and reasoning for diagnostics and decision-making. In this research, the KG is deployed using Neo4j [33], a graph database platform that represents entities and their interconnections as property graphs, with attributes stored as node and relationship properties. Cypher enables the definition of system dependencies within the KG, linking components to their success conditions and subfunctions to their parent functions. All language-based tasks in this framework were performed using OpenAI's GPT-4 model through the ChatGPT API.

4.2. Model Interaction

Once the model is built, it needs to be used to generate diagnostic insights. This framework aims to go beyond simple queries by enabling deeper system analysis, addressing the fundamental diagnostic questions:

• What is happening? The model identifies current system conditions. For example, it can determine which components are degrading and which system functions are at risk.
• Why is it happening? By tracing system dependencies, the model identifies apparent root causes of failures and analyzes contributing factors.
• How will it impact the system? The model assesses failure propagation and risk severity, predicting how component failure will affect system operations.

To address these questions, the model enables cause-and-effect reasoning, allowing users to explore system behavior beyond basic retrieval. Instead of just fetching stored information, it supports queries such as:

• If certain components fail, how will it affect the overall system?
• What conditions must be met for a specific function to succeed?
• Which components are essential to maintaining system functionality?

To support this process, a set of predefined functions is made available to the LLM agent as tools for interfacing with the KG. These tools enable structured upward and downward tracing to analyze system dependencies and generate diagnostic insights.
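The predefined tools can be pictured as a small registry of callable functions that the agent dispatches by name with extracted arguments. The sketch below is a minimal, hypothetical version: the tool name "upward_trace", the dictionary-based KG, and the call format are illustrative assumptions for this paper's design, not its actual implementation.

```python
# Sketch: a minimal tool registry for the model-interaction stage.
# The LLM agent is assumed to emit a call of the form
# {"tool": ..., "args": {...}}; dispatch then runs the KG traversal.
# Tool names and signatures are illustrative, not the paper's API.

TOOLS = {}

def tool(fn):
    """Register a diagnostic function under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def upward_trace(kg, component):
    """Follow parent links to list every subfunction, function, and
    goal a failed component can affect (upward fault propagation)."""
    affected, frontier = [], [component]
    while frontier:
        node = frontier.pop()
        for parent in kg.get(node, []):
            affected.append(parent)
            frontier.append(parent)
    return affected

def dispatch(call, kg):
    """Execute a structured tool call produced by the LLM agent."""
    return TOOLS[call["tool"]](kg, **call["args"])

# Toy KG fragment: child -> list of parents (component up to goal).
kg = {"CST-1": ["Manage Condensation Tanks"],
      "Manage Condensation Tanks": ["Supply Feedwater"],
      "Supply Feedwater": ["Ensure safe operation"]}

print(dispatch({"tool": "upward_trace",
                "args": {"component": "CST-1"}}, kg))
```

The tool's return value would then be handed back to the LLM, which phrases it as a natural language explanation.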
When a user submits a query, the LLM interprets the intent and selects the appropriate tool to execute. The output is then passed back to the LLM, which produces a human-readable explanation. In addition to tool-based diagnostics, the model supports general system queries, such as explaining the system's hierarchy or functional structure. For these interpretive queries, the framework employs a Graph-RAG approach where relevant graph segments, such as goals or functions, are retrieved and embedded into the LLM's prompt to support natural language generation. Both approaches rely on the KG, but in complementary ways. Diagnostic tools perform structured graph traversals to compute logic-based results, while Graph-RAG enables the LLM to produce contextual explanations based on retrieved subgraphs. This dual strategy ensures the KG remains central to both reasoning and explanation.

5. Case Study

The proposed system illustrated in the Piping and Instrumentation Diagram (P&ID) of Figure 3 has been used as a case study to implement the proposed LLM-informed diagnostic framework. This P&ID represents a simplified auxiliary feedwater system of a nuclear power plant. The auxiliary feedwater system in a pressurized light water nuclear plant ensures the safe and efficient supply of emergency cooling water to the steam generators during unexpected plant transients.
The system is structured with the main goal defined as "Ensure safe and effective operation of the system". This goal is supported by four primary functions: "Supply Feedwater", "Control Water Flow", "Manage System Integration and Response", and "Provide Emergency and Automated Response".

Figure 3. P&ID of Simplified Auxiliary Feedwater System [34]

5.1. Model Construction

To construct a KG representing DML logic from a system description (which included only major components and functions), a structured prompt chaining workflow was implemented, with each stage handled by a dedicated LLM call. The hierarchy follows the DML structure, starting with a high-level goal and breaking down into functions, subfunctions, and components, each linked to success conditions. Logical relationships are shown using binary AND or OR logic gates, which define how lower-level elements contribute to achieving higher-level objectives.

The workflow begins with an initial LLM call that summarizes the system description and extracts goals, functions, subfunctions, components, and success conditions. The result is passed to a second LLM that converts this information into a structured JSON format aligned with the DML hierarchy. A third LLM then transforms the JSON into Cypher queries for KG construction. Each LLM call is followed by a gate, implemented as another LLM, that validates the output before the next stage proceeds. If validation fails, the workflow routes the input back to the relevant LLM for revision. The first gate checks for missing or incomplete information in the summary, including vague goals, incomplete function chains, or missing success criteria. The second gate validates the JSON structure by checking key formatting, nesting, and logical gate consistency. The third gate examines the generated Cypher queries to ensure they are syntactically correct.
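To illustrate the kind of output the third stage targets, the sketch below walks a toy DML hierarchy (the JSON-style structure the second LLM might produce) and emits Cypher MERGE statements. The node labels (Goal, Function, Component), the ACHIEVED_BY relationship, and the gate property are illustrative assumptions, not the schema actually used in this work.

```python
# Sketch: translate a toy DML hierarchy (a JSON-style dict) into
# Cypher MERGE statements for Neo4j. Labels, the ACHIEVED_BY
# relationship, and the gate property are assumptions for
# illustration, not the paper's schema.

def dml_to_cypher(node, parent=None, gate=None, statements=None):
    """Depth-first walk: one MERGE per node, one per parent edge,
    carrying the parent's AND/OR gate on the relationship."""
    if statements is None:
        statements = []
    name = node["name"].replace("'", "\\'")
    statements.append(f"MERGE (:{node['type']} {{name: '{name}'}})")
    if parent is not None:
        statements.append(
            f"MATCH (p {{name: '{parent}'}}), (c {{name: '{name}'}}) "
            f"MERGE (p)-[:ACHIEVED_BY {{gate: '{gate}'}}]->(c)"
        )
    for child in node.get("children", []):
        dml_to_cypher(child, node["name"], node.get("gate", "AND"),
                      statements)
    return statements

toy_dml = {
    "type": "Goal", "name": "Ensure safe operation", "gate": "AND",
    "children": [
        {"type": "Function", "name": "Supply Feedwater", "gate": "OR",
         "children": [{"type": "Component", "name": "CST-1"}]},
    ],
}

cypher = dml_to_cypher(toy_dml)
print("\n".join(cypher))
```

In the actual workflow this translation is performed by an LLM and then checked by the third validation gate; a deterministic generator like this one shows only the shape of the expected result.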
This gated prompt chaining design improves consistency, filters out errors early, and manages variability in LLM outputs. It is especially effective when task steps are clearly defined and expected outputs are explicitly structured. Figure 4 illustrates the prompt chaining workflow described above, showing the sequential LLM tasks and validation gates leading from the system description to the final KG-DML output. Each task is followed by an LLM-based gate, which ensures the correctness of the output before advancing to the next stage. Feedback loops are included to allow correction and regeneration when validation fails. The specific prompts used for each LLM call in this workflow are provided in the Appendix to this paper.

Figure 4. Model Construction Implementation Through LLM-Based Workflow

The KG representing the DML logic is structured hierarchically, starting from a high-level goal and descending through functions, subfunctions, components, and finally success conditions. Logical gates, such as AND or OR, define how each level contributes to achieving the level above. A goal may be achieved by multiple functions, each of which depends on one or more subfunctions that require specific components to operate successfully. At the lowest level of the hierarchy, components are connected to success conditions through additional gates. Success conditions reflect observable or measurable outcomes that confirm whether a component is performing as
intended. Attributes are stored within the nodes themselves and may include expert knowledge or information derived from ML or DL models based on operational data or manual inspections. For example, a component such as a turbine-driven pump may contain attributes indicating the probability of being in various states, such as operational, degraded, or failed. Attributes may also be present in higher-level nodes, such as functions or goals. Their role and interpretation are discussed in the next section, Model Interaction. This hierarchical structure supports reasoning, traceability, and consistency throughout the KG-DML representation.

Figure 5 illustrates how the DML model is represented within the KG. For example, the subfunction "Manage Condensation Tanks" is fulfilled only if all three Condensation Storage Tanks (CSTs) operate successfully, as defined by an AND gate. Each tank must meet two success conditions: maintaining an appropriate water level and ensuring the absence of excessive sediment. This same hierarchical logic applies when tracing the model upward through functions and system-level goals.

Figure 5. KG Reflecting the DML Model

5.2. Model Interaction

As shown in Figure 2, model interaction begins when a user submits a natural language query to the system. The LLM agent interprets the query and selects from a set of predefined diagnostic functions, available to it as tools. These tools perform specialized tasks such as upward fault tracing and generation of success path sets. Each tool is implemented as an external code module that analyzes the KG based on the logical structure derived from the model. The LLM uses these tools to carry out structured reasoning over the graph and generate diagnostic insights. To enable accurate selection, the agent was fine-tuned on a dataset consisting of diverse user queries paired with their corresponding tool calls.
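A few entries of such a dataset might look like the following. The phrasings, tool names, and call format here are invented for illustration; the actual fine-tuning data and tool schema are not given in the paper.

```python
# Sketch: hypothetical (query -> tool call) pairs of the kind used
# to fine-tune the agent. Tool names are invented for illustration.
fine_tune_examples = [
    {"query": "What happens if CST-1 fails?",
     "call": {"tool": "upward_trace", "args": {"component": "CST-1"}}},
    # A different phrasing of the same diagnostic intent maps to the
    # same tool call, teaching the agent semantic robustness.
    {"query": "Trace the impact of losing the first condensation tank.",
     "call": {"tool": "upward_trace", "args": {"component": "CST-1"}}},
    {"query": "Which paths keep the feedwater supply working?",
     "call": {"tool": "success_paths",
              "args": {"function": "Supply Feedwater"}}},
    # An interpretive query, answered via KG retrieval rather than a
    # structured diagnostic tool.
    {"query": "Explain the system's functional hierarchy.",
     "call": {"tool": "graph_rag", "args": {"topic": "hierarchy"}}},
]

print(len(fine_tune_examples), "training pairs")
```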
This included both diagnostic queries requiring tool invocation and interpretive queries requiring Cypher query generation for KG retrieval. This dataset was manually constructed to include multiple phrasings, semantic variations, and tones in which users might pose the same diagnostic intent. These examples were then used to guide the fine-tuning process so that the model learns to map a wide range of natural language inputs to the appropriate tool.

In the implemented tool for upward propagation, the success probability for each success condition $j$ associated with a component is first evaluated. This is done using Equation 1, which computes the probability as a weighted sum over the component's possible operational states. Each term combines the likelihood of the component being in state $i$ with the probability that it fulfills success condition $j$ in that state. A state refers to a possible condition of a component, such as operational, degraded, or failed. Each state influences the component's ability to fulfill its associated success conditions. For example, consider a CST in the auxiliary feedwater system. One possible state of the CST is "failed", which may be inferred from sensor data. The success condition for the CST could be defined as "maintains sufficient water level for feedwater supply." If operational
data indicates a high probability that the CST is in a failed state, the likelihood of satisfying this success condition would be correspondingly low. This affects the overall success probability of the subfunction "Manage Condensation Tanks," which depends on all CSTs through an AND gate. Thus, the failure of even one CST can reduce the success probability of the higher-level function and system goal.

$$P(\mathrm{Success}_j \mid \mathrm{Data}) = \sum_{i=1}^{N} P(\mathrm{Success}_j \mid \mathrm{State}_i)\, P(\mathrm{State}_i \mid \mathrm{Data}) \quad (1)$$

• $N$: Total number of operational states for a component.
• $P(\mathrm{Success}_j \mid \mathrm{State}_i)$: The probability that the component fulfills success condition $j$ while in state $i$.
• $P(\mathrm{State}_i \mid \mathrm{Data})$: The probability of the component being in state $i$ given the data, which can be evidence of events or numerical information.

After evaluating $P(\mathrm{Success}_j \mid \mathrm{Data})$ for all success conditions associated with each component in the system, the results are aggregated using logical gates to compute a single success probability for each component. Success probability represents how well the component is fulfilling its intended function. It is based on the combined satisfaction of all defined success conditions. Each condition reflects a specific performance indicator. The aggregation of these conditions provides a quantitative measure of the component's overall operational effectiveness.

The KG stores each element of Equation 1 as attributes within component nodes, including both the conditional success probabilities and the state likelihoods derived from data. In the context of engineering diagnostics, this data may include sensor readings (e.g., temperature, pressure, vibration), event logs, failure reports, and maintenance histories. These attributes serve as the basis for upward propagation and are retained in the KG to support traceability and diagnostic reporting.
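As a minimal sketch, Equation 1 and the subsequent gate-based aggregation can be implemented as follows. All probability values are made-up illustrative numbers, and the AND/OR combination rules (product of successes, complement of all inputs failing) assume independent inputs; the paper's actual propagation logic is given by its Figure 6 pseudocode, which is not reproduced here.

```python
from math import prod

def success_prob(p_success_given_state, p_state_given_data):
    """Equation 1: P(Success_j|Data) = sum_i P(Success_j|State_i) * P(State_i|Data)."""
    return sum(p_success_given_state[s] * p_state_given_data[s]
               for s in p_state_given_data)

def combine(gate, probs):
    """Gate aggregation under an independence assumption:
    AND multiplies the success probabilities; OR is the complement
    of every input failing."""
    return prod(probs) if gate == "AND" else 1.0 - prod(1.0 - p for p in probs)

# CST example: sensor data strongly suggests the "failed" state, so the
# "maintains sufficient water level" condition becomes unlikely.
p_cond_given_state = {"operational": 0.99, "degraded": 0.60, "failed": 0.05}
p_state_given_data = {"operational": 0.10, "degraded": 0.10, "failed": 0.80}
water_level = success_prob(p_cond_given_state, p_state_given_data)  # 0.199
sediment_ok = 0.95  # assumed value for the second success condition
cst_success = combine("AND", [water_level, sediment_ok])
print(round(cst_success, 3))  # 0.189
```

With the component's single success probability in hand, the same `combine` function can be reapplied level by level up the hierarchy, flagging any node whose result falls below a chosen threshold.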
Once a single success probability is determined for each component, these values are propagated upward through the DML hierarchy using additional logical gates. The gates define how component-level probabilities combine to determine the success of associated subfunctions, functions, and ultimately system-level goals. If the success probability of an upper-level node falls below a predefined threshold, the tool considers that node to be impacted. The corresponding logic that performs the probabilistic propagation is captured in the pseudocode shown in Figure 6.

To estimate $P(\mathrm{State}_i \mid \mathrm{Data})$, various strategies can be applied depending on data availability and system characteristics. In the absence of real-time sensor data, these probabilities can be derived from expert judgment or reliability reports, which provide baseline estimates of failure or degradation likelihoods. These priors can be refined as new operational data becomes available. When numerical indicators such as temperature, pressure, or vibration readings are accessible, ML or DL models trained on historical labeled data can be used to estimate state probabilities more dynamically. In systems requiring continuous monitoring and probabilistic inference under uncertainty, particle filtering techniques may be used. Particle filters apply a sequential Monte Carlo approach to approximate probability distributions using a set of weighted samples, enabling real-time Bayesian inference even in nonlinear or non-Gaussian conditions
[35], [36].

For downward propagation, given an upper-level node, the tool traces the KG downward to determine the required paths for achieving that node's success. Using the defined gates, it identifies the necessary dependencies at each level. The path-set generation method determines the minimal components required for system functionality by recursively traversing the KG. Starting from a specified node, the process follows dependencies downward until reaching the Component and Success Condition levels. At each step, the method evaluates the logical dependencies based on the gate type. If an AND gate is present, all dependencies must be met simultaneously, requiring a Cartesian product of the success path-sets from the child nodes to generate valid paths. In contrast, for an OR gate, only one dependency needs to succeed, so the success path-sets from the child nodes are aggregated without combination, representing alternative paths to success. This structured approach ensures that the generated success path-sets accurately reflect the minimal elements necessary to maintain system operability. The approach of downward propagation is formalized by the pseudocode in Figure 7.

Figure 6. Upwards Propagation Pseudocode

Figure 7. Downward Propagation Pseudocode

5.3. Interaction Interface

The diagnostic interface enables natural language interaction between the user and the system, allowing users to explore system behavior and fault scenarios. As illustrated in Figure 8, users can ask questions such as the impact of a specific component failure or how a given function can succeed. When queried about the impact of failure of a CST, the LLM agent invokes the upward propagation tool to trace the impact across the system hierarchy. Because the CSTs are connected through an AND gate, the success of higher-level nodes depends on the simultaneous functionality of all CSTs.
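The path-set generation described above, with AND gates taking Cartesian products of child path-sets and OR gates aggregating them as alternatives, can be sketched as a short recursion. The `(name, gate, children)` tuple shape and the "Deliver Flow" node are illustrative assumptions, not the KG schema or the actual auxiliary feedwater model.

```python
from itertools import product

def success_paths(node):
    """Recursively enumerate minimal success path-sets for a node."""
    name, gate, children = node
    if not children:               # leaf: a component / success condition
        return [[name]]
    child_sets = [success_paths(c) for c in children]
    if gate == "AND":              # all children must succeed: Cartesian product
        return [sum(combo, []) for combo in product(*child_sets)]
    else:                          # OR: any single child suffices: aggregate
        return [path for paths in child_sets for path in paths]

leaf = lambda n: (n, None, [])
supply = ("Supply Feedwater", "AND",
          [("Manage Condensation Tanks", "AND",
            [leaf("CST-1"), leaf("CST-2"), leaf("CST-3")]),
           ("Deliver Flow", "OR",
            [leaf("Motor-Driven Pump"), leaf("Turbine-Driven Pump")])])
for path in success_paths(supply):
    print(path)
```

For this toy hierarchy the recursion yields exactly two minimal path-sets: all three CSTs together with either pump, reflecting the AND/OR semantics described in the text.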
Therefore, the failure of even a single CST significantly reduces the probability of success for the related subfunctions, functions, and overall system goals. Conversely, when asked about the success conditions for a function like "Supply Feedwater", the agent employs downward tracing to identify all minimal success paths. Results are returned in a human-readable format, supporting transparent and intuitive diagnostic analysis without requiring technical familiarity with the underlying model.

Figure 8. Interaction Interface Example Containing User Sample Questions

6. Evaluation

The evaluation of the proposed framework was designed to assess both the structural accuracy of the KG generated from system documentation using the DML hierarchy and the effectiveness of the LLM agent in correctly interpreting and responding to diagnostic queries through tool invocation and knowledge retrieval.

6.1. KG Validation

To assess the accuracy of the model construction pipeline, we conducted five independent runs using the same system description for the auxiliary feedwater system. In each run, the framework automatically extracted elements of the DML model, including goals, functions, subfunctions, components, success conditions, and logical gates. These outputs were then manually validated. The validation involved examining the resulting KG-DML and cross-referencing its contents with the original system description. An element was considered correctly identified if it was both semantically relevant and
structurally consistent with the source material. Elements were labeled as hallucinated if they introduced information that was not present in the documentation or if they misrepresented relationships. The average results across the five runs are summarized in Table 1, which reports the ground truth element counts, the number of correctly extracted elements, the average number of hallucinated elements, and the corresponding extraction accuracy for each model component.

| KG Element | Ground Truth | Avg. Correct | Avg. Hallucinated | Extraction Accuracy (%) |
|---|---|---|---|---|
| Goals | 1 | 1 | 0 | 100.0 |
| Functions | 4 | 3.8 | 0.2 | 95.0 |
| Subfunctions | 9 | 8.6 | 0.4 | 95.6 |
| Components | 19 | 18.2 | 0.8 | 95.8 |
| Logical Gates (AND/OR) | 33 | 30.8 | 2.2 | 93.3 |
| Success Conditions | 39 | 37.2 | 1.8 | 95.4 |

Table 1. Average Validation of KG Elements Across 5 Runs

6.2. LLM Agent Query Evaluation

We evaluated the performance of the LLM agent in interpreting natural language queries and selecting the appropriate diagnostic tools or the knowledge-based retrieval mechanism. A test set comprising 60 queries was developed, with queries evenly distributed across three primary task types: upward reasoning, downward reasoning, and explanatory queries. The upward reasoning task involved diagnosing how faults propagate from component-level failures to higher-level system functions. The downward reasoning task focused on identifying the minimal set of components required to achieve a particular function or system goal. Explanatory queries required the agent to retrieve structural or functional information from the KG using a Graph-RAG method, in which relevant nodes and attributes are extracted via Cypher queries automatically generated by the LLM. The evaluation was conducted across five independent runs of the same 60-query dataset to account for variability in LLM outputs. Each query was assessed based on three criteria.
These included whether the agent correctly classified the task type, whether the tool or retrieval method was selected appropriately, and whether the extracted arguments or generated Cypher queries were correct. Argument extraction and Cypher generation accuracy were calculated only for correctly classified queries to ensure that execution quality reflects successful task interpretation.

| Task Type | Query Set Size | Avg. Correct Task Classification | Avg. Valid Tool/Query Input | Extraction Accuracy (%) |
|---|---|---|---|---|
| Upward Reasoning | 20 | 19.8 | 19.2 | 97.0 |
| Downward Reasoning | 20 | 19.6 | 19.6 | 100.0 |
| Explanatory Query | 20 | 20.0 | 19.2 | 96.0 |

Table 2. LLM Agent Evaluation by Task Type Across 5 Runs

The LLM agent demonstrated consistent performance across all query types. Averaged over five independent runs of a 60-query test set, the agent achieved high classification accuracy, correctly identifying the intended reasoning or retrieval task in nearly all cases. For reasoning tasks, the agent achieved an argument extraction accuracy of 97.0% for upward reasoning, with an average of 19.2 correct extractions out of 19.8 correctly classified queries per run. For downward reasoning, both classification and extraction averaged 19.6 per run, corresponding to an extraction accuracy of 100.0%. For explanatory queries, the agent correctly classified all 20 queries per run on average and successfully generated Cypher queries for 19.2 of them, resulting in a 96.0% accuracy following correct classification. These results demonstrate the agent's reliability in
distinguishing between diagnostic and interpretive tasks and its effectiveness in performing structured reasoning and knowledge retrieval based on the system model.

7. Conclusion and Outlook

7.1. Limitations and Challenges

Despite the demonstrated effectiveness of the proposed LLM- and KG-based diagnostic framework for automating DML model construction and enabling interactive fault analysis, several limitations remain. LLM outputs, while powerful, are variable and may generate hallucinated or incomplete structures even when guided by validation gates and the prompt-chaining workflow. This highlights the need for a human-in-the-loop process, where domain experts review and revise the automatically generated DML models to ensure logical consistency and domain accuracy. The framework also assumes access to well-structured documentation and reliable operational or historical data, which may not always be available in practice. Although the proposed architecture reduces manual effort, expert oversight remains essential.

Furthermore, while the evaluation reports high element-level extraction accuracy, it does not capture the semantic impact of missing critical nodes. In DML models, elements such as gates, subfunctions, or success conditions are often essential to maintaining the integrity of fault propagation paths. The omission of even a single high-impact node can break logical chains and lead to incomplete or misleading diagnostics. This limitation suggests a need for broader evaluation strategies that assess whether generated models preserve full diagnostic reasoning capabilities. The current validation also relies on a curated query set based on typical expert interactions, which, while practical, may not reflect edge cases, ambiguous phrasing, or linguistic variation. More comprehensive testing involving adversarial queries, paraphrased inputs, and real-user feedback will be necessary to improve the robustness and generalizability of the system.
Lastly, adapting the framework to domains beyond nuclear diagnostics may require customized prompts and tool modifications, which could limit its immediate applicability elsewhere. Additionally, while the framework has been demonstrated on a moderately scoped system, its performance and scalability in large-scale, highly complex systems remain untested. The current case study involves a relatively simple subsystem with limited components and interactions. Future work should investigate how the framework performs when applied to large, mission-critical systems with deeply nested hierarchies, extensive dependencies, and real-time operational data. Understanding the computational requirements, performance bottlenecks, and accuracy trade-offs in such settings will be essential for broader adoption.

7.2. Conclusion

The integration of LLMs and KGs into diagnostic modeling marks a significant advancement in automating complex system analysis. This research introduces a scalable, AI-driven framework that streamlines the generation of structured diagnostic models and enhances predictive accuracy and fault reasoning through natural language interaction. By reducing reliance on manual modeling and enabling structured, explainable diagnostics, the approach adapts effectively to evolving system configurations. Depending on the query type, system knowledge from the KG is either processed through diagnostic tools or embedded into the LLM prompt, as previously described. This enables the LLM to generate responses that are both context-aware and grounded in system logic. The framework also facilitates human-AI collaboration by allowing
users to interact with system behavior through intuitive queries, lowering the technical barrier to advanced diagnostics. It extends the utility of functional modeling techniques such as DML, which have traditionally required extensive domain expertise and manual effort. While demonstrated on a nuclear power application, the framework is broadly applicable to other mission-critical domains, including advanced manufacturing systems such as integrated circuit fabrication facilities. Designed for continual refinement using operational feedback and real-time data, the framework is positioned to support high-reliability industries with improved diagnostics, proactive risk management, and system resilience.

The proposed framework was validated through comprehensive evaluations of both KG construction and LLM-based query interpretation. Across five independent runs, the system consistently produced accurate DML-based models, achieving over 90% extraction accuracy for critical logic structures such as gates and success conditions, and perfect identification of high-level goals. The LLM agent demonstrated strong performance across 60 diagnostic and explanatory queries, with high classification accuracy, reliable tool invocation, and consistent argument extraction. These results validate the framework's ability to generate interpretable, graph-based diagnostics directly from unstructured documentation. Overall, this work establishes a solid foundation for next-generation diagnostic systems that combine natural language interaction with structured reasoning. The implementation code and supplementary materials for the proposed framework are available at: https://github.com/s-marandi/LLM-Based-Complex-System-Diagnostics

7.3. Future Work

Future work will focus on advancing the evaluation and validation of automatically generated DML models.
Building on the current framework, which includes multi-run assessments of extraction accuracy and hallucination rates, future evaluations will incorporate deeper semantic validation techniques. One direction involves benchmarking generated cut-sets and path-sets against expert-engineered baselines to assess both logical soundness and coverage. In addition to element-level accuracy, graph-level structural metrics such as connectivity fidelity, dependency correctness, and fault propagation traceability will be introduced. Evaluation will also extend to the robustness of LLM behavior under varied prompt conditions and alternative phrasings. Autonomous LLM agents will be further explored for iterative model refinement, leveraging feedback loops to enhance precision, reduce hallucinations, and adaptively correct errors over time, moving toward scalable, self-improving diagnostic model generation.

Advanced diagnostic capabilities will also be developed, including probabilistic assessments of component criticality in cases where partial functionality can be tolerated. This will enable the framework to recommend optimal maintenance or mitigation strategies under uncertainty. As systems grow more complex, enhancements will focus on integrating real-time operational data and applying data fusion techniques to combine inputs such as sensor readings, maintenance records, and expert observations into the KG. To better model dynamic behavior, future efforts will introduce more sophisticated gating mechanisms capable of capturing time-dependent relationships and evolving system states. The model construction process will also be extended with advanced NER techniques to improve the precision and depth of information extracted from technical documentation. For systems with lengthy descriptions that exceed a single LLM context window, more effective text chunking and summarization strategies will be implemented, along with models that
support larger token capacities. Together, these enhancements will further align the framework with the demands of real-time diagnostics in safety-critical, complex engineered systems.

References

[1] U. Yildirim, F. Campean, and H. Williams, "Function modeling using the system state flow diagram," Artif. Intell. Eng. Des. Anal. Manuf., vol. 31, no. 4, pp. 413–435, Nov. 2017, doi: 10.1017/S0890060417000294.
[2] M. Modarres, R. Irehvije, and M. Lind, "A Comparison of Three Functional Modeling Methods," Jun. 1995.
[3] Y.-S. Hu and M. Modarres, "Time-dependent system knowledge representation based on dynamic master logic diagrams," Control Eng. Pract., vol. 4, no. 1, pp. 89–98, Jan. 1996, doi: 10.1016/0967-0661(95)00211-5.
[4] A. Vaswani et al., "Attention Is All You Need," Aug. 02, 2023, arXiv: arXiv:1706.03762, doi: 10.48550/arXiv.1706.03762.
[5] A. Hogan et al., "Knowledge Graphs," 2020, doi: 10.48550/ARXIV.2003.02320.
[6] C. A. Ericson, Hazard Analysis Techniques for System Safety, 2nd ed. Hoboken, NJ: Wiley, 2016.
[7] M. Stamatelatos and W. E. Vesley, "Fault tree handbook with aerospace applications," 2002. [Online]. Available: https://api.semanticscholar.org/CorpusID:61105226
[8] R. Mareş and M. P. Stelea, "The application of event tree analysis in a work accident at maintenance operations," MATEC Web Conf., vol. 121, p. 11013, 2017, doi: 10.1051/matecconf/201712111013.
[9] J. D. Andrews and S. J. Dunnett, "Event-tree analysis using binary decision diagrams," IEEE Trans. Reliab., vol. 49, no. 2, pp. 230–238, Jun. 2000, doi: 10.1109/24.877343.
[10] H.-L. Zhu, S.-S. Liu, Y.-Y. Qu, X.-X. Han, W. He, and Y. Cao, "A new risk assessment method based on belief rule base and fault tree analysis," Proc. Inst. Mech. Eng. Part O J. Risk Reliab., vol. 236, no. 3, pp. 420–438, Jun. 2022, doi: 10.1177/1748006X211011457.
[11] P. Weber and L. Jouffe, "Complex system reliability modelling with Dynamic Object Oriented Bayesian Networks (DOOBN)," Reliab.
Eng. Syst. Saf., vol. 91, no. 2, pp. 149–162, Feb. 2006, doi: 10.1016/j.ress.2005.03.006.
[12] I. S. Kim and M. Modarres, "Application of goal tree-success tree model as the knowledge-base of operator advisory systems," Nucl. Eng. Des., vol. 104, no. 1, pp. 67–81, Oct. 1987, doi: 10.1016/0029-5493(87)90304-9.
[13] J. Wu, X. Zhang, M. Song, and M. Lind, "Challenges in Functional Modelling for Safety and Risk Analysis," in Proceedings of the 33rd European Safety and Reliability Conference, Research Publishing Services, 2023, pp. 1892–1899, doi: 10.3850/978-981-18-8071-1_P132-cd.
[14] Y.-S. Hu and M. Modarres, "Evaluating system behavior through Dynamic Master Logic Diagram (DMLD) modeling," Reliab. Eng. Syst. Saf., vol. 64, no. 2, pp. 241–269, May 1999, doi: 10.1016/S0951-8320(98)00066-0.
[15] M. Modarres and S. W. Cheon, "Function-centered modeling of engineering systems using the goal tree–success tree technique and functional primitives," Reliab. Eng. Syst. Saf., vol. 64, no. 2, pp. 181–200, May 1999, doi: 10.1016/S0951-8320(98)00062-3.
[16] Y.-S. Hu and M. Modarres, "Logic-Based Hierarchies for Modeling Behavior of Complex Dynamic Systems with Applications," in Fuzzy Systems and Soft Computing in Nuclear Engineering, vol. 38, D. Ruan, Ed.,
in Studies in Fuzziness and Soft Computing, vol. 38, Heidelberg: Physica-Verlag HD, 2000, pp. 364–395, doi: 10.1007/978-3-7908-1866-6_17.
[17] M. Modarres, "Functional modeling of complex systems with applications," in Annual Reliability and Maintainability Symposium, 1999 Proceedings (Cat. No.99CH36283), Washington, DC, USA: IEEE, 1999, pp. 418–425, doi: 10.1109/RAMS.1999.744153.
[18] Y. F. Li, S. Valla, and E. Zio, "Reliability assessment of generic geared wind turbines by GTST-MLD model and Monte Carlo simulation," Renew. Energy, vol. 83, pp. 222–233, Nov. 2015, doi: 10.1016/j.renene.2015.04.035.
[19] Z. Hao, F. Di Maio, and E. Zio, "A sequential decision problem formulation and deep reinforcement learning solution of the optimization of O&M of cyber-physical energy systems (CPESs) for reliable and safe power production and supply," Reliab. Eng. Syst. Saf., vol. 235, p. 109231, Jul. 2023, doi: 10.1016/j.ress.2023.109231.
[20] F. D. Maio, "Simulation-Based Goal Tree Success Tree for the Risk Analysis of Cyber-Physical Systems."
[21] C. Guo, S. Gong, L. Tan, and B. Guo, "Extended GTST-MLD for Aerospace System Safety Analysis," Risk Anal., vol. 32, no. 6, pp. 1060–1071, Jun. 2012, doi: 10.1111/j.1539-6924.2011.01718.x.
[22] M. Modarres and N. Kececi, Software Development Life Cycle Model to Ensure Software Quality, 1998.
[23] R. N. M. Hunt and M. Modarres, "Integrated Economic Risk Management in a Nuclear Power Plant," in Uncertainty in Risk Assessment, Risk Management, and Decision Making, V. T. Covello, L. B. Lave, A. Moghissi, and V. R. R. Uppuluri, Eds., Boston, MA: Springer US, 1987, pp. 435–443, doi: 10.1007/978-1-4684-5317-1_34.
[24] M. Modarres, J. H. Zamanali, and J. Wang, "Applications of Master Plant Logic Diagram (MPLD) PC-Based Program in Probabilistic Risk Assessment," Feb. 1991.
[25] A. J. Dave, T. N. Nguyen, and R. B. Vilim, "Integrating LLMs for Explainable Fault Diagnosis in Complex Systems," Feb.
08, 2024, arXiv: arXiv:2402.06695, doi: 10.48550/arXiv.2402.06695.
[26] H. A. A. M. Qaid, B. Zhang, D. Li, S.-K. Ng, and W. Li, "FD-LLM: Large Language Model for Fault Diagnosis of Machines," Dec. 02, 2024, arXiv: arXiv:2412.01218, doi: 10.48550/arXiv.2412.01218.
[27] J. Chen, J. Qian, X. Zhang, and Z. Song, "Root-KGD: A Novel Framework for Root Cause Diagnosis Based on Knowledge Graph and Industrial Data," Jun. 19, 2024, arXiv: arXiv:2406.13664, doi: 10.48550/arXiv.2406.13664.
[28] P. Wu, X. Mou, L. Gong, H. Tu, L. Qiu, and B. Yang, "An automatic machine fault identification method using the knowledge graph–embedded large language model," Int. J. Adv. Manuf. Technol., Apr. 2025, doi: 10.1007/s00170-025-15555-2.
[29] C. Cai, Z. Jiang, H. Wu, J. Wang, J. Liu, and L. Song, "Research on knowledge graph-driven equipment fault diagnosis method for intelligent manufacturing," Int. J. Adv. Manuf. Technol., vol. 130, no. 9–10, pp. 4649–4662, Feb. 2024, doi: 10.1007/s00170-024-12998-x.
[30] P. Liu, L. Qian, X. Zhao, and B. Tao, "Joint Knowledge Graph and Large Language Model for Fault Diagnosis and Its Application in Aviation Assembly," IEEE Trans. Ind. Inform., vol. 20, no. 6, pp. 8160–8169, Jun. 2024,
doi: 10.1109/TII.2024.3366977.
[31] T. Sun, F. Zeng, and X. Liu, "A Fault Analysis and Reasoning Method for Vehicle Information Systems Based on Knowledge Graphs," in 2024 IEEE 24th International Conference on Software Quality, Reliability, and Security Companion (QRS-C), Cambridge, United Kingdom: IEEE, Jul. 2024, pp. 926–933, doi: 10.1109/QRS-C63300.2024.00123.
[32] X. Xie, J. Wang, Y. Han, and W. Li, "Knowledge Graph-Based In-Context Learning for Advanced Fault Diagnosis in Sensor Networks," Sensors, vol. 24, no. 24, p. 8086, Dec. 2024, doi: 10.3390/s24248086.
[33] Neo4j, Inc., "Neo4j GitHub Repository," GitHub. [Online]. Available: https://github.com/neo4j/neo4j
[34] M. Modarres, M. Kaminskiy, and V. Krivtsov, Reliability Engineering and Risk Analysis: A Practical Guide, 3rd ed. Boca Raton: CRC Press, Taylor & Francis Group, 2017.
[35] J. Elfring, E. Torta, and R. Van De Molengraft, "Particle Filters: A Hands-On Tutorial," Sensors, vol. 21, no. 2, p. 438, Jan. 2021, doi: 10.3390/s21020438.
[36] A. Doucet, N. Freitas, and N. Gordon, Eds., Sequential Monte Carlo Methods in Practice. New York, NY: Springer New York, 2001, doi: 10.1007/978-1-4757-3437-9.

Appendix

Figure 9. Step 1 and Gate 1 Prompts
Figure 10. Step 2 and Gate 2 Prompts
Figure 11. Step 3 and Gate 3 Prompts
arXiv:2505.21298v1 [cs.MA] 27 May 2025

Large Language Models Miss the Multi-Agent Mark

Emanuele La Malfa1∗, Gabriele La Malfa2∗, Samuele Marro3, Jie M. Zhang2, Elizabeth Black2, Micheal Luck4, Philip Torr3, Michael Wooldridge1
1Department of Computer Science, University of Oxford
2Department of Informatics, King's College London
3Department of Engineering, University of Oxford
4University of Sussex

Abstract

Recent interest in Multi-Agent Systems of Large Language Models (MAS LLMs) has led to an increase in frameworks leveraging multiple LLMs to tackle complex tasks. However, much of this literature appropriates the terminology of MAS without engaging with its foundational principles. In this position paper, we highlight critical discrepancies between MAS theory and current MAS LLMs implementations, focusing on four key areas: the social aspect of agency, environment design, coordination and communication protocols, and measuring emergent behaviours. Our position is that many MAS LLMs lack multi-agent characteristics such as autonomy, social interaction, and structured environments, and often rely on oversimplified, LLM-centric architectures. The field may slow down and lose traction by revisiting problems the MAS literature has already addressed. Therefore, we systematically analyse this issue and outline associated research opportunities; we advocate for better integrating established MAS concepts and more precise terminology to avoid mischaracterisation and missed opportunities.

1 Introduction

The recent machine learning literature has seen an upsurge in popularity of Large Language Models (LLMs) used in coordination to solve complex tasks, a line of research that goes by the name of "Multi-Agent Systems of LLMs" (MAS LLMs) [42, 113]. In MAS LLMs, each LLM-agent specialises in a task to accomplish a goal.
A few examples of MAS LLMs use are software engineering [43], multi-robot planning [19], data analysis [92], scientific production, reasoning and debating [29, 111, 141], and social simulations [85], among many others. There is also an increasing interest in open-ended MAS LLMs, systems whose complex interactions give rise to human-like emergent behaviours [3, 38, 85]. However, labelling MAS LLMs as "Multi-Agent Systems" has already raised concerns in the scientific community [17]. Influential frameworks employed in MAS LLMs applications, such as ReAct [134], which in turn leverages methods like Chain of Thought [120], Tree of Thought [133], etc., are single-agent prompting techniques that overlook concurrency and shared states;2 at the same time, agentic frameworks developed by large companies are monolithic orchestrators that leverage (and only cite) machine learning research, taking little notice of decades of MAS research.3 While the term MAS LLMs was introduced in 2023 [110], the first MAS works date back to the 1980s and the 1990s [32, 103, 124]. In this sense, our position advocates for using precise scientific terminology and cautions against the risk of reinventing the wheel. For a relatively new field like that of MAS LLMs, failing to engage with the broader MAS literature may lead to overlooked insights and missed opportunities for meaningful advancement.

∗Equal contribution. Keep the correspondence to emanuele.lamalfa@cs.ox.ac.uk and gabriele.la_malfa@kcl.ac.uk
2https://gist.github.com/yoavg/9142e5d974ab916462e8ec080407365b

Preprint.

We articulate our position by identifying three core aspects of MAS that most MAS LLMs in the literature overlook or violate,
https://arxiv.org/abs/2505.21298v1
as well as an aspect related to benchmarking those systems. We criticise the notions of agents' social intelligence and environment as proposed in the MAS LLMs literature (Sections 2 and 3), and discuss what is missing in terms of coordination and communication (Section 4). Further, we observe that the interest in open-ended environments and emergent behaviours is not supported by benchmarks to define, identify or measure such emergence; the results are primarily descriptive and risk over-inflating arguments for LLMs' general and super-intelligence (Section 5). We summarise each point below, then state our position.

I. Social intelligent agents: LLM agents lack native social behaviour. MAS agents populate an environment and receive high-level goals to fulfil. Such goals necessitate the realisation, specification, and completion of other possibly unanticipated sub-tasks. In this context, a high-level goal tests an agent's reactivity, proactiveness, and social abilities [124]. A reactive agent dynamically perceives the environment and takes the initiative to satisfy its design objectives. In doing so, a social agent is capable of interacting and intelligently competing with other agents. In the context of LLMs, agents are both reactive and proactive: they exhibit remarkable adaptability to changes of an input prompt (which accounts for their environment, in most cases), and can be trained to take the initiative on how to split a task into its elementary components and then complete them, as shown by frameworks such as AutoGPT [131]. In the current MAS LLM literature, we highlight the lack of consideration given to the social aspects of reactivity and proactiveness.
While collaborating and competing are prerequisites of any MAS, most LLMs are fine-tuned as single agents, with no proper multi-agent pre-training procedure [54].3 In other words, LLMs are trained in isolation to respond to users’ requests, rather than to interact with each other. This leads to poor performance and unexpected failures when LLMs are benchmarked on identifying other agents’ beliefs, desires and intentions, i.e., on Theory of Mind problems [100, 115]. These simplifications limit the field’s potential and risk misdirecting current efforts toward MAS when, in some cases, the problem may be better addressed by aggregation methods that combine many independent components, such as ensembles [30, 53, 65, 107].4 In Section 2, we expand on these arguments.

II. Environment design: MAS LLMs environments are LLM-centric. MAS traditionally model the environment with no strong assumption on the architecture or configuration of the agents that populate it [23, 86, 126]. Conversely, MAS LLMs subvert this approach: the environment design assumes the agents are LLMs that communicate and coordinate via natural language. This section warns that this assumption overlooks the intrinsic limitations of LLMs and argues for addressing such problems by designing MAS environments that are not LLM-centric. Consider environment predictability: for an environment that is meant to be deterministic, LLMs are inherently not [58, 83]; thus, a fully deterministic environment cannot exist when involving LLMs, providing little control over procedures one wants to guarantee to be safe or to terminate in a
specific state. Further, agents should receive and consistently maintain a unique representation of the environment [31, 90], particularly when settings are dynamic and partially observable [81]. To foster effective cooperation and competition, that representation should not reduce to the environment but also include representing other agents’ actions, beliefs and intentions. Unfortunately, LLMs are not only known to struggle with inferring beliefs and intentions [97, 115], a long-standing challenge in the MAS literature [93, 102], but also with maintaining a consistent representation of the environment, due to issues like hallucinations and memory persistency. Another critical aspect of MAS is how the environment is represented and then perceived by the agents. In most MAS LLMs settings, the environment is textual or translated as such, with text being the medium of reference for the largest majority of open- and closed-source LLMs. Storing such representations may rapidly exceed a model’s context length and cause hallucinations [61, 67]. To realise LLMs as MAS agents that work in coordination with humans and other non-LLM agents, we should design open, multi-modal environments that address their lack of long-term memory persistency [140], their non-determinism and propensity to hallucinations [58], and the costs and intrinsic ambiguity of natural language as a storage and communication medium [76]. Section 3 expands on these issues.

3 https://www.kaggle.com/whitepaper-agents , https://www.anthropic.com/engineering/building-effective-agents , https://openai.com/index/new-tools-for-building-agents/
4 The authors of “More agents is all you need” make explicit that their technique is an ensemble, as per their Figure 1 [65].

III. Coordination and communication issues in MAS LLMs. Interaction among agents plays a crucial role in enabling intelligent, decentralised coordination.
However, current MAS LLMs overlook several critical aspects of MAS, including synchronised coordination, concurrent systems, and communication methods. A prototypical scenario in a concurrent MAS is an agent that processes data generated, at random intervals, by another agent. An error may occur when the first agent receives new data before it has finished processing a previous batch. Standard solutions include buffering the data or a mechanism that informs the other system to send the data later. Asynchronicity is typically absent in MAS LLMs, as LLM agents often operate in strictly sequential or parallel pipelines, rather than as independent, concurrently operating agents. While some works are moving in this direction [39], we argue for leveraging the body of knowledge from the field of multi-agent concurrent systems, which has been active since the 1990s (see, for example, [60, 125]). Another fundamental aspect of MAS and MAS LLMs is communication between agents. Most of the MAS LLMs literature assumes agents communicate via natural language [88]. This is a simplification that overlooks the complexity of real agent interaction and decades of research in MAS communication systems and protocols [11, 48, 96]. Natural language communication is, in fact, inefficient and ambiguous: in contrast, MAS communication protocols in the form of structured languages (e.g., KQML [36] or Agent-Oriented Programming [103]) are consolidated performative standards that describe agents’ beliefs, commitments, and
actions. In line with the MAS literature, we argue for mechanisms to negotiate and implement communication methods that integrate the principles of speech acts [6, 104] and Gricean maxims [41] to minimise the cost of communication and maximise its effectiveness. We expand on these points in Section 4.

IV. MAS LLMs do not quantify the emergence of complex behaviours. Several recent works considered MAS LLMs as open-ended systems [3, 85, 128], i.e., scenarios where the long-term evolution of the system produces complex behaviours and solutions [112, 117]. Intuitively, large-scale, long-term interactions would play a similar role as size (the so-called “scaling laws for LLMs” [52]) in shaping complex behaviour and dynamics that would not emerge otherwise. Currently, despite growing hype around the potential of LLMs in MAS contexts, which often exceeds their demonstrated abilities [98], we foresee the risk of a birth and death of interest in emergent behaviours in LLMs. Emergence alone, primarily when arising from loosely constrained prompts and undefined interaction dynamics, and further compromised by hallucinations and memory issues in LLMs, does not justify claims of MAS-level coordination or long-horizon planning. These factors make distinguishing between genuine coordination and coincidental or spurious outputs increasingly difficult. We therefore argue that the evaluation of emergent behaviours in LLMs should rely on quantifiable metrics rooted in established MAS research [8, 33, 56, 84]. We elaborate on these concerns and devise a research path in Section 5. In summary, our position in this paper is that:

Paper Position: Current MAS LLMs often fail to embody fundamental multi-agent system characteristics, such as autonomy, social interaction, and structured environments, by overemphasising the role of LLMs and overlooking solutions that already exist in the MAS literature.
2 Social Intelligent Agents: LLM Agents Lack Native Social Behaviour

In MAS, an agent is commonly defined as intelligent if it is reactive, proactive, and shows social behaviours [94, 123]. While LLMs are reactive and proactive, this section discusses why they often fail to be social agents and where this discrepancy originates. For any agent, the prerequisite of reactiveness is perception, i.e., interpreting an input to take an associated action. LLMs are reactive agents, i.e., they respond to stimuli from a changing environment, with the input being directly injected into the prompt [99, 105, 132] or retrieved via external information [51, 109]. The proactiveness of an LLM is its capacity to initiate tasks with limited or no human intervention [72]. Independence eventually arises in LLMs only when the model can self-generate the prompt or self-inject information [82]. Recent works have focused on proactive LLMs that ask the user for clarification on the instructions provided, retrieve additional information from external resources, or are fine-tuned to anticipate the next actions [136, 139]. The third characteristic of an intelligent agent is the capacity to behave socially, thus to compete or cooperate [21, 124]. Competition and cooperation arise from the capacity to be both reactive and proactive, i.e., to react to other agents’ actions and initiate new ones. Crucially, agents
must be socially reactive and proactive, i.e., able to grasp and reason about other agents’ goals, negotiate with them, and even enlist their cooperation when needed. The literature on LLMs includes a substantial body of works aimed at developing competitive and cooperative systems of LLMs [71]. In most of them, the agents’ roles are scripted through prompting [29, 79] or fine-tuning [73, 108, 130], but the agents are not natively trained to cooperate or compete with one another. A recent survey identified that 37% of the failure cases of MAS LLMs are errors caused by inter-agent misalignment or agents’ coordination issues [14] (Figure 2). In the same spirit, [62] shows how multi-agent reinforcement learning agents outperform LLMs at planning tasks, highlighting the role of pre-training. Concurrently, a growing body of research in Machine Theory of Mind (ToM) [89] evidences how LLMs struggle to express their own beliefs, desires, and intentions [27, 91], and those of other agents [106, 115, 116]. Furthermore, when multiple LLMs interact, their specialisation, achieved through different prompts or fine-tuning, often aims to maximise a single objective such as accuracy, at the expense of other desirable behavioural properties. As a consequence, several MAS LLMs reduce to aggregation mechanisms like majority vote [30, 53, 65, 107]. The lack of native social behaviour, the tendency to converge to ensembles of LLMs rather than developing concurrent strategies, and the well-known issues of LLMs with ToM all contribute to LLMs not being natively interactive agents; in most agentic tools,3 the environment makes them cooperative (e.g., via an orchestrator, workflows, or by prompting them with each other’s output).
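The reduction to aggregation mechanisms can be made concrete: in a majority vote over independently produced answers, no agent ever observes another agent's output, so there is no interaction to speak of. A minimal sketch (the hard-coded answers stand in for independent LLM calls, purely for illustration):

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate independent answers by majority vote.

    No agent ever sees another agent's output: this is an
    ensemble, not a multi-agent system.
    """
    counts = Counter(answers)
    winner, _ = counts.most_common(1)[0]
    return winner

# Three "agents" answer the same question in isolation.
independent_answers = ["Paris", "Paris", "Lyon"]
print(majority_vote(independent_answers))  # -> Paris
```

Whatever roles the prompts assign, the aggregation step treats the models as interchangeable voters, which is precisely the ensemble behaviour described above.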
To summarise, this section points out that most frameworks rely on orchestrators and workflows to direct the LLMs’ behaviour, with interactions that occur as a by-product of initial instructions. From a MAS perspective, LLMs’ reactivity and proactiveness are not socially directed. Instead, in the following paragraph, we argue that social agents should be natively trained to interact, collaborate, and cooperate with other entities such as LLMs, humans, other algorithms and tools, etc.

Research directions. We argue for LLMs whose pre-training phase encompasses cooperation and competition in different scenarios. As LLMs are trained on textual corpora to approximate the distribution of human language, we should consider teaching agents the basics of multi-agent cooperation and competition.

Figure 1: We categorise the environments of approximately 100 MAS LLMs Benchmarks & Evaluations papers published between 2023 and 2025. For those papers that describe or allow inference of their environment characteristics (Section 3), we present the data using wheel plots. See the papers list in Appendix A.

Recent advances in the field allow training LLMs directly on the textual feedback they receive from other models [135]; that, alongside cooperative or competitive reinforcement learning, where agents are trained not only to complete tasks but also to adapt to and influence each other’s behaviour, can serve as a path towards MAS in which LLMs show reactive and proactive behaviours beyond responding to prompts. An agent trained in this way would
learn to respond appropriately while also interpreting, anticipating, and reacting to the actions of other agents, based on feedback from their interactions. In conclusion, while fine-tuning has proven promising for specialising and assigning roles to LLMs [68, 73, 108], we argue it alone is insufficient to provide them with the capacity to cooperate and compete.

3 Environment Design: MAS LLMs Environments are LLM-centric

As anticipated in the Introduction, MAS traditionally model the environment with few assumptions regarding the architectures of the agents that will populate it. On the other hand, MAS LLMs centre around LLM-powered agents, overlooking interoperability with agents that do not conform to them. Furthermore, the intrinsic limitations of LLMs, such as their proneness to hallucinations [69, 143], lack of determinism, and limited long-term memory persistency [83], may hinder the development of the field, in particular in scenarios where safety and time (as measured in seconds [123]) are paramount. In MAS, an environment is the external context that agents perceive and react to, and can be characterised across five dimensions [94, 122]. Observability determines the amount of information an agent has about the environment and other agents. Predictability states an environment’s level of determinism (from fully deterministic to stochastic). Temporality concerns the dependency between successive environment states. Evolution determines how the environment changes with time (i.e., discrete or continuous). Manipulation represents the degrees of freedom agents have in interacting with and modifying the environment. Of around 110 MAS LLMs articles published between 2023 and 2025, whose results we summarise in Figure 1, most MAS LLMs operate in partially observable, deterministic, temporally dynamic, discrete environments [66, 77] (further details on the papers are in Appendix A).
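The five dimensions above lend themselves to a small, explicit schema; a sketch of how a surveyed paper's environment could be tagged is below (the field values are ours, chosen to mirror the typical profile just reported; the dimensions follow [94, 122]):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentProfile:
    """One tag per environment dimension, as discussed in Section 3."""
    observability: str   # "full" or "partial"
    predictability: str  # "deterministic" through "stochastic"
    temporality: str     # dependency between successive states
    evolution: str       # "discrete" or "continuous"
    manipulation: str    # agents' freedom to modify the environment

# The profile most surveyed MAS LLMs papers report (Figure 1).
typical_mas_llm = EnvironmentProfile(
    observability="partial",
    predictability="deterministic",
    temporality="dynamic",
    evolution="discrete",
    manipulation="modifiable",
)
print(typical_mas_llm.predictability)  # -> deterministic
```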
Furthermore, the vast majority of LLMs articles (see Figure 1, bottom-right) employ textual representations.5 In a fully observable setting, agents receive the same, shared representation of the environment; conversely, partially observable environments provide partial, unique representations to each agent. In MAS LLMs, observability, whether partial or full, comes with issues due to the nature of LLMs: these models notoriously fail at inferring other agents’ beliefs, desires, and intentions [97, 100, 115]. As discussed in Section 2, LLMs tend to integrate all the information they have access to without distinguishing between what they know and what they know other agents know, the so-called problem of kth-order beliefs [87]. In this sense, a centralised MAS LLMs setting where an LLM orchestrates and monitors the others may incur issues when attributing who did what or what an agent wants to achieve. Even systems with two LLMs suffer from these issues [61], as we later discuss in the last paragraph of this section.

5 As communication systems are not intrinsically part of the environment, we further elaborate on the non-sustainability of text as the preferred medium of communication in large networks of MAS LLMs in Section 4.

While most MAS LLMs papers assume the environment is deterministic and can be modified (i.e., it changes according to
the actions the LLMs make), the intrinsic non-determinism of LLMs undermines this setting. In terms of safety, one can design specific procedures, such as safety mechanisms and guardrail measures, that deterministically trigger when particular events happen; on the other hand, non-deterministic LLMs provide no guarantees they will behave accordingly [20]. Even setting an LLM’s temperature to zero or sampling deterministically is not a solution, as many LLMs are known to always behave non-deterministically [58, 83]. In two popular frameworks, Camel [61] and MetaAgent [67], the authors illustrate how their models hallucinate and underperform when MAS LLMs are specialised to perform a subtask. In Camel, two LLMs autonomously generate prompts to solve complex tasks, reducing reliance on user input. The authors notice that the LLMs can inadvertently swap their roles, generate repeated or non-useful instructions, or get stuck in infinite message exchanges. MetaAgent focuses on collaborative agents that accomplish coordination tasks. Through perception, memory, reasoning, and execution modules, LLMs interact with the environment, store valuable information, and learn rewarding skills. The authors show that LLMs deviate from their predefined identities, hallucinating their competence and thus compromising collaboration. We argue that these problems are a by-product of the over-reliance on natural language and free text as the reference medium for coordinating agents. Natural language is inherently ambiguous and prone to misinterpretation [10, 64], suffers from information loss [62], and incurs high costs for storing and retrieving information. In the following paragraph, we propose some research directions to mitigate these issues.

Research directions. We propose the following research directions to address the issues with textual environments.
First, we argue it is crucial to explore multi-modal environments where stimuli are turned into actionable steps without intermediate translation into natural language [49, 70, 137]. The intuition is that the fewer mediations a stimulus goes through, the less likely the signal is to contain distortions or introduce errors. Furthermore, using structured formats (taking inspiration from performative agentic languages such as KQML [36]) can remove the ambiguities inherent in natural language communication. Last but not least, integrating LLMs with formal planners or neuro-symbolic methods can provide guarantees that precise actions will be carried out correctly: LLMs excel at extracting a task’s specifics from noisy and incomplete specifications, while formal methods can provide plans that are guaranteed to achieve the goal.

4 Coordination and Communication Issues in MAS LLMs

This section addresses the lack of asynchronicity and of standardised communication methods.

Coordination and the lack of asynchronous MAS LLMs. As described in Section 1, asynchronicity is a key component of multi-agent concurrent systems: in their basic form, concurrent systems encompass multiple, diverse tasks executed simultaneously, without assuming when they start or end. Examples of well-known problems that can only be modelled with asynchronicity are those that necessitate concurrent algorithms to be solved [24, 26]: these include classic problems such as the “dining philosophers”, handshaking protocols, etc. Asynchronicity also arises in many practical scenarios, such as email and chat exchanges and concurrent database access. Many real-world scenarios
cannot be modelled without asynchronicity and otherwise require simplification (e.g., by assuming agents act sequentially through an orchestrator). Notably, while most closed-source LLM providers offer asynchronous APIs, agents employing LLMs tend to be used predominantly in a synchronous or parallel fashion [65]. In this sense, we surveyed the MAS LLMs literature published between 2022 and 2025 to understand how many works directly employ asynchronicity or enable the deployment of asynchronous agents. We identified few works (22) that explicitly model or discuss asynchronous agent interactions.6 Furthermore, in those few cases, asynchronous interactions are implemented through conversations and by employing frameworks and languages that are not natively asynchronous, which adds unnecessary complexity and reduces interoperability. More information is provided in Appendix B.

6 As a reference, this survey counts more than 1400 MAS LLMs articles: https://github.com/AGI-Edgerunners/LLM-Agents-Papers

For example, we consider the case of AutoGen, an influential MAS LLMs framework [127] that enables building LLM applications through multiple interacting agents. Although AutoGen supports asynchronous calls,7 developers must define asynchronous calls for each action and event; asynchronous programming in synchronous languages is well known to be prone to bugs that prevent the system from being fully asynchronous. As we expand later, we argue for the reverse approach: every agent and environment should be natively asynchronous to be considered a MAS,8 with sequential calls being the exception, rather than the norm.

Communication methods in MAS LLMs. Traditionally, there are three levels at which (multi-agent) communication is analysed [6].
An utterance that conveys some meaning may have no intended effect on the hearer (an illocutionary act, e.g., “the sky is blue”), some meaning intended to warn (a perlocutionary act, e.g., “a train is passing”), or some meaning that acts as a request (a performative, e.g., “please open the window”). These distinctions, alongside the so-called rules of conversation (implicature, the Gricean maxims [41], etc.), constitute the foundation of agents’ communication in MAS [11, 48, 96]. In MAS, illocutionary conversations are usually handled using descriptive and structured languages, such as JSON and RDF. Their purpose is to exchange information, not to perform or request actions. On the other hand, KQML [36], FIPA’s ACL,9 rational programming, and Agent-Oriented Programming [103] are examples of consolidated performative standards that describe agents’ beliefs, commitments, and actions. For instance, for an agent that exchanges a file with another and asks it to summarise it, the MAS literature proposes using a structured language for the exchange and a performative query for the summarisation. When it comes to LLMs, humans interact with them in free-text form; by extension, most MAS LLMs systems adopt natural language as the primary communication medium. While natural language captures the nuances of the principles mentioned above, it comes at the cost of complexity and ambiguity: an utterance such as “a car is coming” may range from being illocutionary (conveying information) to performative (conveying an implicit warning or an order). A few recent works in MAS LLMs propose to handle handshakes and errors with
natural language communications; any other communication that routines and protocols can implement should otherwise favour structured languages [76]. To conclude, overlooking the importance of communication by assuming natural language as the standard carries the concrete risk of developing MAS LLMs that are expensive (the cost of generating responses would dwarf any other in the system) and where language ambiguities cause failures that are hard to inspect, fix, and prevent.

Research directions. In terms of asynchronicity, we devise two complementary research directions worth investigating. On the one hand, frameworks that model MAS as asynchronous systems [22] should be adapted to MAS LLMs; the capacity to model asynchronous systems can provide insights into, and guarantees on, their long-term evolution. One example is that described in [1], which models MAS with Petri nets, a model of computation that is inherently asynchronous, enabling analyses of reachability, boundedness, and invariance of the system. On the other hand, as mentioned above, providers often expose asynchronous LLM APIs, for which we should develop suitable environments. Critical points in this sense are how deadlocks and starvation are handled to ensure a consistent evolution of the environment [39, 95]. To summarise our position, we argue for MAS LLM frameworks that are natively asynchronous and where sequential actions are the exception. As regards communication systems, we argue for standard, open-source frameworks to reason about and build the key components of any LLM communication (e.g., aspects such as security, identity preservation, trust, message exchange, etc.). In line with some recent initiatives,10 we believe the MAS LLMs community should work towards standard agent protocols guided, where needed, by MAS principles.
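The two positions of this section — natively asynchronous agents and structured, performative communication — can be sketched together: two agents exchange typed messages over a queue that buffers early arrivals, and an explicit performative field removes the illocutionary/performative ambiguity of free text. This is a minimal illustration under our own conventions (field names and agent roles are ours, loosely inspired by KQML; it is not an implementation of any standard or framework):

```python
import asyncio

def message(performative: str, sender: str, content: str) -> dict:
    # A structured message: the performative makes intent explicit,
    # instead of leaving it implicit in free text.
    return {"performative": performative, "sender": sender, "content": content}

async def agent_a(outbox: asyncio.Queue) -> None:
    # Emit messages without waiting for the receiver to be ready.
    await outbox.put(message("inform", "A", "report.txt is ready"))
    await outbox.put(message("request", "A", "summarise report.txt"))
    await outbox.put(None)  # sentinel: no more messages

async def agent_b(inbox: asyncio.Queue) -> list:
    handled = []
    # The queue buffers messages that arrive before we can process them.
    while (msg := await inbox.get()) is not None:
        if msg["performative"] == "request":
            handled.append(f"acting on: {msg['content']}")
        else:  # "inform": only update local state
            handled.append(f"noted: {msg['content']}")
    return handled

async def main() -> list:
    channel: asyncio.Queue = asyncio.Queue()
    _, log = await asyncio.gather(agent_a(channel), agent_b(channel))
    return log

print(asyncio.run(main()))
```

Dispatching on the performative is a routine conditional; the same content in free text would force the receiver to guess whether "summarise report.txt" is information or an order.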
In the spirit of Agent-Oriented Programming [121], researchers would have abstract templates that already implement security routines (e.g., a communication between web agents would only happen over a secure channel such as HTTPS).

7 https://github.com/microsoft/autogen?tab=readme-ov-file#web-browsing-agent-team
8 This core characteristic is discussed in detail in [123], Chapter 1.3.
9 http://www.fipa.org/specs/fipa00061/XC00061D.html
10 https://github.com/google/A2A , https://nanda.media.mit.edu/ , https://las-wg.org/

In summary, we argue for communication systems between LLMs built on top of what the computer science and MAS communities consider good practice and standards regarding security, identity preservation, trust, handshakes, etc.

5 MAS LLMs do not Quantify the Emergence of Complex Behaviours

In many disciplines, such as systems theory, economics, MAS, etc., an emergent behaviour is observed when a complex entity has properties or behaviours that its parts do not have on their own, and which emerge only when the parts interact in a wider whole [4, 37]. In the context of LLMs, emergence describes the increasing capabilities of a model at varying model sizes [12, 52]. With MAS LLMs, behaviours that cannot be predicted via a static analysis of the agents and their environment are also considered emergent [62, 85]. Emergent behaviours in MAS and MAS LLMs are often associated with open-ended environments, i.e., those MAS settings where LLMs interact and evolve freely. In an influential work [85], Park et al. use LLMs to simulate a sandboxed society, with a focus on how single instructions influence the population and how information spreads across the environment. While the
authors show that simply nudging one agent causes other agents to engage in complex behaviours, the concept of emergence itself is never addressed formally. Another recent work studies how LLMs build agent societies within a Minecraft environment [3]. While the work claims that agents can achieve significant milestones towards AI civilisations,11 the results are primarily observational, i.e., the system is left to evolve for a long time, and the researchers then report their observations on the interesting behaviours the LLMs adopt. Other works approach emergence from a similar perspective [85, 128], and a body of research studied emergence before the advent of ChatGPT [40, 59]. We thus surveyed papers published in the MAS LLMs and machine learning communities between 2023 and 2025 that mention emergent behaviours, to understand what methods and metrics they employ to identify and quantify them. Out of more than 60 papers analysed, only a few define clear metrics to measure emergent behaviours, while the majority qualitatively evaluate such behaviours and report the most notable ones. More details about the analysis and the list of papers are reported in Appendix C. In light of these observations, we question whether the behaviours described in these works truly represent emergent behaviours, or whether they are the natural outcomes of the actions of powerful general-purpose LLMs, as discussed in Section 2. In conclusion, in this body of research, we observe (i) no systematic definition of what constitutes an emergent behaviour independently of the system being analysed, and therefore (ii) a lack of proper benchmarks to quantify them. In contrast, traditional MAS research generally insists on rigorously defined environments, quantifiable objectives, and formal verification methods.
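As one illustration of what a quantifiable criterion could look like (this is our sketch, not a metric proposed in the surveyed papers): coordination between two agents can be scored as the mutual information between their aligned action logs, which is zero when the agents act independently and positive when their joint behaviour carries structure the individual marginals do not explain.

```python
from collections import Counter
from math import log2

def mutual_information(actions_a, actions_b):
    """I(A;B) in bits, estimated from two aligned action logs.

    ~0 when agents act independently; > 0 when joint behaviour
    carries structure the marginals alone cannot explain.
    """
    n = len(actions_a)
    joint = Counter(zip(actions_a, actions_b))
    pa = Counter(actions_a)
    pb = Counter(actions_b)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Perfectly coordinated agents: 1 bit of shared structure.
print(mutual_information(["L", "R", "L", "R"], ["L", "R", "L", "R"]))  # -> 1.0
# Independent-looking logs: 0 bits.
print(mutual_information(["L", "L", "R", "R"], ["L", "R", "L", "R"]))  # -> 0.0
```

A threshold on such a statistic, computed against an independent-agents baseline, is falsifiable in a way that a qualitative report of "interesting behaviour" is not.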
Emergence alone, especially when derived from loosely constrained prompts and undefined interaction dynamics, does not suffice to claim long-horizon MAS-level coordination or planning capabilities [114].

Research directions. We argue for a proper definition of emergence and emergent behaviours in open-ended MAS LLMs, one that is well established and scientifically falsifiable [4, 37]. For example, in economics, emergence is sometimes characterised in its core aspects and encompasses behaviours that produce outcomes the theory can explain [57]: in other words, emergent phenomena are economics phenomena. In this sense, since MAS LLMs share with economics an interest in agents’ behaviour, a characterisation of emergence can encompass phenomena that relate to the agents’ objective functions (e.g., in a system where agents have to maximise productivity, one finds a way to hack the reward function). Conversely, when emergent phenomena are beyond the scope of the system (e.g., in a system where agents have to maximise productivity, another objective function spontaneously emerges), that would represent a different facet of emergence. While this definition has the advantage of being measurable, it also fits the notions of weak and strong emergence in computer science [15] (a weak emergent phenomenon can be derived from the underlying system; a strong phenomenon requires new laws or assumptions), thus making the adaptation to MAS LLMs more straightforward.

11 A video showcasing their emergent behaviours: https://www.youtube.com/watch?v=9piFiQJ-mnU

6 Alternative Views

Section 2 – MAS LLMs do not need social pre-training to
be social agents & Section 4 – Central orchestration is enough to build complex MAS LLMs. The main arguments against our position in Section 2 and the first paragraph of Section 4 are that (i) agentic tools do not need to pre-train LLMs to enhance their social behaviour and capabilities to interact with other agents, and that (ii) simple orchestrators and agentic workflows suffice to coordinate complex MAS LLMs interactions. Implementations of MAS LLMs tools3 propose workflows orchestrated through predefined code paths, with LLM-powered agents maintaining control over how they accomplish tasks. Human-designed workflows orchestrate coordination, as well as the general scope of inter-agent interactions. Google also published its agentic framework,3 which puts the emphasis on how agents perform their tasks leveraging existing techniques such as ReAct, Chain of Thought, Tree of Thought, etc. [120, 133, 134], with no mention of the social aspects, and the related training, of agent interactions. Similarly, Microsoft AutoGen [127] (which we discussed extensively in Section 4) and OpenAI Agents [101]3 do not mention the possibility of pre-training agents to develop social behaviour, as they see agents as components that solve a task managed by the orchestrator. Finally, the debate around whether LLMs possess social behaviour and a Theory of Mind has prominent supporters [55], whose arguments are strong but mostly empirical [106].

Section 3 – MAS LLMs environments are not LLM-centric. The main counterarguments to our point in Section 3 are that (i) most practical scenarios do not require open environments [123] and that (ii) progress in the field will overcome LLMs’ limitations regarding lack of stable memory, hallucinations, and the costs of storing and retrieving information in (textual) natural language.
Regarding point (i), influential work from industry underscores the importance of avoiding overly complex frameworks for MAS LLMs, advocating for simple, modular design principles [14]. These principles are adopted by the Anthropic, Google, Microsoft, and OpenAI frameworks, enabling the testing of LLM capabilities in open-ended environments. However, while the tools to construct such challenging scenarios exist, the community has shown limited practical interest in doing so. Instead, current research efforts focus more on exploring what can be built with MAS LLMs than on stress-testing them in challenging settings. As regards point (ii), many recent works address the problem of equipping LLM agents with long-term memory [118], retrievable at reduced computational cost [50], as well as methods to reduce hallucinations [34, 35] while maintaining text as the primary input and communication medium.

Section 4 – Basic communication systems are enough to build complex MAS LLMs. The main counterargument to our point in Section 4 is that natural language is sufficient for large-scale communication in MAS LLMs. Some works in the literature have already identified scaling issues in MAS LLMs and proposed potential countermeasures [119, 138]. In [138], a 28–73% reduction in token usage is achieved, but it requires an expensive RL-based optimisation of the communication graph that does not transfer across tasks. Similarly, in [119], an average 22% reduction in prompt tokens and an average 18% reduction
https://arxiv.org/abs/2505.21298v1
in completion tokens is achieved, but this requires optimising a set of parameters that is not transferable across tasks.
Section 5 - The point of open-ended MAS LLMs is not benchmarking.
The main concurrent argument to our point in Section 5 is that emergent behaviours are those where systems exhibit, in the long run, complex and often surprising capabilities that were not programmed or predicted during development. This phenomenon of emergence is not unique to artificial intelligence; it is well recognised across other disciplines. In biology, for example, the flocking of birds or the organisation of ant colonies are classic emergent phenomena, i.e., complex patterns arising from simple rules followed by individuals [13]. Similarly, in economics, market trends and crashes often emerge from the interactions of countless agents acting on local information [5]. In these fields, emergent behaviours are typically treated as observational phenomena recognised through empirical study, mirroring how they are now being explored in machine learning [78].
7 Conclusion
Our work argues that the current literature on MAS LLMs overlooks key issues already explored in the MAS literature. First, LLM agents currently lack native social behaviour; we think that LLMs can be pre-trained to learn cooperation and competition through multi-agent scenarios and interactive feedback, enabling them to develop socially adaptive behaviours beyond prompt-based responses. Second, MAS LLMs environments are overly centred on LLMs themselves; we argue for prioritising the development of LLMs capable of accessing the current state of their environment through structured memory systems, without relying on their internal memory. In parallel, we support the design of external reward mechanisms that can reliably assess LLMs' performance in multi-agent settings.
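As a rough illustration of the structured memory systems advocated here, the sketch below keeps the environment state in an external, attributed store that agents read and write explicitly, instead of carrying the state in an LLM's context window or parametric memory. The schema and names are hypothetical, not a proposal from any cited framework; the attributed log also hints at how an external reward mechanism could audit agent behaviour.

```python
# Hypothetical sketch: an external, structured memory that agents query
# for the current environment state, rather than relying on the LLM's
# context window or internal memory.
from dataclasses import dataclass, field

@dataclass
class EnvironmentMemory:
    state: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def write(self, key, value, author):
        # Every update is attributed, so the history can later feed an
        # external reward or evaluation mechanism.
        self.state[key] = value
        self.log.append((author, key, value))

    def read(self, key):
        # Agents always observe the latest state, not a stale transcript.
        return self.state.get(key)

mem = EnvironmentMemory()
mem.write("door", "open", author="agent_1")
mem.write("door", "closed", author="agent_2")
print(mem.read("door"))  # -> closed
print(len(mem.log))      # -> 2 (the full history remains auditable)
```

The design choice this illustrates is the separation of concerns: the LLM reasons over a small, current view returned by `read`, while consistency and history live outside the model.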
Third, MAS LLMs lack asynchronicity and over-rely on natural language as the primary communication protocol; we propose developing MAS LLM frameworks that are natively asynchronous and grounded in standardised, open-source communication protocols, ensuring security, identity, and trust, drawing from established practices in multi-agent distributed systems and communication. Finally, while emergent behaviour is cited as a desirable property in MAS LLMs, it lacks a rigorous definition in this context. We suggest establishing a clear, falsifiable definition of emergent behaviours in open-ended MAS LLMs, aligning with how emergence is rigorously treated in MAS.
References
[1] F. Adobbati and Ł. Mikulski. Asynchronous multi-agent systems with Petri nets, 2025.
[2] G. Aher, R. I. Arriaga, and A. T. Kalai. Using large language models to simulate multiple humans and replicate human subject studies, 2023. Preprint.
[3] A. AL, A. Ahn, N. Becker, S. Carroll, N. Christie, M. Cortes, A. Demirci, M. Du, F. Li, S. Luo, P. Y. Wang, M. Willows, F. Yang, and G. R. Yang. Project sid: Many-agent simulations toward ai civilization, 2024.
[4] P. Altmann, J. Schönberger, S. Illium, M. Zorn, F. Ritz, T. Haider, S. Burton, and T. Gabor. Emergence in multi-agent systems: A safety perspective, 2024.
[5] W. B. Arthur. Complexity and the economy. Science, 284(5411):107–109, 1999.
[6] J. L. Austin. How to do things with words. William James Lectures. Oxford University Press, 1962.
[7] A. Bakhtin, N. Brown, E. Dinan, G. Farina, C. Flaherty, D. Fried, et al. Human-level play
in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624):1067–1074, 2022.
[8] D. Balduzzi, M. Garnelo, Y. Bachrach, W. M. Czarnecki, J. Perolat, M. Jaderberg, and T. Graepel. Open-ended learning in symmetric zero-sum games, 2019.
[9] C. Bandi, H. Bandi, and A. Harrasse. Adversarial multi-agent evaluation of large language models through iterative debate, 2024. Manuscript submitted to ICLR 2025.
[10] R. Battle and T. Gollapudi. The unreasonable effectiveness of eccentric automatic prompts, 2024.
[11] M. Berna-Koes, I. Nourbakhsh, and K. Sycara. Communication efficiency in multi-agent systems. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA’04 2004, volume 3, pages 2129–2134. IEEE, 2004.
[12] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners, 2020.
[13] S. Camazine, J.-L. Deneubourg, N. R. Franks, J. Sneyd, G. Theraulaz, and E. Bonabeau. Self-organization in biological systems. Princeton University Press, 2003.
[14] M. Cemri, M. Z. Pan, S. Yang, L. A. Agrawal, B. Chopra, R. Tiwari, K. Keutzer, A. Parameswaran, D. Klein, K. Ramchandran, M. Zaharia, J. E. Gonzalez, and I. Stoica. Why do multi-agent llm systems fail?, 2025.
[15] D. J. Chalmers. Strong and weak emergence. The re-emergence of emergence, 675:244–256, 2006.
[16] C.-M. Chan, W. Chen, Y. Su, J. Yu, W. Xue, S. Zhang, J. Fu, and Z. Liu. Chateval: Towards better llm-based evaluators through multi-agent debate, 2023. Preprint.
[17] L. Chen, J. Q. Davis, B. Hanin, P. Bailis, I. Stoica, M. Zaharia, and J. Zou. Are more llm calls all you need?
towards scaling laws of compound inference systems, 2024.
[18] W. Chen, Y. Su, J. Zuo, C. Yang, C. Yuan, C. Qian, C.-M. Chan, Y. Qin, Y. Lu, R. Xie, et al. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. arXiv preprint arXiv:2308.10848, 2023.
[19] Y. Chen, J. Arkin, Y. Zhang, N. Roy, and C. Fan. Scalable multi-robot collaboration with large language models: Centralized or decentralized systems? In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 4311–4317, 2024.
[20] Z. Chen, Z. Xiang, C. Xiao, D. Song, and B. Li. Agentpoison: Red-teaming llm agents via poisoning memory or knowledge bases, 2024.
[21] V. Conitzer and C. Oesterheld. Foundations of cooperative ai. In Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI’23/IAAI’23/EAAI’23. AAAI Press, 2023.
[22] D. Cornforth, D. G. Green, and D. Newth. Ordered asynchronous processes in multi-agent systems. Physica D: Nonlinear Phenomena, 204(1):70–82, 2005.
[23] M. Cossentino. From requirements to code with passi methodology. In Agent-oriented methodologies, pages 79–106. IGI Global, 2005.
[24] G.
Coulouris, J. Dollimore, T. Kindberg, and G. Blair. Distributed Systems: Concepts and Design. Addison-Wesley Publishing Company, USA, 5th edition, 2011.
[25] I. Dasgupta, C. Kaeser-Chen, K. Marino, A. Ahuja, S. Babayan, F. Hill, and R. Fergus. Collaborating with language models for embodied reasoning. arXiv preprint arXiv:2302.00763, 2023.
[26] J. Díaz and I. Ramos. Formalization of Programming Concepts: International Colloquium, Peniscola, Spain, April 19-25, 1981. Proceedings, volume 107. Springer Science & Business Media, 1981.
[27] M. d’Inverno, M. Luck, M. Georgeff, D. Kinny, and M. Wooldridge. The dmars architecture: A specification of the distributed multi-agent reasoning system. Autonomous Agents and Multi-Agent Systems, 9:5–53, 2004.
[28] Y. Dong, X. Jiang, Z. Jin, and G. Li. Self-collaboration code generation via chatgpt, 2023. Preprint.
[29] Y. Du, S. Li, A. Torralba, J. B. Tenenbaum, and I. Mordatch. Improving factuality and reasoning in language models through multiagent debate, 2023.
[30] Z. Du, C. Qian, W. Liu, Z. Xie, Y. Wang, Y. Dang, W. Chen, and C. Yang. Multi-agent software development through cross-team collaboration, 2024.
[31] E. H. Durfee. Distributed Problem Solving and Planning, pages 118–149. Springer Berlin Heidelberg, Berlin, Heidelberg, 2001.
[32] E. H. Durfee, V. R. Lesser, and D. D. Corkill. Trends in cooperative distributed problem solving. IEEE Trans. on Knowl. and Data Eng., 1(1):63–83, Mar. 1989.
[33] B. Ellis, J. Cook, S. Moalla, M. Samvelyan, M. Sun, A. Mahajan, J. Foerster, and S. Whiteson. Smacv2: An improved benchmark for cooperative multi-agent reinforcement learning. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 37567–37593. Curran Associates, Inc., 2023.
[34] W. Fan, Y. Ding, L. Ning, S. Wang, H. Li, D. Yin, T.-S. Chua, and Q. Li.
A survey on rag meeting llms: Towards retrieval-augmented large language models, 2024.
[35] S. Farquhar, J. Kossen, L. Kuhn, and Y. Gal. Detecting hallucinations in large language models using semantic entropy. Nature, 630(8017):625–630, 2024.
[36] T. Finin, R. Fritzson, D. McKay, and R. McEntire. Kqml as an agent communication language. In Proceedings of the Third International Conference on Information and Knowledge Management, CIKM ’94, page 456–463, New York, NY, USA, 1994. Association for Computing Machinery.
[37] J. Fromm. Types and forms of emergence. arXiv preprint nlin/0506028, 2005.
[38] C. Gao, X. Lan, Z. Lu, J. Mao, J. Piao, H. Wang, D. Jin, and Y. Li. S3: Social-network simulation system with large language model-empowered agents, 2023.
[39] A. A. Ginart, N. Kodali, J. Lee, C. Xiong, S. Savarese, and J. Emmons. Asynchronous tool usage for real-time agents, 2024.
[40] L. Graesser, K. Cho, and D. Kiela. Emergent linguistic phenomena in multi-agent communication games, 2020.
[41] H. P. Grice. Logic and conversation. In D. Davidson, editor, The logic of grammar, pages 64–75. Dickenson Pub. Co., 1975.
[42] T. Guo, X. Chen, Y. Wang, R. Chang, S. Pei, N. V. Chawla, O. Wiest, and X. Zhang. Large language model based multi-agents: A survey of
progress and challenges. In K. Larson, editor, Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, pages 8048–8057. International Joint Conferences on Artificial Intelligence Organization, 8 2024. Survey Track.
[43] S. Holt, M. R. Luyten, and M. van der Schaar. L2mac: Large language model automatic computer for extensive code generation, 2024.
[44] S. Hong, M. Zhuge, J. Chen, X. Zheng, Y. Cheng, C. Zhang, J. Wang, Z. Wang, S. K. S. Yau, Z. Lin, L. Zhou, C. Ran, L. Xiao, C. Wu, and J. Schmidhuber. Metagpt: Meta programming for a multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023.
[45] J. J. Horton. Large language models as simulated economic agents: What can we learn from homo silicus? Technical Report 31122, National Bureau of Economic Research, 2023.
[46] W. Hua, L. Fan, L. Li, K. Mei, J. Ji, Y. Ge, L. Hemphill, and Y. Zhang. War and Peace (WarAgent): Large language model-based multi-agent simulation of world wars, 2023. Preprint.
[47] D. Huang, Q. Bu, J. M. Zhang, M. Luck, and H. Cui. Agentcoder: Multi-agent-based code generation with iterative testing and optimisation, 2023. Preprint.
[48] M.-W. Jang, A. Ahmed, and G. Agha. Efficient agent communication in multi-agent systems. In Software Engineering for Multi-Agent Systems III: Research Issues and Practical Applications 3, pages 236–253. Springer, 2005.
[49] C. Jeong. Beyond text: Implementing multimodal large language model-powered multi-agent systems using a no-code platform. Journal of Intelligence and Information Systems, 31(1):191–231, Mar. 2025.
[50] B. Jin, J. Yoon, J. Han, and S. O. Arik. Long-context llms meet rag: Overcoming challenges for long inputs in rag. arXiv preprint arXiv:2410.05983, 2024.
[51] S. Jin, J. Xu, Y. Lei, and L. Zhang. Reasoning grasping via multimodal large language model, 2024.
[52] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei.
Scaling laws for neural language models, 2020.
[53] S. Kapoor, B. Stroebl, Z. S. Siegel, N. Nadgir, and A. Narayanan. Ai agents that matter, 2024.
[54] D.-K. Kim, S. Sohn, L. Logeswaran, D. Shim, and H. Lee. Multiprompter: Cooperative prompt optimization with multi-agent reinforcement learning, 2023.
[55] M. Kosinski. Evaluating large language models in theory of mind tasks. Proceedings of the National Academy of Sciences, 121(45), Oct. 2024.
[56] P. Kouvaros, A. Lomuscio, E. Pirovano, and H. Punchihewa. Formal verification of open multi-agent systems. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’19, page 179–187, Richland, SC, 2019. International Foundation for Autonomous Agents and Multiagent Systems.
[57] M. Kuperberg. The two faces of emergence in economics. Soundings: An Interdisciplinary Journal, 90(1/2):49–63, 2007.
[58] E. La Malfa, A. Petrov, S. Frieder, C. Weinhuber, R. Burnell, R. Nazar, A. Cohn, N. Shadbolt, and M. Wooldridge. Language-models-as-a-service: Overview of a new paradigm and its challenges. J. Artif. Int. Res., 80, Sept. 2024.
[59] A. Lazaridou and M. Baroni. Emergent multi-agent communication in the deep learning era, 2020.
[60] V. Lesser. A retrospective view of fa/c distributed problem
solving. IEEE Transactions on Systems, Man, and Cybernetics, 21(6):1347–1362, 1991.
[61] G. Li, H. A. A. K. Hammoud, H. Itani, D. Khizbullin, and B. Ghanem. Camel: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760, 2023.
[62] H. Li, Y. Chong, S. Stepputtis, J. Campbell, D. Hughes, C. Lewis, and K. Sycara. Theory of mind for multi-agent collaboration via large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2023.
[63] H. Li, Y. Q. Chong, S. Stepputtis, J. Campbell, D. Hughes, M. Lewis, and K. Sycara. Theory of mind for multi-agent collaboration via large language models. arXiv preprint arXiv:2310.10701, 2023.
[64] J. Li, E. Mynatt, V. Mishra, and J. Bell. "Always nice and confident, sometimes wrong": Developer’s experiences engaging large language models (llms) versus human-powered q&a platforms for coding support, 2025.
[65] J. Li, Q. Zhang, Y. Yu, Q. Fu, and D. Ye. More agents is all you need, 2024.
[66] X. Li, S. Wang, S. Zeng, Y. Wu, and Y. Yang. A survey on llm-based multi-agent systems: workflow, infrastructure, and challenges. Vicinagearth, 1(1):9, 2024.
[67] Y. Li, Y. Zhang, and L. Sun. Metaagents: Simulating interactions of human behaviors for llm-based task-oriented coordination via collaborative generative agents, 2023.
[68] J. Liao, M. Wen, J. Wang, and W. Zhang. Marft: Multi-agent reinforcement fine-tuning, 2025.
[69] F. Liu, Y. Liu, L. Shi, H. Huang, R. Wang, Z. Yang, L. Zhang, Z. Li, and Y. Ma. Exploring and evaluating hallucinations in llm-powered code generation, 2024.
[70] Y. Liu, W. Liu, X. Gu, Y. Rui, X. He, and Y. Zhang. Lmagent: A large-scale multimodal agents society for multi-user simulation, 2024.
[71] Z. Liu, W. Yao, J. Zhang, L. Xue, S. Heinecke, R. Murthy, Y. Feng, Z. Chen, J. C. Niebles, D. Arpit, R. Xu, P. Mui, H. Wang, C. Xiong, and S. Savarese.
Bolaa: Benchmarking and orchestrating llm-augmented autonomous agents, 2023.
[72] Y. Lu, S. Yang, C. Qian, G. Chen, Q. Luo, Y. Wu, H. Wang, X. Cong, Z. Zhang, Y. Lin, W. Liu, Y. Wang, Z. Liu, F. Liu, and M. Sun. Proactive agent: Shifting llm agents from reactive responses to active assistance, 2024.
[73] H. Ma, T. Hu, Z. Pu, B. Liu, X. Ai, Y. Liang, and M. Chen. Coevolving with the other you: Fine-tuning llm with sequential cooperative multi-agent reinforcement learning, 2025.
[74] Z. Mandi, S. Jain, and S. Song. Roco: Dialectic multi-robot collaboration with large language models. arXiv preprint arXiv:2307.04738, 2023.
[75] S. Mao, Y. Cai, Y. Xia, W. Wu, X. Wang, F. Wang, T. Ge, and F. Wei. Alympics: Language agents meet game theory. arXiv preprint arXiv:2311.03220, 2023.
[76] S. Marro, E. L. Malfa, J. Wright, G. Li, N. Shadbolt, M. Wooldridge, and P. Torr. A scalable communication protocol for networks of large language models, 2024.
[77] S. Minaee, T. Mikolov, N. Nikzad, M. Chenaghlu, R.
Socher, X. Amatriain, and J. Gao. Large language models: A survey, 2025.
[78] M. Mitchell. Complexity: A Guided Tour. Oxford University Press, 2009.
[79] M. Mosquera, J. S. Pinzon, M. Rios, Y. Fonseca, L. F. Giraldo, N. Quijano, and R. Manrique. Can llm-augmented autonomous agents cooperate? An evaluation of their cooperative capabilities through melting pot, 2024.
[80] G. Mukobi, H. Erlebach, N. Lauffer, L. Hammond, A. Chan, and J. Clifton. Welfare diplomacy: Benchmarking language model cooperation. arXiv preprint arXiv:2310.08901, 2023.
[81] F. A. Oliehoek and C. Amato. A Concise Introduction to Decentralized POMDPs. Springer Publishing Company, Incorporated, 1st edition, 2016.
[82] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback, 2022.
[83] S. Ouyang, J. M. Zhang, M. Harman, and M. Wang. An empirical study of the non-determinism of chatgpt in code generation. ACM Transactions on Software Engineering and Methodology, 34(2):1–28, Jan. 2025.
[84] X. Pan, M. Liu, F. Zhong, Y. Yang, S.-C. Zhu, and Y. Wang. Mate: Benchmarking multi-agent reinforcement learning in distributed target coverage control. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 27862–27879. Curran Associates, Inc., 2022.
[85] J. S. Park, J. C. O’Brien, C. J. Cai, M. R. Morris, P. Liang, and M. S. Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023.
[86] S. Park and V. Sugumaran. Designing multi-agent systems: a framework and application. Expert Systems with Applications, 28(2):259–271, 2005.
[87] D. Premack and G. Woodruff. Does the chimpanzee have a theory of mind?
Behavioral and Brain Sciences, 1(4):515–526, 1978.
[88] L. Qin, Q. Chen, X. Feng, Y. Wu, Y. Zhang, Y. Li, M. Li, W. Che, and P. S. Yu. Large language models meet nlp: A survey, 2024.
[89] N. C. Rabinowitz, F. Perbet, H. F. Song, C. Zhang, S. M. A. Eslami, and M. Botvinick. Machine theory of mind, 2018.
[90] A. S. Rao and M. P. Georgeff. Bdi agents: From theory to practice. In Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), pages 312–319, 1995.
[91] A. S. Rao, M. P. Georgeff, et al. Bdi agents: from theory to practice. In Icmas, volume 95, pages 312–319, 1995.
[92] Z. Rasheed, M. Waseem, A. Ahmad, K.-K. Kemell, W. Xiaofeng, A. N. Duc, and P. Abrahamsson. Can large language models serve as data analysts? A multi-agent assisted approach for qualitative data analysis, 2024.
[93] M. Rocha, H. H. da Silva, A. S. Morales, S. Sarkadi, and A. R. Panisson. Applying theory of mind to multi-agent systems: A systematic review. In M. C. Naldi and R. A. C. Bianchi, editors, Intelligent Systems, pages 367–381, Cham, 2023. Springer Nature Switzerland.
[94] S.
Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 3rd edition, 2010.
[95] H. Sami, M. ul Islam, S. Charas, A. Gandhi, P.-E. Gaillardon, and V. Tenace. Nexus: A lightweight and scalable multi-agent framework for complex tasks automation, 2025.
[96] G. M. Saunders and J. B. Pollack. The evolution of communication schemes over continuous channels. From Animals to Animats, 4:580–589, 1996.
[97] M. Sclar, S. Kumar, P. West, A. Suhr, Y. Choi, and Y. Tsvetkov. Minding language models’ (lack of) theory of mind: A plug-and-play multi-character belief tracker, 2023.
[98] M. Shanahan. Talking about large language models, 2023.
[99] S. Shankar and A. G. Parameswaran. Building reactive large language model pipelines with motion. In Companion of the 2024 International Conference on Management of Data, SIGMOD ’24, page 520–523, New York, NY, USA, 2024. Association for Computing Machinery.
[100] N. Shapira, M. Levy, S. H. Alavi, X. Zhou, Y. Choi, Y. Goldberg, M. Sap, and V. Shwartz. Clever hans or neural theory of mind? Stress testing social reasoning in large language models, 2023.
[101] Y. Shavit, S. Agarwal, M. Brundage, S. Adler, C. O’Keefe, R. Campbell, T. Lee, P. Mishkin, T. Eloundou, A. Hickey, et al. Practices for governing agentic ai systems. Research Paper, OpenAI, 2023.
[102] H. Shi, S. Ye, X. Fang, C. Jin, L. Isik, Y.-L. Kuo, and T. Shu. Muma-tom: Multi-modal multi-agent theory of mind, 2025.
[103] Y. Shoham. Agent-oriented programming. Artificial Intelligence, 60(1):51–92, 1993.
[104] Y. Shoham and K. Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, USA, 2008.
[105] R. Sinha, A. Elhafsi, C. Agia, M. Foutter, E. Schmerling, and M. Pavone. Real-time anomaly detection and reactive planning with large language models, 2024.
[106] J. W. Strachan, D. Albergo, G. Borghini, O. Pansardi, E. Scaliti, S. Gupta, K. Saxena, A. Rufo, S. Panzeri, G. Manzi, et al.
Testing theory of mind in large language models and humans. Nature Human Behaviour, 8(7):1285–1295, 2024.
[107] H. Su, R. Chen, S. Tang, Z. Yin, X. Zheng, J. Li, B. Qi, Q. Wu, H. Li, W. Ouyang, P. Torr, B. Zhou, and N. Dong. Many heads are better than one: Improved scientific idea generation by an llm-based multi-agent system, 2025.
[108] V. Subramaniam, Y. Du, J. B. Tenenbaum, A. Torralba, S. Li, and I. Mordatch. Multiagent finetuning: Self improvement with diverse reasoning chains, 2025.
[109] L. Sun, D. K. Jha, C. Hori, S. Jain, R. Corcodel, X. Zhu, M. Tomizuka, and D. Romeres. Interactive planning using large language models for partially observable robotic tasks. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 14054–14061, 2024.
[110] Y. Talebirad and A. Nadiri. Multi-agent collaboration: Harnessing the power of intelligent llm agents. arXiv preprint arXiv:2306.03314, 2023.
[111] X. Tang, A. Zou, Z. Zhang, Z. Li, Y. Zhao, X. Zhang, A. Cohan, and M. Gerstein. Medagents: Large language models as collaborators for zero-shot medical reasoning, 2024.
[112] O. E. L. Team, A.
Stooke, A. Mahajan, C. Barros, C. Deck, J. Bauer, J. Sygnowski, M. Trebacz, M. Jaderberg, M. Mathieu, N. McAleese, N. Bradley-Schmieg, N. Wong, N. Porcel, R. Raileanu, S. Hughes-Fitt, V. Dalibard, and W. M. Czarnecki. Open-ended learning leads to generally capable agents, 2021.
[113] K.-T. Tran, D. Dao, M.-D. Nguyen, Q.-V. Pham, B. O’Sullivan, and H. D. Nguyen. Multi-agent collaboration mechanisms: A survey of llms, 2025.
[114] A. M. Turner, A. Saxena, and P. Tadepalli. Formalizing the problem of side effect regularization, 2022.
[115] T. Ullman. Large language models fail on trivial alterations to theory-of-mind tasks, 2023.
[116] M. J. van Duijn, B. Van Dijk, T. Kouwenhoven, W. De Valk, M. R. Spruit, and P. van der Putten. Theory of mind in large language models: Examining performance of 11 state-of-the-art models vs. children aged 7-10 on advanced tests. arXiv preprint arXiv:2310.20320, 2023.
[117] R. Wang, J. Lehman, J. Clune, and K. O. Stanley. Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions, 2019.
[118] W. Wang, L. Dong, H. Cheng, X. Liu, X. Yan, J. Gao, and F. Wei. Augmenting language models with long-term memory, 2023.
[119] Z. Wang, Y. Wang, X. Liu, L. Ding, M. Zhang, J. Liu, and M. Zhang. Agentdropout: Dynamic agent elimination for token-efficient and high-performance llm-based multi-agent collaboration. arXiv preprint arXiv:2503.18891, 2025.
[120] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.
[121] D. Weyns. Architecture-based design of multi-agent systems. Springer Science & Business Media, 2010.
[122] D. Weyns, H. V. D. Parunak, F. Michel, T. Holvoet, and J. Ferber. Environments for multiagent systems state-of-the-art and research challenges. In E4MAS, pages 1–47, 2004.
[123] M. Wooldridge. An Introduction to MultiAgent Systems.
Wiley Publishing, 2nd edition, 2009.
[124] M. Wooldridge and N. Jennings. Intelligent agents: Theory and practice. Knowledge Engineering Review, 10(2):115–152, 1995.
[125] M. Wooldridge and N. R. Jennings. Agent theories, architectures, and languages: A survey. In M. J. Wooldridge and N. R. Jennings, editors, Intelligent Agents, pages 1–39, Berlin, Heidelberg, 1995. Springer Berlin Heidelberg.
[126] M. Wooldridge, N. R. Jennings, and D. Kinny. The gaia methodology for agent-oriented analysis and design. Autonomous Agents and Multi-Agent Systems, 3:285–312, 2000.
[127] Q. Wu, G. Bansal, J. Zhang, Y. Wu, B. Li, E. Zhu, L. Jiang, X. Zhang, S. Zhang, J. Liu, A. H. Awadallah, R. W. White, D. Burger, and C. Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation. arXiv preprint arXiv:2308.08155, 2023.
[128] C. Xie, C. Chen, F. Jia, Z. Ye, S. Lai, K. Shu, J. Gu, A. Bibi, Z. Hu, D. Jurgens, J. Evans, P. Torr, B. Ghanem, and G. Li. Can large language model agents simulate human trust behavior?, 2024.
[129] Y. Xu, S. Wang, P. Li, F. Luo, X. Wang, W. Liu, and Y. Liu. Exploring large language models for communication games: An empirical study on werewolf.
arXiv preprint arXiv:2309.04658, 2023.
[130] B. Yang, H. Tian, J. Ren, H. Zhang, J. Klein, T. F. Bissyandé, C. L. Goues, and S. Jin. Multi-objective fine-tuning for enhanced program repair with llms, 2024.
[131] H. Yang, S. Yue, and Y. He. Auto-gpt for online decision making: Benchmarks and additional opinions, 2023.
[132] Z. Yang, L. Ning, H. Wang, T. Jiang, S. Zhang, S. Cui, H. Jiang, C. Li, S. Wang, and Z. Wang. Text2reaction: Enabling reactive task planning using large language models. IEEE Robotics and Automation Letters, 9(5):4003–4010, 2024.
[133] S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023.
[134] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models, 2023.
[135] M. Yuksekgonul, F. Bianchi, J. Boen, S. Liu, Z. Huang, C. Guestrin, and J. Zou. Textgrad: Automatic "differentiation" via text, 2024.
[136] S. Zeighami, Y. Lin, S. Shankar, and A. Parameswaran. Llm-powered proactive data systems, 2025.
[137] D. Zhang, Z. Li, P. Wang, X. Zhang, Y. Zhou, and X. Qiu. Speechagents: Human-communication simulation with multi-modal multi-agent systems, 2024.
[138] G. Zhang, Y. Yue, Z. Li, S. Yun, G. Wan, K. Wang, D. Cheng, J. X. Yu, and T. Chen. Cut the crap: An economical communication pipeline for llm-based multi-agent systems. arXiv preprint arXiv:2410.02506, 2024.
[139] X. Zhang, Y. Deng, Z. Ren, S.-K. Ng, and T.-S. Chua. Ask-before-plan: Proactive language agents for real-world planning, 2024.
[140] Z. Zhang, X. Bo, C. Ma, R. Li, X. Chen, Q. Dai, J. Zhu, Z. Dong, and J.-R. Wen. A survey on the memory mechanism of large language model based agents, 2024.
[141] Z. Zheng, O. Zhang, H. L. Nguyen, N. Rampal, A. H. Alawadhi, Z. Rong, T. Head-Gordon, C. Borgs, J. T. Chayes, and O. M. Yaghi. Chatgpt research group for optimizing the crystallinity of mofs and cofs.
ACS Central Science, 9(11):2161–2170, 2023.
[142] W. Zhou, Y. E. Jiang, L. Li, J. Wu, T. Wang, S. Qiu, J. Zhang, J. Chen, R. Wu, S. Wang, et al. Agents: An open-source framework for autonomous language agents. arXiv preprint arXiv:2309.07870, 2023.
[143] X. Zhou, P. Liang, B. Zhang, Z. Li, A. Ahmad, M. Shahin, and M. Waseem. Exploring the problems, their causes and solutions of ai pair programming: A study on github and stack overflow. J. Syst. Softw., 219(C), Jan. 2025.
A Categories of Environments
The 112 papers used to produce the results in Figure 1 are listed in this repository, https://github.com/AGI-Edgerunners/LLM-Agents-Papers?tab=readme-ov-file#Infrastructure, last access May 9th, 2025. They account for the most influential and popular MAS LLMs “Benchmark & Evaluation” papers published between 2023 and April 2025. We focus on the category “Benchmark & Evaluation” as it encompasses new benchmarks built to test the capabilities of LLMs in Multi-Agent settings, as well as extensive evaluations.
B Papers that Leverage or Study Asynchronicity in MAS LLMs
arXiv:2505.21301v1 [cs.CL] 27 May 2025
How Humans and LLMs Organize Conceptual Knowledge: Exploring Subordinate Categories in Italian
Andrea Pedrottiα, Giulia Rambelliβ, Caterina Villaniβ, Marianna Bolognesiβ
αIstituto di Scienza e Tecnologie dell’Informazione “A. Faedo” (ISTI-CNR) andrea.pedrotti@isti.cnr.it
βUniversità di Bologna {giulia.rambelli4,caterina.villani6,m.bolognesi}@unibo.it
Abstract
People can categorize the same entity at multiple taxonomic levels, such as basic (bear), superordinate (animal), and subordinate (grizzly bear). While prior research has focused on basic-level categories, this study is the first attempt to examine the organization of categories by analyzing exemplars produced at the subordinate level. We present a new Italian psycholinguistic dataset of human-generated exemplars for 187 concrete words. We then use these data to evaluate whether textual and vision LLMs produce meaningful exemplars that align with human category organization across three key tasks: exemplar generation, category induction, and typicality judgment. Our findings show a low alignment between humans and LLMs, consistent with previous studies. However, their performance varies notably across different semantic domains. Ultimately, this study highlights both the promises and the constraints of using AI-generated exemplars to support psychological and linguistic research.1
1 Introduction
Concepts are the “building blocks” of human cognition, allowing us to interpret and categorize reality (Murphy, 2002). The same category can be represented at different levels of inclusiveness (categorical specificity; Bolognesi et al., 2020).
For instance, a two-wheeled object may simultaneously be categorized as an electric bike, a bike, or a vehicle, reflecting a hierarchical taxonomy that ranges from a very specific and not inclusive category that only includes members with many common features (mountain bikes, electric bikes) to a more general and inclusive category that includes a wide variety of items that do not necessarily share many common features (bikes, cars, buses).
Most studies on the hierarchical organization of categories in the human mind have focused on basic-level categories, showing their advantages in processing and acquisition (Rosch et al., 1976; Hajibayova, 2013 for a review), paying little attention to the more specific subordinate categories. Yet, words at the subordinate level are crucial for effective communication in specialized domains, as their lexicon conveys richer and more precisely defined semantic content, often derived through linguistic combinations.
1 Data and code are available on GitHub and OSF.
Figure 1: Visual representation of studies’ design. English exemplars are used for illustration only.
Current cognitive theories acknowledge that both sensorimotor and linguistic experiences contribute to our conceptual representation (Barsalou et al., 2008; Louwerse, 2018; Davis and Yee, 2021). For instance, one may observe that apples can be red, yellow, or green, but learn in a book that the word Fuji refers to a specific variety of apples. Although concepts can be represented independently from words, linguistic labels often act as cues (Lupyan, 2012; Lupyan and Lewis, 2019) that help to create and organize our knowledge, grouping items based on perceived similarities, even if we have never encountered a particular instance before. The extent to which the organization of human conceptual
categories is influenced by the distributional properties of linguistic input remains a central question in cognitive science, linguistics, and artificial intelligence (van Hoef et al., 2023).

This paper investigates the organization and the contents of conceptual categories produced at a subordinate level by humans and Large Language Models (LLMs). The remarkable success of LLMs raises questions about their plausibility as models of human cognition, as their performance closely resembles human-like language understanding and generation across several tasks (Wang et al., 2018; Brown et al., 2020; Floridi and Chiriatti, 2020; Bommasani et al., 2022; Wei et al., 2022). However, while their functional linguistic competence (reflected in their general knowledge and reasoning skills through language) is undeniable, their parallelism with the human mind remains highly debated (i.a., Bender and Koller, 2020; Marcus, 2020; Mahowald et al., 2024). In contrast to LLMs, human conceptual categories emerge from the integration of linguistic and extra-linguistic (sensory) information. Investigating the structural organization of categories in LLMs may provide insight into the extent to which category formation depends exclusively on linguistic experience, thus contributing to the larger debate on the role of language in learning semantic knowledge (Lupyan and Lewis, 2019). While previous works have explored the organization of superordinate categories in both humans and LLMs, we are the first to investigate the organization of basic-level categories. Specifically, we present two studies to address the following research questions:

• RQ1: How do humans create and organize basic-level categories, considering the exemplars produced at a subordinate level? We introduce a new Italian psycholinguistic dataset, collecting exemplars of 187 basic concrete categories generated by human participants (§3).
We explore the variability of exemplars as a function of category types, assuming that this variability reflects the richness of the linguistic vocabulary and linguistic knowledge in semantic domains.

• RQ2: Do LLMs have the same category structure as humans? We probe recent LLMs to generate exemplars for the same 187 basic-level categories and compare their predictions with humans (§4), as illustrated in Figure 1. We assess whether LLMs capture human conceptual organization using two classification subtasks: category induction (§5.1) and typicality prediction (§5.2). Finally, we compare vision LMs (vLMs) to investigate whether pre-training on extra-linguistic knowledge enhances overall performance.

2 Background and Related Works

2.1 Categories in the Human Mind

Classical cognitive research showed that categories are organized hierarchically in the human mind: a bulldog is a type of dog, which is a type of mammal, and more broadly an animal, with each category including the previous one. In other words, categories vary in level of specificity, i.e., how inclusive the category of reference is (Cohen and Lefebvre, 2005; Bolognesi et al., 2020). Superordinate categories (e.g., furniture, vehicle) encompass broader classes, while subordinate categories (e.g., wooden upholstered chairs, red sports cars) represent more specific instances. The basic level (e.g., chair, car), often considered the most informative level, lies between these two extremes, and words that denote basic-level categories are typically easier to understand
and process (Murphy, 2002).

A common approach for investigating the structure of categorical knowledge involves analyzing typicality effects, by asking for typicality ratings on a Likert scale (i.e., "How typical is a cat for the category mammal?") or by instructing participants to freely name members of a given category. The latter, called "semantic fluency" or "category instance generation" (Castro et al., 2021), requires participants to actively retrieve exemplars of a category, which is a more cognitively demanding task than simply judging an exemplar's typicality within a category. However, typicality ratings can be extracted from category instance generation tasks by aggregating the frequency of exemplar productions. Conversely, words judged as typical for their category are usually more available than words judged to be relatively atypical (Natividad Hernández-Muñoz and Ellis, 2006). In seminal studies, Rosch (1978) observed that some exemplars (e.g., robin, crow) are perceived as more representative of a category (e.g., birds) than others (e.g., penguin, ostrich). This graded structure, as explained by prototype theory (Rosch, 1975), reflects the fact that frequently shared properties among category members tend to be integrated into a central prototype.

While cognitive research has extensively focused on superordinate and basic-level categories, subordinate categories have received less attention. Concepts at the subordinate level have some notable peculiarities. First, their referents share more attributes than those within basic-level categories (Rosch et al., 1976). Additionally, subordinate concepts encode greater perceptual detail, making it more challenging to process individual exemplars. As a result, people tend to name objects at the basic level unless subordinate-level information is particularly relevant.
Finally, language plays a crucial role in forming subordinate categories, which are often created through linguistic compositionality (electric car, sports car). To the best of our knowledge, no studies in English or any other language have investigated the organization and contents of basic-level categories (e.g., dog, hammer) by asking participants to generate concepts at the subordinate level.

2.2 Categories in LLMs

Previous works on predicting category structure in LLMs have primarily focused on the typicality of a category member, yielding mixed results. Heyman and Heyman (2019) predicted human typicality ratings by correlating similarity scores between category exemplars (e.g., robin, crow) and prototype vectors (bird), finding that static embeddings poorly accounted for human judgments. Renner et al. (2023) improved predictions using BERT and WordNet metrics, showing that their combination aligns best with human judgments. Recently, Heyman and Heyman (2024) found that ChatGPT produces typicality ratings comparable to human participants (.60–.64). Conversely, Misra et al. (2021) tested LLMs on taxonomic categorization ("football is a sport"), showing modest correlations with human ratings (between 0.24 and 0.41) and weaker distinctions between typical and atypical items (as observed in other experimental settings, i.e., Kauf et al., 2023). Moreover, Misra et al. (2023) highlighted that LLMs struggle with fine-grained property attributions, questioning their plausibility as models of human semantic memory.

Beyond typicality, Nighojkar et al. (2022) used Transformer models (RoBERTa-Large, DistilBERT, and miniBERTa-med-small) to
model the semantic fluency task (§2.1). They designed different approaches to predict the next item in a given list ("Examples of fruits are the strawberry and the [MASK]") for five superordinate categories (Fruits, Vegetables, Animal, Supermarket items, Tool, Foods). Among the models, RoBERTa-Large proved to be the best-performing approach, although it still achieved low performance (16% overall accuracy).

Concurrently, researchers have investigated whether vision models align with human conceptual understanding (Peterson et al., 2018; Battleday et al., 2020; Günther et al., 2023; Upadhyay et al., 2022). Regardless of the specific experimental design, these studies correlated vision-based similarity scores between pairs of exemplar and category images and evaluated these similarities against human typicality judgments. Recently, Vemuri et al. (2024) evaluated both language and vision models, comparing their correlation with human typicality ratings, and found that textual models are better than vision models for 27 categories, surpassing prior results from Castro et al. (2021).

Recent works have also tested the abstract reasoning abilities of LLMs. For example, Samadarshi et al. (2024) assessed LLM performance on the New York Times Connections game, finding better performance in Semantic Relations and Encyclopedic Knowledge, which might be due to existing information in pre-training data. However, LLM accuracy remains below 50%.

All the aforementioned works focused exclusively on English, and primarily explored the internal organization of superordinate categories (e.g., fruit, tools). To our knowledge, no research has yet explored this for Italian or investigated the internal organization of basic-level categories (e.g., dog, hammer).

3 STUDY 1: A New Psycholinguistic Dataset of Basic-Level Exemplars

Methods.
Stimuli consist of 187 basic-level concrete categories previously produced by Italian native speakers as the most representative concepts for 12 superordinate semantic categories[2] (Montefinese et al., 2012). We administer an exemplar generation task to 365 Italian L1 speakers on Prolific. Each participant is presented with a list of 15–16 categories and asked to produce as many exemplars as possible for each concept (e.g., List a type of ) at their own pace. The final dataset, after post-processing typos and misspellings, consists of 24,659 exemplars.

[2] ANIMALS, BODY PARTS, CLOTHES, FOODS, FURNISHING, FURNITURE, HOBBIES, HOUSING, KITCHEN, PLANTS, STATIONERY, VEHICLES.

We compute the same measures as Montefinese et al. (2012) to describe the relationship between a given concept and its exemplars, such as the proportion of participants who produce a target exemplar given a category (dominance), the mean output position of each exemplar for a category (mean rank order), and the proportion of participants who produce a given exemplar as their first response (first occurrence value). We primarily focus on exemplar availability, which represents how readily an exemplar is produced as a member of a category. This measure is determined by the exemplar's position in a participant's response list, its overall production frequency within the category, the earliest position it appears across participants, and the total number of participants who mention it.

Results and
Discussion. In line with Montefinese et al. (2012), we find that dominance, availability, and first occurrence are all strongly and positively correlated (rs = 0.95, 0.75, 0.89 for dominance vs. availability, dominance vs. first occurrence, and availability vs. first occurrence), whereas mean rank order of production correlates weakly and negatively with the other three measures (rs = -0.09, -0.21, -0.15 for dominance, first occurrence, and availability, respectively). To identify the most representative exemplars for each concept, we retain only those exemplars with a dominance value higher than or equal to 0.1 (i.e., exemplars produced by at least 10% of participants). This cut-off criterion results in a total of 1,696 exemplars in the final dataset.

Figure 2 shows the numbers of dominant exemplars for the 12 superordinate categories. The highest number of exemplars is produced for the FOOD category (270 exemplars), followed by CLOTHES (206 exemplars), whereas the category of PLANTS has the smallest number of exemplars (77). Indeed, the number of dominant exemplars varies considerably within each basic-level category, spanning from a minimum of 1 exemplar (e.g., sunflower, rubber plant) to a maximum of 31 exemplars (e.g., pasta, dog). The fact that the extent of our subordinate lexicon varies in human cognition suggests that some subordinate categories might pose challenges in terms of accessibility to semantic memory. This could be due to their low frequency or familiarity, or to a higher degree of individual variability in knowledge within a specific domain compared to others.

Figure 2: Number of valid exemplars across 12 superordinate categories for humans and textual LLMs.

Subsequently, for each basic-level category, we compare the top-1 and top-5 exemplars ordered by availability with those ordered by dominance, examining whether both the exemplars and their order align.
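This top-n comparison can be sketched as follows. The scores below are hypothetical stand-ins for the dataset's dominance and availability measures; only the cut-off (0.1) and the top-1/top-5 logic follow the text.

```python
# Compare the exemplar ranking by dominance with the ranking by availability
# for one category, mirroring the top-1 / top-5 comparison of Study 1.
# Scores are hypothetical stand-ins for the dataset's measures.

def top_n(scores, n):
    """Return the n highest-scoring exemplars, best first."""
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:n]]

dominance = {"oat": 0.62, "spelt": 0.55, "wheat": 0.50,
             "corn": 0.41, "barley": 0.33, "rye": 0.08}
availability = {"wheat": 0.90, "oat": 0.74, "spelt": 0.70,
                "corn": 0.52, "barley": 0.45, "rye": 0.05}

# Apply the dominance cut-off: keep exemplars produced by >= 10% of participants.
dominant = {w: s for w, s in dominance.items() if s >= 0.1}

top1_match = top_n(dominant, 1) == top_n(availability, 1)        # same top exemplar?
top5_order_match = top_n(dominant, 5) == top_n(availability, 5)  # same top-5, in the same order?
```

With these toy scores the two orderings share the same five exemplars but disagree on the ranking, as in the cereal example discussed below.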
Overall, for 77.0% of the categories, the top-1 dominant exemplar is also the top-1 available one. This indicates that more frequently produced exemplars tend to be more readily available in participants' responses, reflecting their prominence in the conceptual category. However, only 13.9% of the top-5 dominant exemplars overlap with the ranking of the top-5 most available exemplars. For example, the top-5 dominant exemplars for the basic category cereal are oat, spelt, wheat, corn, barley, while the top-5 available ones were wheat, oat, spelt, corn, barley. This outcome points to potential variability in the frequency of production and availability of exemplars within different conceptual categories. In conclusion, we observe that this task is more challenging than retrieving exemplars of superordinate-level categories and that some categories are more accessible than others.

4 STUDY 2: LLMs' Exemplar Generation

We probe several LLMs on the task described in §3 to compare their organization of subordinate-level conceptual representations with human subjects. We assess models' performance considering: (i) the number of hallucinations generated (i.e., non-existent exemplars created by combining words into ad hoc instances); (ii) the overlap with human subjects regarding the most available (typical) exemplars; and (iii) whether discrepancies between human- and LLM-generated exemplars follow a consistent pattern.
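Criterion (i) amounts to a corpus-frequency filter: a generated string counts as a valid exemplar only if it is attested in a reference corpus. A minimal sketch, with corpus_freq as a toy stand-in for a real frequency lookup (the study queries the Italian ItTenTen corpus):

```python
# Share of generated strings attested in a reference corpus; zero-frequency
# strings are treated as (possible) hallucinations. corpus_freq is a toy
# stand-in for a real corpus-frequency lookup.

corpus_freq = {"abete rosso": 311, "abete bianco": 187, "abete di douglas": 42}

def valid_exemplars(generated, freq):
    """Deduplicate (keeping first occurrences) and keep attested strings."""
    deduped = list(dict.fromkeys(generated))
    valid = [g for g in deduped if freq.get(g.lower(), 0) > 0]
    return valid, len(valid) / len(deduped)

generated = ["abete rosso", "abete rosso", "abete bianco",
             "abete rosso di California"]
valid, rate = valid_exemplars(generated, corpus_freq)
```

Here the unattested coinage "abete rosso di California" is flagged, giving a valid-exemplar rate of 2/3 for this toy list.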
We analyse our data from two complementary perspectives. On the one hand, we assess the models' accuracy based on their similarity to human-generated exemplars (our gold standard). On the other, we perform some qualitative analyses to explore whether and how the categorical knowledge encoded by language models differs from that of humans.

Table 1: Percentage of valid exemplars generated by various LLMs.

Model          ANIM BODY CLOT FOOD FRNS FURN HOBB HOUS KITC PLNT STAT VEHI avg
llama-3.2-3B   0.63 0.74 0.76 0.84 0.61 0.69 0.67 0.72 0.59 0.50 0.63 0.63 0.67
llama-3.1-8B   0.48 0.64 0.64 0.86 0.61 0.69 0.74 0.72 0.49 0.41 0.49 0.63 0.62
llama-3.1-70B  0.81 0.77 0.89 0.98 0.82 0.83 0.85 0.93 0.80 0.68 0.61 0.83 0.82
mistral-7B     0.52 0.61 0.43 0.79 0.50 0.41 0.69 0.61 0.46 0.42 0.39 0.57 0.53
nemo-12B       0.71 0.72 0.69 0.90 0.71 0.65 0.79 0.86 0.56 0.47 0.58 0.70 0.69
mixtral-8x7B   0.73 0.76 0.77 0.95 0.74 0.76 0.81 0.86 0.67 0.54 0.53 0.79 0.74
llava-7B       0.52 0.60 0.54 0.67 0.57 0.53 0.70 0.61 0.48 0.48 0.57 0.61 0.57
idefics2-8B    0.64 0.76 0.62 0.80 0.75 0.67 0.82 0.71 0.53 0.67 0.65 0.65 0.69
category avg   0.63 0.70 0.67 0.85 0.66 0.66 0.76 0.75 0.57 0.52 0.56 0.68 0.67

(Columns: ANIMALS, BODY PARTS, CLOTHES, FOODS, FURNISHING, FURNITURE, HOBBIES, HOUSING, KITCHEN, PLANTS, STATIONERY, VEHICLES.)

Setup. Building upon the methodology described in §3, we task the models with generating exemplars for the same 187 basic-level concepts presented to human subjects. We use two LLM families: (i) the LLaMA family, including LLaMA-v3.1 in its 8B and 70B versions, and LLaMA-v3.2-3B (LlamaTeam, 2024), and (ii) the Mistral family, comprising Mistral-7B (Jiang et al., 2023), Mixtral-8x7B (Jiang et al., 2024), and NeMo[3]. Furthermore, to investigate the impact of perceptual extra-linguistic stimulus, we also use the vLMs LLaVA (Liu et al., 2023) and Idefics2 (Laurençon et al., 2024) (cf. Appendix B.1 for an in-depth description). We model the generation process as a few-shot setting (Brown et al., 2020) completion task.
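The few-shot completion setup can be sketched as follows. The instruction wording and the demonstration pair are illustrative, not the authors' exact template; only the structure (instruction, one question-answer example, newline-separated output) follows the text.

```python
# Sketch of a few-shot exemplar-generation prompt: a short instruction, one
# question-answer demonstration, then the target concept. Wording is
# illustrative, not the study's exact template.

FEW_SHOT = (
    "Elenca quanti più tipi possibile del concetto dato, uno per riga.\n\n"
    "Concetto: frutto\nTipi:\nmela\npera\nbanana\n\n"
)

def build_prompt(concept):
    return FEW_SHOT + "Concetto: " + concept + "\nTipi:\n"

def parse_completion(text):
    """Split a raw newline-separated completion into exemplar strings."""
    return [line.strip() for line in text.split("\n") if line.strip()]

prompt = build_prompt("abete")
exemplars = parse_completion("abete rosso\nabete bianco\n\n")
```

The model's raw completion is then parsed one exemplar per line, matching the post-processing described next.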
The model receives a simplified version of the instructions from §3 to obtain comparable results. The instruction is followed by a question-answer example before generating exemplars for a new concept. We follow the few-shot prompting scenario, as this approach should positively affect the model's performance. We experiment with parameters to obtain an outcome balanced between predictability and creativity. For each model, we perform five runs for each basic-level category (cf. Appendix B.5).

4.1 Analysis 1: LLMs Tend to Generate ad hoc Expressions instead of Exemplars

The generated responses consist of a list of exemplars separated by newlines (i.e., '\n'). To ensure data quality, we first clean the outputs by removing duplicate exemplars, keeping only their first occurrences. We then validate the outputs by checking whether each exemplar appears at least once in the Italian corpus ItTenTen (Jakubíček et al., 2013; Suchomel et al., 2012)[4], thereby distinguishing valid exemplars from (possible) hallucinations. This data-cleaning step allows for an overall evaluation of the quality of the generated exemplars in terms of the percentage of valid (i.e., existing expression) exemplars.

[3] https://mistral.ai/news/mistral-nemo/

Table 1 shows that the performance
differs widely across models, with larger and more recent LLMs generating a higher proportion of valid exemplars in comparison to smaller models or vLMs. For instance, LLaMA-v3.1-70B generates 82% valid exemplars, while Mistral-7B generates only 52% valid exemplars. The lowest performance is observed for LLaVA-7B (44%).

Notably, the number of valid exemplars varies depending on the superordinate category. Categories such as FOOD (85%), HOBBIES (76%), and HOUSING (75%) yield a higher proportion of valid exemplars across models. In contrast, categories like KITCHEN and PLANTS exhibit more noise, with only 57% and 52% of valid exemplars, respectively. This indicates that models acquire a non-uniform knowledge of subordinate-level exemplars, with a broader and more precise coverage of certain basic-level concepts, while showing a more brittle grasp of others. These results partially align with human behaviour: the categories whose exemplars are easiest (FOOD) and most difficult (PLANTS) to recall are the same for both humans and LLMs.

Considering unattested expressions, LLMs often rely on their compositional abilities to generate surface-acceptable expressions. However, this 'creative' process produces invalid multi-word expressions (i.e., hallucinations) that lack validation among human speakers (i.e., their corpus frequency is zero) and/or real-world referents. We conduct a qualitative analysis of zero-frequency items to identify recurring generative tendencies in LLaMA-3.1-70B (the best-performing model in terms of valid exemplars generated). Among others, we observe that the model tends to replicate the surface-level syntactic or morphological structure of a valid, attested exemplar, leading to the overgeneralization of that structure to produce novel combinations.

[4] We use the SketchEngine API to collect frequencies.
For instance, the expressions abete rosso ('red fir') and abete di Douglas ('Douglas fir') serve as templates for generating further expressions like abete bianco di Scozia ('white Scotch fir') or abete rosso di California ('red California fir'), none of which refer to real-world referents. Similarly, the models extract from candelabro a 5 braccia ('5-armed candelabrum') the syntactic pattern "a N bracci/a" to build multiple variants, such as a 13 bracci. Therefore, models tend to identify productive syntactic patterns and extend them compositionally, rather than drawing on actual distributional evidence or domain knowledge. In essence, imitation-based errors are structural extrapolations that mirror known exemplars too closely, prioritizing form over grounded meaning.

Additionally, some generated expressions are grammatically well-formed but semantically incoherent, implausible, or internally contradictory. For example, geranio a foglie di quercia ('geranium with oak leaves') or a foglie di rosmarino ('with rosemary leaves') attribute biologically implausible features. Similarly, maglia a punto croce ('knitwear in cross-stitch') is semantically incoherent, because punto croce is a specific embroidery technique used to decorate fabrics, not for constructing knitwear. In these cases, LLMs apply compositional plausibility without conceptual coherence: models generate a surface-acceptable phrase that violates domain-specific knowledge or real-world constraints, thereby rendering the expression nonsensical. Finally, some generated outputs are not attested exemplars but rather novel, ad hoc instances (Barsalou, 1983). For example, the model generates instances of cassettiera ('dresser') based on spatial
context (e.g., c. da corridoio 'hallway dresser', c. da esterno 'outdoor dresser') or intended contents (e.g., c. per giocattoli 'for toys', c. per oggetti di cancelleria 'for stationery items'). While such expressions might be interpretable and even plausible, they are not attested in usage and do not correspond to established members of the category, i.e., they do not qualify as exemplars stored in long-term memory. Additional examples of these generative patterns are provided in Tables 7 and 8 (cf. Appendix B.8). Overall, these examples illustrate how hallucinations often arise from systematic, though flawed, generalization strategies, revealing a gap between surface-level fluency and semantic grounding.

4.2 Analysis 2: Humans and LLMs Disagree on the Most Available Exemplars

In the second analysis, we compare the valid exemplars generated by the LLMs with human-generated exemplars. Specifically, we sort both human and LLM exemplars according to their availability score, which reflects the ease with which a word can be produced as a category member (§3). Table 2 reports the results of the intersection between the top-n (n = {1, 3, 5}) most available human-generated and machine-generated exemplars, with overlap computed regardless of the production order. The best results are observed for top-5 matches, with Nemo-12B reaching an overlap of 24% of the generated exemplars. The number of matches varies across categories (cf. Appendix B.7). The most significant overlap is observed within the categories FOODS (Nemo-12B: 37%, overall: 29%) and ANIMALS (Nemo-12B: 36%, overall: 29%). In contrast, the lowest overlap emerges within the categories BODY PARTS and FURNISHING (Nemo-12B: 16%, overall: 12%).

These lower scores may arise for two reasons. First, the model generates valid exemplars, sometimes even matching those produced by humans, but not the most available ones.
For example, the top-5 human-generated exemplars of cane 'dog' (labrador, pastore tedesco 'German shepherd', bassotto 'dachshund', chihuahua, golden retriever) only partially overlap with those generated by nemo-12B (pastore tedesco, golden retriever, beagle, labrador, husky siberiano 'Siberian husky'). Besides, bulldog is among the top-5 most available exemplars in five models, despite having a lower corpus frequency than other words (e.g., chihuahua, dalmatian). The variation among models suggests that there are no specific criteria (e.g., frequency) that determine the generation of one exemplar over another, implying a category organization that is essentially flat.

Secondly, some models produce incorrect exemplars: in some cases, meronyms are generated (i.e., polpaccio 'calf' as a type of gamba 'leg'); in others, the basic-level category is misinterpreted due to polysemy (i.e., the word braccio 'arm' refers both to a human body part and to an extension of something), resulting in nonsensical outputs.

Table 2: Matches among the top-n human and machine-generated most available exemplars.

Model          Top-1 Top-3 Top-5
llama-3.2-3B   0.09  0.13  0.14
llama-3.1-8B   0.14  0.18  0.20
llama-3.1-70B  0.18  0.20  0.21
mistral-7B     0.13  0.12  0.13
nemo-12B       0.25  0.24  0.24
mixtral-8x7B   0.18  0.19  0.19
llava-7B       0.12  0.13  0.15
idefics2-8B    0.08  0.10  0.10

Incorrect exemplar generation is
especially evident in vLMs. For example, idefics2-8B not only relies on compositional operations but also lists other types of trees (e.g., acacia, eucalyptus, maple as exemplars of abete 'fir'), failing to generate subordinate exemplars and generating basic-level exemplars instead.

5 Are LLMs Sensitive to Human Category Structure?

The comparative analyses of human- and LLM-generated exemplars revealed no significant overlap between these two sets. However, despite some noisy ad hoc exemplars, models also produce valid exemplars that humans did not recall. We use human data to build two additional classification tasks:

A. Category Induction: Given the 10 most available human-generated exemplars, select their basic/superordinate category;
B. Typicality Detection: Given the most and least available human-generated exemplars, identify the typical (i.e., most available) member of the basic category.

These tasks are designed to evaluate the model's consistency in representing categories and their exemplars using close-ended formats. Rather than generating exemplars, the model selects correct answers based on its perplexity score, making evaluation easier and more reliable.

5.1 SUBTASK A: Category Induction

Previous studies revealed that basic-level members of a category can elicit the activation of their corresponding superordinate categories in the mental lexicon (Barsalou, 1982; Ross and Murphy, 1999). While the tasks in §4 focused on exemplar generation, here we explore to what extent LLMs are able to identify the category to which an exemplar belongs.

Table 3: SUBTASK A – Accuracy for basic-level and superordinate category prediction at the aggregated level.

Model          Basic-level Superordinate
llama-3.2-3B   0.84        0.52
llama-3.1-8B   0.96        0.63
llama-3.1-70B  0.95        0.64
mistral-7B     0.89        0.59
nemo-12B       0.95        0.46
mixtral-8x7B   0.98        0.57
llava-7B       0.93        0.59
idefics2-8B    0.94        0.38
Specifically, we investigate whether subordinate-level members of a given category can activate their (i) basic and (ii) superordinate category in LLMs. This allows us to compare recall performance at different levels of the taxonomy, from the (more specific) basic to the (more general) superordinate categories, and to better investigate the organization of conceptual categories in the learned latent space of LLMs.

Setup. The task is structured as a classification task. Given an input sentence containing a sequence of subordinate-level exemplars, the model has to select the correct category that produced the listed exemplars. The category can be: (i) one of the 187 basic-level categories (e.g., abete 'fir', aereo 'plane'), or (ii) one of the 12 superordinate categories (e.g., pianta 'plant', veicolo 'vehicle'). We select up to the 10 most available human-generated exemplars for each basic-level concept. Each list is converted into a prompt of the form: "e1, e2, ..., e10 are types of {category}", where e_n denotes the n-th selected human-produced exemplar and category is a category name, either at the basic or the superordinate level. We then compute the model's perplexity for each pair and select the category associated with the sentence that has the lowest perplexity score.

Results. Overall, models obtain higher results when predicting the basic-level concept (e.g., abete 'fir') rather than the more abstract superordinate category (e.g., pianta 'plant'; cf. Table 3). This result is surprising, considering that the number of
superordinate categories is smaller (12 vs. 187 concept terms). A possible explanation is that models have seen the pair <exemplar, basic-level concept> more frequently than the pair <exemplar, superordinate-level concept>. In addition, most of the time the exemplar itself can contain the concept sub-string, e.g., abete di Natale ('Christmas tree') vs. ?pianta di Natale ('Christmas plant'). Interestingly, LLM performance varies across semantic domains: models score nearly perfectly on ANIMALS, KITCHEN, and VEHICLES, but perform poorly on FURNISHING, HOBBIES, and STATIONERY (cf. Appendix C). As expected, LLMs more effectively acquire taxonomic relations for categories shaped by encyclopedic knowledge (factual information typically learned through education or texts, e.g., "a lion is a mammal") than those grounded in commonsense knowledge (e.g., "domino is a game").

Table 4: SUBTASK B – Typicality accuracy for basic-level categories for the three coverage groupings.

Model          Low  Medium High
llama-3.2-3B   0.65 0.62   0.42
llama-3.1-8B   0.58 0.60   0.42
llama-3.1-70B  0.73 0.68   0.61
mistral-7B     0.50 0.57   0.47
nemo-12B       0.53 0.69   0.52
mixtral-8x7B   0.72 0.55   0.57
llava-7B       0.48 0.62   0.48
idefics2-8B    0.53 0.58   0.45

5.2 SUBTASK B: Typicality Prediction

One key aspect of category structure that has been extensively studied with LLMs is typicality (§2.2): some members of a category are considered more representative than others (e.g., robin vs. penguin as types of birds). Previous studies have found only a moderate correlation between human judgments and LLMs. In addition, their focus was on basic-level exemplars of superordinate categories.
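Both subtasks reduce to comparing the perplexities of template sentences. A minimal sketch with the LLM call abstracted away: score_fn stands in for a real perplexity computation (e.g., mean per-token negative log-likelihood), and the toy scorer below is purely illustrative.

```python
# Pick the candidate whose template sentence minimizes a perplexity-like
# score. score_fn is a stand-in for an actual LLM perplexity call; the toy
# scorer below just favours sentences containing an attested pair.

def choose(candidates, make_sentence, score_fn):
    """Return the candidate whose sentence gets the lowest score."""
    return min(candidates, key=lambda c: score_fn(make_sentence(c)))

attested = {("abete", "pianta"), ("cane", "animale")}  # toy knowledge

def toy_score(sentence):
    words = set(sentence.lower().replace(".", "").split())
    return 1.0 if any(a in words and b in words for a, b in attested) else 5.0

# Subtask A: category induction over candidate category names.
best = choose(
    ["pianta", "veicolo"],
    lambda cat: "abete bianco, abete rosso are types of " + cat + ".",
    toy_score,
)  # -> "pianta"
```

Subtask B applies the same comparison to just two sentences (most vs. least available exemplar), counting a success when the first one scores lower.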
In this subtask, we investigate whether, despite their misalignment with humans in generating the most available exemplars (§4), LLMs can still recognize that the most available item (e.g., bicchiere di vetro 'glass tumbler') is more typical than the least available one (e.g., bicchiere da shot 'shot glass') for a given category (e.g., bicchiere 'glass').

Setup. We group the 187 basic-level categories by the number of exemplars produced by humans into three groups: (i) low (up to 5 exemplars), (ii) medium (6–10 exemplars), and (iii) high productivity (more than 10 exemplars). This grouping allows us to test whether the internal dimension of the category impacts typicality detection results. For each basic-level concept, we then select the most available and the least available human-generated exemplars and evaluate the models' perplexity on the two sentences: "{1st exemplar} is a type of {concept}" vs. "{last exemplar} is a type of {concept}". Similarly to §5.1, a pair is considered a positive prediction if the perplexity of the first sentence is lower than that assigned to the second one.

Table 5: SUBTASK B – Typicality accuracy for basic-level categories, grouped by the absolute difference in exemplar availability.

Model          Low |∆| Medium |∆| High |∆|
llama-3.2-3B   0.47    0.58       0.61
llama-3.1-8B   0.45    0.50       0.57
llama-3.1-70B  0.59    0.61       0.70
mistral-7B     0.41    0.49       0.53
nemo-12B       0.43    0.49       0.69
mixtral-8x7B   0.49    0.55       0.69
llava-7B       0.42    0.47       0.61
idefics2-8B    0.46    0.47       0.49

Results. Overall, LLaMA-3.1-70B performs best across the three groupings, reaching 73% accuracy in the low-productivity setting (cf. Table 4), a good score compared to past studies. However, accuracy varies across groupings: as the number of human exemplars for a category
increases, LLMs are less likely to detect the typical item. This suggests that when humans provide fewer exemplars, the first one is cognitively dominant compared to the others, a distinction reflected in the model's perplexity scores. However, in richer categories, the distinctive cognitive attributes among exemplars diminish, resulting in lower LLM accuracies (cf. Appendix D).

Effect of Availability Differences. Additionally, we assess accuracy across groups defined by the absolute difference in availability (|∆|) between the most and least available exemplars. We categorize these differences into three levels: low ∆ for differences less than 0.2, high ∆ for differences greater than 0.4, and medium ∆ for all other cases. This grouping results in a balanced distribution of pairs (57, 75, and 55, respectively). Looking at the average results in Table 5, we observe that pairs with a higher typicality delta are easier to predict, yielding higher accuracy scores. For example, the best-performing model LLaMA-3.1-70B achieves almost a 20% increase when moving from the low to the high ∆ setting (and ∼30% on average across all the models). This additional analysis reveals that LLMs are sensitive to the internal structure of human basic-level categories: the smaller the variability in human availability, the more difficult it becomes for the model to identify the most typical items.

6 General Discussion and Conclusions

This study explored basic-level category organization in humans, who integrate linguistic and sensory information, and in LLMs, which rely solely on linguistic data. In a generation task, Italian speakers and various LLMs and vLMs produced lists of exemplars for 187 basic-level concrete categories. We hypothesized that the most frequent exemplars generated by models would align with those of humans, as subordinate concepts reflect specialized knowledge and are constrained by language.
Findings in §4 reveal a low alignment between model and human performance. However, comparative analyses show that some models (particularly LLaMA-3.1-70B) can still generate meaningful exemplars comparable to those produced by humans across many semantic domains. Interestingly, these models produce more exemplars than humans for technical and specialized categories that require access to encyclopedic knowledge (i.e., PLANTS): e.g., LLaMA-3.1-70B generates 26 real exemplars for orchidea 'orchid', while humans generated only 5. This ability points to a possible use of LLMs in automatically generating exemplars for large sets of concepts (i.e., for automatic ontology population), in line with similar findings for semantic feature production norms (Hansen and Hebart, 2022). However, our results also call for some caution. First, the models often generate hallucinations and incorrect exemplars, especially for categories where extralinguistic information plays a more critical role than linguistic data. This is especially evident in the BODY PARTS category, where conceptual confusion (piede di porco 'crowbar') or ad hoc instances (testa di cavallo 'horse head') are common. While frequency analysis can help reduce hallucinations, human annotation is needed to verify accuracy, at least at this taxonomic conceptual level. Secondly, LLMs do not show the same categorical organization as humans. The generated exemplars vary significantly across
models, with alignment to human responses below 25% (§4). Additional subtasks in §5 illustrate that models struggle to build a hierarchical conceptual organization like humans, limiting their ability to reason along the taxonomic axis (§5.1). While they perform well in basic-level category induction, they underperform in the superordinate category setting. Moreover, LLMs often fail to identify the most typical exemplar when a category includes multiple similarly available items (§5.2) but perform better when one exemplar clearly dominates in availability. These results suggest that (proto)typicality effects are harder to detect within basic-level categories, likely due to their relatively flat internal structure and the high number of shared attributes among subordinate exemplars. Finally, we found that vLMs still perform poorly in the exemplar generation task, in line with previous research (Vemuri et al., 2024), showing that text-based models align more closely with human typicality judgments.

Our study has several methodological implications worth mentioning. We provided a dataset of human-generated exemplars for basic-level concrete categories in Italian, along with statistical measures, extending Montefinese et al. (2012). Since existing Italian datasets often lack concepts spanning multiple taxonomic levels, this resource will be useful in cognitive psychology and AI research on semantic category structure. This need for comprehensive datasets becomes evident when comparing existing resources in other languages, such as English (e.g., Banks and Connell, 2023). Moreover, our study highlights the potential and limitations of LLMs in capturing human categorical knowledge at the subordinate level, in line with previous literature. Future work should explore how LLMs generate exemplars for superordinate categories (e.g., animals, plants) and whether they align more with human behaviour at this level.
Additionally, comparing results across languages could also reveal cultural influences on concept representation and potential biases in LLMs.

In conclusion, our results show that the organization of subordinate categories varies as a function of semantic domains in both humans and LLMs. Notably, the more extralinguistic or linguistic information is relevant to a given category, the more the performance of LLMs and humans diverges. These observations have practical implications for NLP systems, such as educational tools (e.g., vocabulary teaching, interactive learning apps), knowledge base population, and, generally, for improving category-aware language generation (i.e., chatbots that better interpret user intent by responding with the appropriate level of specificity).

Limitations

1. Cultural Biases: Models are trained on English and/or multilingual corpora, which may not reflect the lexical preferences of Italian speakers.

2. Methodology in §4.2: In the comparison between LLMs and human-generated exemplars, we used simple string matching, so abete di Natale 'Christmas fir' and abeti di Natale 'Christmas firs' are considered different strings. While this approach could count good strings as mismatches, the human judgments are manually normalized, and models prefer the singular form consistently. In conclusion, we believe that this approximation does not exclude too many possibly good exemplars.

3. Exclusion of GPT from analyses: We did not use GPT because we cannot access the perplexity values of the model. While some could argue that the latest GPT models could achieve better performance on the presented tasks, we prefer open models whose internal representations can be accessed.

Ethical Considerations

• We administered the exemplar generation task described in §3 to a total of 365 participants (48.5% women; 49.9% men; 1.6% non-binary; M age = 26.3; SD age = 3.76; age range 18–35) on Prolific. All participants were Italian native speakers and reported no language or attentional disorders. Participants were compensated with €1.80 for generating exemplars in a single list, with an average survey duration of 15 minutes. The data is anonymized to make identification of individuals impossible.

• Since the human data were collected in 2023 and never released, no LLM has been exposed to these stimuli, allowing us to test the emerging abilities of these models and their semantic knowledge.

• This research demonstrates the utility of language models as valuable tools in cognitive science and linguistics. However, it is crucial to acknowledge that these models acquire and produce language through mechanisms that differ significantly from human language processing. Consequently, extrapolating these findings directly to human mind organization can lead to potential risks and unintended consequences.

Acknowledgements

We would like to thank the anonymous reviewers for their feedback and comments. AP has been supported by the project "Word Embeddings: From Cognitive Linguistics to Language Engineering, and Back" (WEMB), funded by the Italian Ministry of University and Research (MUR) under the PRIN 2022 funding scheme (CUP B53D23013050006), and by the PNRR (Prot. IR0000013) "SoBigData.it: Strengthening the Italian RI for Social Mining and Big Data Analytics". GR, CV, and MB have been funded by the European Union (GRANT AGREEMENT: ERC-2021-STG-101039777).
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.

References

Briony Banks and Louise Connell. 2023. Category production norms for 117 concrete and abstract categories. Behavior Research Methods, 55(3):1292–1313.

Lawrence W Barsalou. 1982. Context-independent and context-dependent information in concepts. Memory & Cognition, 10(1):82–93.

Lawrence W Barsalou. 1983. Ad hoc categories. Memory & Cognition, 11:211–227.

Lawrence W Barsalou, Ava Santos, W Kyle Simmons, and Christine D Wilson. 2008. Language and simulation in conceptual processing. Symbols, Embodiment, and Meaning, pages 245–283.

Ruairidh M Battleday, Joshua C Peterson, and Thomas L Griffiths. 2020. Capturing human categorization of natural images by combining deep networks and cognitive models. Nature Communications, 11(1):5418.

Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5185–5198, Online. Association for Computational Linguistics.

Marianna Bolognesi, Christian Burgers, and Tommaso Caselli. 2020. On abstraction: decoupling conceptual concreteness and categorical specificity. Cognitive Processing, 21(3):365–381.

Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von
Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2022. On the opportunities and risks of foundation models.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.

Nichol Castro, Taylor Curley, and Christopher Hertzog. 2021. Category norms with a cross-sectional sample of adults in the United States: Consideration of cohort, age, and historical effects on semantic categories. Behavior Research Methods, 53:898–917.

Henri Cohen and Claire Lefebvre. 2005. Handbook of Categorization in Cognitive Science. Elsevier.

Charles P Davis and Eiling Yee. 2021. Building semantic memory from embodied and distributional language experience. Wiley Interdisciplinary Reviews: Cognitive Science, 12(5):e1555.

Luciano Floridi and Massimo Chiriatti. 2020. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30:681–694.

Fritz Günther, Marco Marelli, Sam Tureski, and Marco Alessandro Petilli. 2023. ViSpa (vision spaces): A computer-vision-based representation system for individual images and concept prototypes, with large-scale evaluation. Psychological Review, 130(4):896–934.

Lala Hajibayova. 2013. Basic-level categories: A review. Journal of Information Science, 39(5):676–687.

Hannes Hansen and Martin N Hebart. 2022. Semantic features of object concepts generated with GPT-3. In Proceedings of the Annual Meeting of the Cognitive Science Society.

Tom Heyman and Geert Heyman. 2019. Can prediction-based distributional semantic models predict typicality? Quarterly Journal of Experimental Psychology, 72(8):2084–2109. PMID: 30704340.

Tom Heyman and Geert Heyman. 2024. The impact
of ChatGPT on human data collection: A case study involving typicality norming data. Behavior Research Methods, 56(5):4974–4981.

Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and Joao Carreira. 2021. Perceiver: General perception with iterative attention. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 4651–4664. PMLR.

Miloš Jakubíček, Adam Kilgarriff, Vojtěch Kovář, Pavel Rychlý, and Vít Suchomel. 2013. The TenTen corpus family. In 7th International Corpus Linguistics Conference CL, pages 125–127. Valladolid.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B.

Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2024. Mixtral of experts. ArXiv, abs/2401.04088.

Carina Kauf, Anna A Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko, and Alessandro Lenci. 2023. Event knowledge in large language models: the gap between the impossible and the unlikely. Cognitive Science, 47(11):e13386.

Hugo Laurençon, Léo Tronchon, Matthieu Cord, and Victor Sanh. 2024. What matters when building vision-language models?

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning.
In Advances in Neural Information Processing Systems, volume 36, pages 34892–34916.

LlamaTeam. 2024. The Llama 3 herd of models.

Max M Louwerse. 2018. Knowing the meaning of a word by the linguistic and perceptual company it keeps. Topics in Cognitive Science, 10(3):573–589.

Gary Lupyan. 2012. Linguistically modulated perception and cognition: The label-feedback hypothesis. Frontiers in Psychology, 3:54.

Gary Lupyan and Molly Lewis. 2019. From words-as-mappings to words-as-cues: The role of language in semantic knowledge. Language, Cognition and Neuroscience, 34(10):1319–1337.

Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, and Evelina Fedorenko. 2024. Dissociating language and thought in large language models. Trends in Cognitive Sciences, 28(6):517–540.

Gary Marcus. 2020. The next decade in AI: Four steps towards robust artificial intelligence.

Kanishka Misra, Allyson Ettinger, and Julia Rayz. 2021. Do language models learn typicality judgments from text? In Proceedings of the Annual Meeting of the Cognitive Science Society, 43, pages 216–222.

Kanishka Misra, Julia Rayz, and Allyson Ettinger. 2023. COMPS: Conceptual minimal pair sentences for testing robust property knowledge and its inheritance in pre-trained language models. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2928–2949, Dubrovnik, Croatia. Association for Computational Linguistics.

Maria Montefinese, Ettore Ambrosini, Beth Fairfield, and Nicola Mammarella. 2012. Semantic memory: A
feature-based analysis and new norms for Italian. Behavior Research Methods, 45:440–461.

Gregory Murphy. 2002. The Big Book of Concepts. MIT Press.

Natividad Hernández-Muñoz, Cristina Izura, and Andrew W. Ellis. 2006. Cognitive aspects of lexical availability. European Journal of Cognitive Psychology, 18(5):730–755.

Animesh Nighojkar, Anna Khlyzova, and John Licato. 2022. Cognitive modeling of semantic fluency using transformers. arXiv preprint arXiv:2208.09719.

Joshua C Peterson, Joshua T Abbott, and Thomas L Griffiths. 2018. Evaluating (and improving) the correspondence between deep neural networks and human representations. Cognitive Science, 42(8):2648–2669.

Joseph Renner, Pascal Denis, Remi Gilleron, and Angèle Brunellière. 2023. Exploring category structure with contextual language models and lexical semantic networks. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2277–2290, Dubrovnik, Croatia. Association for Computational Linguistics.

Eleanor Rosch. 1975. Cognitive representations of semantic categories. Journal of Experimental Psychology: General, 104(3):192.

Eleanor Rosch. 1978. Principles of categorization. Cognition and Categorization. Erlbaum.

Eleanor Rosch, Carolyn B Mervis, Wayne D Gray, David M Johnson, and Penny Boyes-Braem. 1976. Basic objects in natural categories. Cognitive Psychology, 8(3):382–439.

Brian H Ross and Gregory L Murphy. 1999. Food for thought: Cross-classification and category organization in a complex real-world domain. Cognitive Psychology, 38(4):495–553.

Prisha Samadarshi, Mariam Mustafa, Anushka Kulkarni, Raven Rothkopf, Tuhin Chakrabarty, and Smaranda Muresan. 2024. Connecting the dots: Evaluating abstract reasoning capabilities of LLMs using the New York Times Connections word game. arXiv preprint arXiv:2406.11012.

Vít Suchomel, Jan Pomikálek, et al. 2012. Efficient web crawling for large text corpora.
In Proceedings of the Seventh Web as Corpus Workshop (WAC7), pages 39–43.

Neha Upadhyay, Kritika Mittal, and Sashank Varma. 2022. Typicality gradients in computer vision models. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 44.

Rens van Hoef, Louise Connell, and Dermot Lynott. 2023. The effects of sensorimotor and linguistic information on the basic-level advantage. Cognition, 241:105606.

Siddhartha K Vemuri, Raj Sanjay Shah, and Sashank Varma. 2024. How well do deep learning models capture human concepts? The case of the typicality effect. Proceedings of the Annual Meeting of the Cognitive Science Society, 46:5160–5167.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.

Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 11975–11986.

A STUDY 1

A.1 Metrics

In this section, we define the metrics described in Section 3 used to evaluate the exemplars obtained from
human participants.

Exemplar Dominance

ED(E) = P(E|C) = N(E∩C) / N(C)   (1)

where N(E∩C) is the number of participants who produced the exemplar E in response to the concept C, and N(C) is the number of participants elicited with C.

Mean Rank Order

MRO(E) = (1 / N(C)) · Σ_{i=1}^{N(C)} r_i(E|C)   (2)

First Occurrence Value

FOV(E, C) = N_first(E) / N(C)   (3)

Exemplar Availability

EA(E, C) = Σ_{p=1}^{n} (f_{pi} / N) · e^{−2.3·((p−1)/(n−1))}   (4)

where p is the rank of the produced exemplar E, n is its lowest rank obtained across multiple participants, f_{pi} is the number of participants who produced the exemplar i at the same position p, and N is the total number of participants who have seen the category C.

B STUDY 2

B.1 Models Description

In this section, we provide the details on the pre-trained language models listed in §4. All models are open-source and available via HuggingFace[5].

B.2 Unimodal Language Models

LLaMA-3.1 (LlamaTeam, 2024) is a collection of pre-trained auto-regressive large language models openly released by Meta AI. In our experiments, we rely on the instruction-based versions, which are fine-tuned for dialogue use cases with multilingual input. We assess the performance of both the small version (8B parameters[6]) and the larger one (with 70B parameters[7]). We avoid testing the extra-large version (405B parameters) due to computational constraints. All models are first pre-trained on a mix of publicly available online data, then supervised fine-tuned (SFT), and further aligned with human preferences via RLHF.

LLaMA-3.2 is the next iteration of the LLaMA models. With respect to version 3.1, they differ in model sizes (1B, 3B, 11B, and 90B parameters) and multimodal capabilities. However, at the moment of writing, the multimodal version of LLaMA-3.2 is not accessible in the EU, due to European regulations[8]. For this reason, we are not able to provide any insight about the multimodal versions.
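The four measures defined in §A.1 can be computed directly from the ordered per-participant response lists. Below is a minimal Python sketch; the response lists are toy data, and the implementation follows our reading of Eqs. (1)–(4), not the authors' released code:

```python
import math

# Availability measures from §A.1 for one concept C, given each
# participant's ordered exemplar list. Assumes the queried exemplar
# was produced by at least one participant.
def exemplar_metrics(responses, exemplar):
    N = len(responses)                      # participants who saw C
    containing = [r for r in responses if exemplar in r]
    ed = len(containing) / N                # Exemplar Dominance, Eq. (1)
    ranks = [r.index(exemplar) + 1 for r in containing]
    mro = sum(ranks) / N                    # Mean Rank Order, Eq. (2)
    fov = sum(r[0] == exemplar for r in responses) / N  # Eq. (3)
    # Exemplar Availability, Eq. (4): each rank p is weighted by
    # e^(-2.3 * (p-1)/(n-1)), where n is the worst observed rank.
    n = max(ranks)
    ea = 0.0
    for p in range(1, n + 1):
        f_p = ranks.count(p)
        w = 1.0 if n == 1 else math.exp(-2.3 * (p - 1) / (n - 1))
        ea += (f_p / N) * w
    return {"ED": ed, "MRO": mro, "FOV": fov, "EA": ea}

# Toy responses for the concept 'animale', not the collected data:
responses = [["cane", "gatto"], ["gatto", "cane"], ["cane"]]
print(exemplar_metrics(responses, "cane"))
```

Exemplars produced early by many participants thus receive EA close to 1, while late, rare productions are exponentially down-weighted.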
Concerning the assessed versions, we limit ourselves to the small (3B) model.[9]

Mistral (Jiang et al., 2023) is a pre-trained auto-regressive large language model released by Mistral AI[10]. The model leverages Grouped-Query Attention and Sliding Window Attention to improve inference time and memory requirements, and to enable handling longer input sequences.

Mixtral-8x7B is an ensemble mixture-of-experts model[11] of eight 7B-parameter models developed by Mistral AI. The individual models are trained with Grouped-Query Attention (GQA) and Sliding Window Attention (SWA) mechanisms, enabling efficient handling of long sequences and improving inference speed. A routing system takes care of distributing the input to the appropriate experts. This mechanism increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. For our experiments, we use the standard instruction-tuned version of Mistral-7B, focusing on its capacity for multilingual inputs and dialogue generation.

[5] https://huggingface.co/
[6] https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
[7] https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct
[8] https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct
[9] https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
[10] https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
[11] https://huggingface.co/mistral/Mistral-7B-Instruct

NeMo is a 12B model[12] designed for multilingual applications. It is trained on function calling, has a large context window, and is particularly strong in English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. Mistral NeMo uses a new tokenizer, Tekken, based on Tiktoken, that
was trained on more than 100 languages, and compresses natural language text and source code more efficiently than the SentencePiece tokenizer used in previous Mistral models.

[12] https://huggingface.co/mistralai/Mistral-Nemo-Base-2407

B.3 Multimodal Language Models

LLaVA (Liu et al., 2023) is a multimodal model that integrates visual understanding with language capabilities by combining a vision encoder (e.g., CLIP's Vision Transformer) with a large language model (e.g., LLaMA). It is designed for open-ended vision-language tasks, such as image captioning, visual question answering, and reasoning about images. The model is trained following a two-stage training approach: first, the vision and the language encoders are aligned by training a projection layer that maps visual features into the LLM's embedding space. Second, the model undergoes an instruction tuning phase, using curated vision-language datasets to improve coherence and accuracy in responses.

Idefics2 (Laurençon et al., 2024) is the result of a thorough ablation of the design choices available for vLM pre-training. To encode visual features in the LLM's embedding space, Idefics2 leverages a SigLIP vision encoder (Zhai et al., 2023) followed by a learned Perceiver pooling (Jaegle et al., 2021) and a multi-layer perceptron projection. The pooled sequence is then concatenated with the text embeddings to obtain an interleaved sequence of images and texts. The model is trained according to the usual vLM pipeline, with a first stage focusing on the alignment of the two modality embedders, followed by a second instruction-tuning stage.

B.4 Perplexity Computation

Perplexity is computed according to the following formula:

PPL(X) = exp( −(1/t) · Σ_{i=1}^{t} log p_θ(x_i | x_{<i}) )   (5)

where x_i is the target expression (i.e., either the basic or superordinate category, in SUBTASK A, or the subordinate-level exemplar, in SUBTASK B) and x_{<i} is the fixed prompt.
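With target-token masking, Eq. (5) reduces to exponentiating the mean negative log-probability of the target tokens. A self-contained sketch, where the probabilities are toy stand-ins for p_θ(x_i | x_{<i}) rather than real model outputs:

```python
import math

# Perplexity of Eq. (5) restricted to the target span: prompt tokens
# are masked out and only target tokens contribute to the average.
def target_perplexity(token_probs, target_mask):
    logs = [math.log(p) for p, m in zip(token_probs, target_mask) if m]
    return math.exp(-sum(logs) / len(logs))  # exp of mean cross-entropy

probs = [0.9, 0.8, 0.5, 0.25]  # per-token probabilities under the model
mask  = [0,   0,   1,   1]     # last two tokens form the target expression
print(target_perplexity(probs, mask))  # exp(-(ln 0.5 + ln 0.25)/2) ≈ 2.83
```

Since §5 only compares perplexities produced by the same model, this unnormalized value is sufficient for the binary accuracy scores.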
In our settings, this is equivalent to the exponentiation of the cross-entropy loss. We compute the perplexity for the target tokens only (x_i), and mask the non-target tokens (x_{<i}) accordingly. Notice that in our experiments the perplexity is used to compare outputs of the same model; therefore, normalization is not required to compare the binary-accuracy score (i.e., the evaluation metric for SUBTASK A and B).

B.5 Prompting Strategy

To obtain a list of exemplars (i.e., basic-level concepts) from an LLM, we use the following Italian prompt:

<s>[INST] Data una parola che denota una concetto, elenca tutta i 'tipi di' quel concetto. Elenca solo i nomi delle entità. Per esempio per il concetto 'elettrodomestico' elenca: frullatore, aspirapolvere, tostapane, lavatrice. Ora fai lo stesso per il concetto '<CONCEPT>' [/INST] Questa è una lista 'tipi di' che appartengono al concetto '<CONCEPT>':

where <CONCEPT> is replaced with the eliciting concept. For the non-Italian reader, we provide an English translation of the previous prompt:

<s>[INST] Given a word denoting a concept, list all of the 'kinds of' of the given concept. List only words denoting entities. For example, for the concept 'electric appliance' list: 'mixer', 'vacuum cleaner', 'toaster', 'washing machine'. Now do the same for the concept '<CONCEPT>':

B.6 Model-specific Sampling Parameters

Regarding hyperparameters, we set top-p to 0.75 to limit the long tail of low-probability tokens that may be sampled, while frequency and repetition penalties are set to 0.

B.7 Top-5 Matches

Table 6 shows the percentage of matches among the top-5 human-produced and LLM-generated exemplars, reporting individual accuracy for each of the 12 superordinate categories.

B.8 Generated Exemplars and Hallucinations

In Table 7 we report the exemplars generated by LLaMA-3.1-70B, the best-performing model, for the 12 superordinate categories. For each of the 12 superordinate categories, we select the basic-level concept for which humans have generated the greatest number of exemplars. In Table 8, we report the exemplars generated for the 12 basic-level concepts that produced the greatest number of unattested occurrences according to the Italian corpus ItTenTen.

In our study, we automatically identify low-frequency terms via the Italian corpus ItTenTen. By analyzing exemplars with an absolute frequency equal to zero, we can gain deeper insight into hallucination generation in the exemplar generation task. We divide unattested exemplars into false negatives (e.g., exemplars for which we retrieved a zero frequency due to misspellings or morphosyntactic issues) and hallucinations. Through qualitative analysis, we observe several recurring patterns and categorize most of the hallucinations into the following groupings: ad-hoc instances, nonsensical, foreign-language based, conceptual confusion, and imitation-based.

Ad-hoc Instances: These instances reflect the model's ability to creatively compose category-consistent yet ungrounded expressions, relying on syntactic and semantic cues rather than empirical knowledge. As such, ad hoc constructions are generated "on the fly" to fit perceived communicative goals, but lack the frequency-based support or conventionalization required to qualify as exemplars stored in long-term memory.
Some examples are: MAGLIA 'a punto catenella' (chain-stitched KNITWEAR), 'a punto scritto a rombi' (diamond-shape-stitched KNITWEAR), GALLO 'della giungla verde' (COCK of the green jungle), 'della giungla rosso' (red COCK of the jungle), CASSETTIERA 'per giocattoli' (toy DRAWER), 'per attrezzi' (tool DRAWER), 'da corridoio' (hallway DRAWER).

Nonsensical: Expressions that are grammatically well-formed but semantically incoherent, implausible, or internally contradictory, often resulting from incongruous or incompatible feature combinations. Some examples are: GERANIO 'a foglie di quercia' (GERANIUM with oak leaves), 'a foglie di rosmarino' (GERANIUM with rosemary leaves); CRUCIVERBA 'a parole sovrapposte' (CROSSWORD with overlapping words), 'a parole crociate' (CROSSWORD with crossed words); TRATTORE 'a cingoli in acciaio' (TRACTOR with steel tank tracks); GALLO 'cedrone giapponese' (Japanese capercaillie COCK).

Foreign-Language Based: Refers to expressions that denote a real-world referent conceptualized in a foreign language with respect to Italian. For example, GALLO 'di Crèvecœur' (Crèvecœur CHICKEN) has no attested translation in Italian.

Conceptual Confusion: Cases in which the model misinterprets the intended sense or category of a lexical item, leading to the generation of exemplars that belong to a different semantic domain. For example, when prompted with margherita as a flower (i.e., 'daisy'), the model generates d'Austria ('of Austria'), referencing Margherita d'Austria (Margaret of Parma, a historical figure[13]), and d'Ungheria ('of Hungary'), referencing Margherita d'Ungheria (Saint Margaret of Hungary[14]).

Imitation Based: In this case, LLMs replicate the surface-level syntactic or morphological structure of a valid, attested exemplar, leading to the overgeneralization of that structure across subsequent, unattested or spurious exemplars. This imitation is often form-driven rather than grounded in semantic plausibility or real-world usage. This phenomenon typically arises when a salient exemplar introduces a productive or familiar template, which the model then extends combinatorially without regard for corpus evidence or conceptual appropriateness. For instance, the attested exemplar TERRAZZO 'alla veneziana' (Venetian PAVEMENT) serves as a template NOUN + ADJECTIVE (ITALIAN LOCATION) for generating further expressions like terrazzo genovese, milanese, bergamasca, pavese, fiorentina, none of which are attested or conventional within the category. Similarly, for the concept CANDELABRO 'a 5 bracci/a', the syntactic structure 'a N bracci/a' is reiterated multiple times with increasing numbers of arms.

[13] https://en.wikipedia.org/wiki/Margaret_of_Parma
[14] https://en.wikipedia.org/wiki/Margaret_of_Hungary_(saint)

              ANIMALS  BODY PARTS  CLOTHES  FOODS  FURNISHING  FURNITURE  HOBBIES  HOUSING  KITCHEN  PLANTS  STATIONERY  VEHICLES  avg
llama-3.2-3B    0.24     0.13       0.09    0.33     0.11       0.13      0.08     0.10     0.12    0.06      0.06       0.21    0.14
llama-3.1-8B    0.32     0.13       0.09    0.36     0.20       0.24      0.18     0.19     0.17    0.15      0.13       0.25    0.20
llama-3.1-70B   0.35     0.12       0.15    0.29     0.19       0.28      0.23     0.19     0.15    0.18      0.18       0.18    0.21
mistral-7B      0.25     0.14       0.07    0.24     0.03       0.09      0.14     0.13     0.13    0.16      0.08       0.13    0.13
nemo-12B        0.36     0.16       0.26    0.37     0.16       0.31      0.26     0.18     0.23    0.24      0.25       0.15    0.24
mixtral-8x7B    0.27     0.11       0.18    0.25     0.19       0.20      0.23     0.19     0.18    0.18      0.21       0.14    0.19
llava-7B        0.32     0.09       0.11    0.28     0.06       0.11      0.15     0.10     0.10    0.22      0.10       0.15    0.15
idefics2-8B     0.25     0.09       0.06    0.25     0.03       0.04      0.11     0.03     0.03    0.10      0.03       0.21    0.10
category avg    0.29     0.12       0.12    0.29     0.12       0.17      0.17     0.13     0.13    0.16      0.13       0.17    0.17

Table 6: Percentage of matches among the top five most available exemplars.
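The frequency screen described in §B.8 — flagging generated exemplars with zero absolute corpus frequency as hallucination candidates, to be manually sorted into false negatives and genuine hallucinations — can be sketched as follows. The frequency table is a toy stand-in for an actual ItTenTen lookup, and its entries are illustrative:

```python
# Toy frequency table standing in for ItTenTen absolute frequencies;
# values and the zero-frequency labels are illustrative assumptions.
TOY_FREQ = {
    "pastore tedesco": 15321,
    "terrazzo alla veneziana": 412,
    "terrazzo genovese": 0,      # imitation-based hallucination
    "gallo di crèvecœur": 0,     # foreign-language based
}

def hallucination_candidates(exemplars, freq):
    """Return generated exemplars unattested in the corpus.
    Items missing from the table default to frequency 0 (unattested)."""
    return [e for e in exemplars if freq.get(e.lower(), 0) == 0]

gen = ["pastore tedesco", "terrazzo alla veneziana", "terrazzo genovese"]
print(hallucination_candidates(gen, TOY_FREQ))  # ['terrazzo genovese']
```

The resulting candidate list still needs the manual pass described above, since zero frequency also covers misspellings and morphosyntactic variants (false negatives) rather than hallucinations only.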
C SUBTASK A

In this section, we report the in-depth results for the experiment described in Section 5.1. Tables 9a and 9b report individual accuracy for each of the 12 superordinate categories for basic-level and superordinate-level category prediction, respectively.

D SUBTASK B

In the following tables, we report individual accuracy for each of the 12 superordinate categories for SUBTASK B. Results are grouped into three blocks according to the number of exemplars generated by the human subjects: (i) low coverage (up to 5 exemplars; Table 10a), (ii) medium coverage (6-10 exemplars; Table 10b), and (iii) high coverage (more than 10 exemplars; Table 10c). Note that the columns containing 'na' values are the result of the frequency-based grouping. For example, we do not have any basic-level concept belonging to the super-ordinate category of plants that elicited a high number of exemplars in the human experimental phase; hence the empty column in Tables 10a and 10c.

D.1 SUBTASK B: Typicality Variation by Availability Score

In this section, we report the results for the typicality prediction experiments described in Section 5.2 by aggregating the results along the availability score. Specifically, we group results according to the absolute difference between the availability score of the most-available exemplar and the availability score of the least-available one. The availability score is computed on
the human experiment's results.

ANIMALS Cane (dog) | BODY PARTS Capelli (hair) | CLOTHES Scarpa (shoe) | FOODS Pasta (pasta) | FURNISHING Vaso (vase) | FURNITURE Sedia (chair)
1 pastore tedesco | riccio | stivale | spaghetti | di fiori | poltrona
2 segugio | ricci | sandalo | fettuccine | di rame | a dondolo
3 rottweiler | afro | ciabatta | penne | di cristallo | a rotelle
4 alano | ondulato | da ballo | farfalle | di ceramica | sgabello
5 dobermann | crespo | anfibio | tortellini | di porcellana | pieghevole
6 levriero | riccio afro | stivaletto | rigatoni | di terracotta | sdraio
7 corso | liscio lucido | da trekking | cannelloni | di vetro | da giardino
8 pinscher | liscio | da ginnastica | ravioli | di metallo | da ufficio
9 boxer | ondulati | da calcio | gnocchi | urna | a sdraio
10 beagle | crespi | da tennis | maccheroni | di legno | da bar
11 poodle | ricciolino | zoccolo | lasagne | di marmo | da ristorante
12 pug | liscio opaco | da sci | vermicelli | di plastica | da spiaggia
13 dalmata | mosso | mocassino | tagliatelle | di argento | reclinabile
14 bulldog | mossi | da ciclismo | fusilli | di oro | per bambini
15 lupo | lisci | da danza classica | linguine | di ottone | a sacco
16 shih tzu | crespo lucido | da basket | ditalini | di pietra | a schienale alto
17 pitbull | ondulato lucido | da neve | pappardelle | di argilla | a schienale basso
18 basset hound | mosso lucido | da danza | orecchiette | di notte | pouf
19 chihuahua | riccio afro lucido | da calcetto | conchiglie | di bronzo | panchina
20 collie | riccio lucido | da equitazione | lasagna | greco | a braccioli

HOBBIES Libro (book) | HOUSING Camera (room) | KITCHEN Pentola (pan) | PLANTS Margherita (daisy) | STATIONERY Foglio (sheet) | VEHICLES Automobile (car)
1 romanzo | da letto | a pressione | comune | di carta | berlina
2 saggio | oscura | casseruola | dei prati | di via | autocarro
3 dizionario | di equilibrio | di ghisa | di savoia | di alluminio | autobus
4 atlante | d'albergo | padella | pizza | di rame | camion
5 enciclopedia | iperbarica | marmitta | di lorena | di calcolo | minivan
6 agenda | di sicurezza | wok | di angoulême | di stile | monovolume
7 manuale | di combustione | di acciaio | di borgogna | elettronico | spider
8 almanacco | a gas | a vapore | di fiandra | di ruta | suv
9 biografia | di commercio | di rame | di scozia | di plastica | pick-up
10 fumetto | di decompressione | di terracotta | di provenza | di piombo | cabriolet
11 trattato | di scoppio | di ceramica | di parma | di registro | station wagon
12 diario | di refrigerazione | di alluminio | di valois | di viti | fuoristrada
13 catalogo | di consiglio | antiaderente | tudor | di stagno | furgone
14 novella | di compensazione | di pietra ollare | d'ungheria | di ottone | citycar
15 autobiografia | di controllo | coccotte | a fiori doppi | di rame berillifero | furgoncino
16 compendio | di manovra | elettrica | d'austria | di rame fosforoso | coupé
17 raccolta | di decantazione | in rame | a fiori giganti | di rame arsenicale | hatchback
18 per bambini | di carico | in pietra ollare | a fiori piccoli | di stagnola | pulmino
19 racconto | di regia | paiolo | a fiori colorati | di alluminio stagnato | autovettura
20 monografia | mortuaria | calderone | a fiori bianchi | di lavoro | roadster

Table 7: Up to 20 exemplars generated by LLaMA-3.1-70B (the best-performing model in terms of valid exemplars generated), sorted by availability score. For each of the 12 superordinate categories (in UPPERCASE), we select the basic-level category (in bold) for which humans have generated the greatest amount of exemplars. Cells with a light-blue background indicate exemplars not produced by the human study group but still considered valid, with more than 15 occurrences in the ItTenTen corpus. Exemplars with lower frequency are denoted by a light-yellow background. A light-red background indicates unattested exemplars, which are regarded as hallucinations.

ANIMALS Gallo (cock) | BODY PARTS Spalla (shoulder) | CLOTHES Maglia (sweater) | FOODS Latte (milk) | FURNISHING Candelabro (candelabra) | FURNITURE Cassettiera (dresser)
1 cedrone | a sbalzo | a coste | di cocco | da tavolo | da ufficio
2 bankiva | a volant | a punto croce | di soia | a sospensione | da cucina
3 silvestre | a bretella | a righe | di capra | da terra | da bagno
4 nero | a bretelle | a losanghe | di mucca | da parete | da notte
5 di banca | a botte | rasata | di pecora | a 5 bracci | da camera da letto
6 di wallich | a spigolo | a uncinetto | di bufala | a 3 braccia | per giocattoli
7 da combattimento | a pizzo | a punto | di avena | a stelo | da ingresso
8 di sonnerat | a cuscino | a tubolare | di arachidi | a 5 braccia | da comodino
9 della giungla | all'americana | a rombi | di mandorla | a 7 braccia | per attrezzi
10 cedrone giapponese | a kimono | a cavi | di riso | a 9 braccia | da scrivania
11 di faverolles | a punta | a fantasia | di cammello | a 7 bracci | per oggetti di cancelleria
12 di houdan | a sbuffo | a doppia punta | di nocciole | a 9 bracci | da corridoio
13 della malesia | a frangia | a punto catenella | di anacardi | a 11 bracci | da esterno
14 della giungla grigio | a pizzo di sanok | a punto lino | di quinoa | a 13 bracci | da soggiorno
15 di crèvecoeur | a pizzo di lefkara | a punto scritto | di mandorle | da mensola |
16 della giungla verde | a latticciolo | a punto raso | di orzo | da camino |
17 della giungla rosso | a piquet | a punto legaccio | di semi di lino | da altare |
18 di jungla | a pizzo di burano | a punto scritto a rombi | di grano | tripode |
19 di borneo | a pizzo di gorizia | a punto reale | di semi di sesamo | da chiesa |
20 di delacour | a pizzo ricamato | a punto rovescio | di semi di girasole | a più bracci |

HOBBIES Cruciverba (crossword) | HOUSING Terrazzo (terrace) | KITCHEN Mestolo (ladle) | PLANTS Geranio (geranium) | STATIONERY Colla (glue) | VEHICLES Trattore (tractor)
1 classico | alla veneziana | forato | a foglie di quercia | a caldo | agricolo
2 enigmistica | a sbalzo | da minestra | geranium maculatum | vinilica | cingolato
3 a schema variabile | alla romana | lungo | a fioritura continua | a freddo | a ruote
4 a schema fisso | pensile | a buco | a foglie di vite | a base di lattice | stradale
5 a schema libero | alla genovese | da zuppa | a foglie di betulla | a base di gomma | a cingoli in gomma
6 per bambini | alla milanese | da cucina | a foglie di rosmarino | a base di resina | articolato
7 tematico | alla bergamasca | da gelato | a foglie di alloro | a base di silicio | a cingoli
8 per adulti | a livello | per mescolare | a foglie di felce | epossidica | telescopico
9 a schema personalizzato | alla pavese | per servire | a foglie di platano | per legno | a quattro ruote motrici
10 a schema geometrico | panoramico | per gelato | a fioritura estiva | per carta | a ruote motrici
11 a schema logico | coperto | da legno | a fioritura primaverile | per plastica | a cingoli motrici
12 a schema numerico | fiorito | da salsa | d'appartamento | acrilica | agricolo cingolato
13 a parole sovrapposte | scoperto | da metallo | d'altura | a base di silano | a cingoli motrici 4x2
14 a parole nascoste | giardino | da risotto | cespuglioso | a base di silice | a cingoli motrici 4x4
15 a parole crociate | solarium | da silicone | bicolore | per tessuti | a ruote motrici 4x2
16 a definizioni consecutive | adiacente | per impastare | aquilegifolium | a base di silicato | a ruote motrici 4x4
17 a definizioni incrociate | alla fiorentina | per dosare | annuale | per metalli | a ruote anteriori sterzanti
18 a tema libero | | per condire | alpino | per vetro | a ruote posteriori sterzanti
19 a figure | | cucchiaio | a foglia rossa | a base di solvente | a due ruote motrici
20 con immagini | | a nido d'ape | geranium phaeum | a base d'acqua | a cingoli in acciaio

Table 8: Up to 20 exemplars generated by LLaMA-3.1-70B (the best-performing model in terms of valid exemplars generated), sorted by availability score. We select the basic-level categories that produced the highest number of hallucinations, i.e., expressions unattested in the ItTenTen corpus. For the colouring rationale, see Table 7.
ANIMALS BODY PARTS CLOTHES FOODS FURNISHING FURNITURE HOBBIES HOUSING KITCHEN PLANTS STATIONERY VEHICLES avg
llama-3.2-3B 0.76 0.81 0.82 0.67 0.95 0.92 0.87 0.73 0.83 0.94 0.94 0.8 0.84
llama-3.1-8B 1.0 0.94 1.0 0.93 1.0 0.92 0.93 0.93 0.92 1.0 1.0 1.0 0.96
llama-3.1-70B 1.0 0.94 0.94 1.0 1.0 0.92 0.93 0.93 0.92 0.94 1.0 0.93 0.95
mistral-7B 0.94 1.0 0.76 0.87 1.0 0.92 0.93 0.8 0.75 0.94 0.88 0.93 0.89
nemo-12B 0.94 1.0 1.0 1.0 1.0 0.83 1.0 1.0 0.92 0.88 0.94 0.93 0.95
mixtral-8x7B 0.94 1.0 0.94 1.0 1.0 1.0 1.0 1.0 1.0 0.94 1.0 1.0 0.98
llava-7B 0.94 1.0 0.82 0.67 1.0 0.92 0.93 0.93 1.0 0.94 1.0 1.0 0.93
idefics2-8B 0.88 1.0 0.88 0.8 1.0 0.92 1.0 0.93 1.0 0.94 1.0 0.93 0.94
category avg 0.93 0.96 0.90 0.87 0.99 0.92 0.95 0.91 0.92 0.94 0.97 0.94 0.93
(a) Accuracy for basic-level category prediction.

ANIMALS BODY PARTS CLOTHES FOODS FURNISHING FURNITURE HOBBIES HOUSING KITCHEN PLANTS STATIONERY VEHICLES avg
llama-3.2-3B 0.94 0.12 0.71 0.07 0.0 0.75 0.07 0.8 1.0 0.81 0.0 0.93 0.52
llama-3.1-8B 1.0 0.81 0.76 0.2 0.0 0.92 0.13 0.8 1.0 0.94 0.0 1.0 0.63
llama-3.1-70B 1.0 0.69 0.35 0.4 0.0 1.0 0.07 0.93 1.0 0.88 0.44 0.93 0.64
mistral-7B 0.94 0.62 0.94 0.33 0.32 0.92 0.0 0.4 1.0 0.56 0.0 1.0 0.59
nemo-12B 0.06 0.81 0.12 0.0 0.0 1.0 0.07 0.2 1.0 0.75 0.5 1.0 0.46
mixtral-8x7B 1.0 0.94 0.06 0.47 0.0 0.83 0.13 0.6 1.0 0.75 0.06 1.0 0.57
llava-7B 0.88 0.88 0.76 0.33 0.11 0.83 0.13 0.67 1.0 0.5 0.0 1.0 0.59
idefics2-8B 0.88 0.0 0.12 0.6 0.0 0.67 0.0 0.53 1.0 0.06 0.0 0.67 0.38
category avg 0.84 0.61 0.48 0.30 0.05 0.86 0.08 0.62 1.00 0.66 0.12 0.94 0.53
(b) Accuracy for superordinate category prediction.

Table 9: SUBTASK A - Accuracy for category prediction at basic and super-ordinate category level.

ANIMALS BODY PARTS CLOTHES FOODS FURNISHING FURNITURE HOBBIES HOUSING KITCHEN PLANTS STATIONERY VEHICLES avg
llama-3.2-3B 0.38 0.60 na na 0.60 0.50 0.67 0.75 na 0.77 1.00 0.60 0.65
llama-3.1-8B 0.38 0.80 na na 0.60 0.25 0.33 0.75 na 0.54 1.00 0.60 0.58
llama-3.1-70B 0.38 1.00 na na 0.80 0.75 0.67 0.75 na 0.85 1.00 0.40 0.73
mistral-7B 0.62 0.60 na na 0.80 0.50 0.33 0.00 na 0.54 0.50 0.60 0.50
nemo-12B 0.25 0.40 na na 0.60 0.50 0.67 0.75 na 0.69 0.50 0.40 0.53
mixtral-8x7B 0.50 1.00 na na 0.80 0.50 0.67 1.00 na 0.69 0.50 0.80 0.72
llava-7B 0.75 0.40 na na 0.80 0.25 0.33 0.25 na 0.46 0.50 0.60 0.48
idefics2-8B 0.62 0.40 na na 0.80 0.50 0.33 0.00 na 0.54 1.00 0.60 0.53
category avg 0.48 0.65 na na 0.72 0.47 0.50 0.53 na 0.63 0.75 0.57 0.59
(a) Low coverage basic-level categories.

ANIMALS BODY PARTS CLOTHES FOODS FURNISHING FURNITURE HOBBIES HOUSING KITCHEN PLANTS STATIONERY VEHICLES avg
llama-3.2-3B 0.50 0.33 1.00 0.00 0.67 1.00 0.75 0.67 0.67 0.80 0.67 0.33 0.62
llama-3.1-8B 0.75 0.44 0.80 0.50 0.44 0.60 0.75 0.83 0.33 0.60 0.78 0.33 0.60
llama-3.1-70B 0.75 0.33 1.00 1.00 0.67 0.60 0.75 0.67 0.33 0.80 0.78 0.50 0.68
mistral-7B 0.50 0.44 0.80 1.00 0.56 0.60 0.75 0.67 0.50 0.20 0.33 0.50 0.57
nemo-12B 0.75 0.56 0.80 1.00 0.67 1.00 0.75 0.50 0.67 0.80 0.44 0.33 0.69
mixtral-8x7B 0.75 0.56 1.00 0.50 0.67 0.40 0.75 0.33 0.67 0.20 0.33 0.50 0.55
llava-7B 0.25 0.33 0.80 1.00 0.56 0.80 0.75 0.67 0.33 0.80 0.44 0.67 0.62
idefics2-8B 0.25 0.22 0.80 1.00 0.67 0.80 0.75 0.50 0.17 0.80 0.44 0.50 0.58
category avg 0.56 0.40 0.88 0.75 0.61 0.72 0.75 0.60 0.46 0.62 0.53 0.46 0.61
(b) Medium coverage basic-level categories.
ANIMALS BODY PARTS CLOTHES FOODS FURNISHING FURNITURE HOBBIES HOUSING KITCHEN PLANTS STATIONERY VEHICLES avg
llama-3.2-3B 0.60 0.50 0.58 0.62 0.20 0.33 0.38 0.40 0.17 na 0.40 0.50 0.42
llama-3.1-8B 0.60 0.00 0.42 0.54 0.60 0.00 0.50 0.40 0.50 na 0.60 0.50 0.42
llama-3.1-70B 0.40 1.00 0.83 0.77 0.60 0.67 0.50 0.40 0.67 na 0.40 0.50 0.61
mistral-7B 0.20 0.50 0.50 0.62 0.40 0.33 0.38 0.40 0.67 na 0.40 0.75 0.47
nemo-12B 0.60 1.00 0.33 0.69 0.80 0.33 0.12 0.40 0.50 na 0.40 0.50 0.52
mixtral-8x7B 0.80 0.00 0.50 0.62 0.80 0.67 0.25 0.20 0.67 na 0.80 1.00 0.57
llava-7B 0.60 0.50 0.42 0.77 0.40 0.33 0.25 0.40 0.50 na 0.40 0.75 0.48
idefics2-8B 0.80 0.50 0.42 0.69 0.40 0.00 0.25 0.40 0.33 na 0.40 0.75 0.45
category avg 0.57 0.50 0.50 0.66 0.52 0.33 0.33 0.38 0.50 na 0.48 0.66 0.49
(c) High coverage basic-level categories.

Table 10: SUBTASK B - Typicality Accuracy at different coverage of basic-level categories.

ANIMALS BODY PARTS CLOTHES FOODS FURNISHING FURNITURE HOBBIES HOUSING KITCHEN PLANTS STATIONERY VEHICLES avg
llama-3.2-3B 0.50 0.33 0.75 0.71 0.80 1.00 0.57 0.67 0.38 1.00 0.60 0.00 0.61
llama-3.1-8B 0.75 0.33 0.75 0.71 0.80 0.50 0.57 0.67 0.50 0.50 0.80 0.00 0.57
llama-3.1-70B 0.25 0.67 1.00 1.00 1.00 1.00 0.71 0.67 0.50 1.00 0.60 0.00 0.70
mistral-7B 0.25 0.67 0.50 0.86 0.60 1.00 0.71 0.50 0.62 0.00 0.60 0.00 0.53
nemo-12B 0.50 1.00 0.75 0.71 1.00 1.00 0.71 0.67 0.75 1.00 0.20 0.00 0.69
mixtral-8x7B 0.75 0.33 0.75 0.86 1.00 1.00 0.71 0.50 0.62 0.50 0.80 0.50 0.69
llava-7B 0.50 0.33 0.50 0.86 0.80 1.00 0.57 0.67 0.50 0.50 0.60 0.50 0.61
idefics2-8B 0.50 0.67 0.50 0.71 0.80 0.50 0.57 0.33 0.25 0.50 0.60 0.00 0.49
category avg 0.50 0.54 0.69 0.80 0.85 0.88 0.64 0.58 0.52 0.62 0.60 0.12 0.61
(a) High absolute difference in availability score (|∆| > 0.4).

ANIMALS BODY PARTS CLOTHES FOODS FURNISHING FURNITURE HOBBIES HOUSING KITCHEN PLANTS STATIONERY VEHICLES avg
llama-3.2-3B 0.50 0.60 0.86 0.38 0.38 0.60 0.40 0.80 0.50 0.67 0.71 0.57 0.58
llama-3.1-8B 0.50 0.60 0.57 0.38 0.50 0.20 0.60 0.80 0.25 0.33 0.71 0.57 0.50
llama-3.1-70B 0.50 0.70 0.86 0.62 0.62 0.40 0.60 0.60 0.50 0.67 0.71 0.57 0.61
mistral-7B 0.50 0.50 0.57 0.50 0.62 0.40 0.40 0.40 0.50 0.33 0.29 0.86 0.49
nemo-12B 0.50 0.50 0.29 0.75 0.62 0.60 0.20 0.40 0.25 0.67 0.57 0.57 0.49
mixtral-8x7B 0.50 0.70 0.71 0.38 0.88 0.40 0.40 0.40 0.75 0.33 0.29 0.86 0.55
llava-7B 0.50 0.50 0.29 0.75 0.50 0.60 0.20 0.40 0.25 0.33 0.43 0.86 0.47
idefics2-8B 0.50 0.30 0.43 0.75 0.75 0.40 0.20 0.40 0.25 0.33 0.43 0.86 0.47
category avg 0.50 0.55 0.57 0.56 0.61 0.45 0.38 0.52 0.41 0.46 0.52 0.71 0.52
(b) Medium absolute difference in availability score (0.2 ≤ |∆| ≤ 0.4).

ANIMALS BODY PARTS CLOTHES FOODS FURNISHING FURNITURE HOBBIES HOUSING KITCHEN PLANTS STATIONERY VEHICLES avg
llama-3.2-3B 0.43 0.00 0.50 na 0.50 0.60 0.67 0.25 na 0.73 0.50 0.50 0.47
llama-3.1-8B 0.43 0.33 0.33 na 0.33 0.40 0.33 0.50 na 0.64 0.75 0.50 0.45
llama-3.1-70B 0.57 0.33 0.83 na 0.50 0.80 0.33 0.50 na 0.82 0.75 0.50 0.59
mistral-7B 0.57 0.33 0.67 na 0.50 0.40 0.00 0.25 na 0.64 0.25 0.50 0.41
nemo-12B 0.43 0.33 0.50 na 0.50 0.60 0.00 0.50 na 0.64 0.50 0.33 0.43
mixtral-8x7B 0.71 0.67 0.50 na 0.33 0.40 0.00 0.50 na 0.64 0.50 0.67 0.49
llava-7B 0.71 0.00 0.83 na 0.50 0.20 0.33 0.25 na 0.64 0.25 0.50 0.42
idefics2-8B 0.71 0.00 0.67 na 0.33 0.60 0.33 0.25 na 0.73 0.50 0.50 0.46
category avg 0.57 0.25 0.60 na 0.44 0.50 0.25 0.38 na 0.68 0.50 0.50 0.47
(c) Low absolute difference in availability score (|∆| < 0.2).

Table 11: SUBTASK B - Typicality Accuracy at different availability score of exemplars.
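The |∆| banding behind Tables 11a-11c can be sketched as follows; the threshold logic mirrors the sub-table captions, while the function name and sample scores are illustrative assumptions:

```python
def availability_band(scores):
    """Band a concept by the absolute gap between its most- and
    least-available exemplar scores, using the |delta| thresholds
    of Tables 11a-11c: high > 0.4, medium in [0.2, 0.4], low < 0.2."""
    delta = abs(max(scores) - min(scores))
    if delta > 0.4:
        return "high"
    if delta >= 0.2:
        return "medium"
    return "low"

print(availability_band([0.9, 0.5, 0.2]))    # |0.9 - 0.2| = 0.7 -> "high"
print(availability_band([0.6, 0.3]))         # 0.3 -> "medium"
print(availability_band([0.55, 0.45, 0.4]))  # 0.15 -> "low"
```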
arXiv:2505.21317v1 [cs.LG] 27 May 2025

A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomics Representations through Morphological Features

Ihab Bendidi (1,2,3), Yassir El Mesbahi (1,2), Alisandra K. Denton (1,2), Karush Suri (1,2), Kian Kenyon-Dean (2), Auguste Genovesio (3,*), Emmanuel Noutahi (1,2,*)
(1) Valence Labs, Montréal, Canada; (2) Recursion, Salt Lake City, USA; (3) Ecole Normale Supérieure PSL, Paris, France
Correspondence to: <auguste.genovesio@ens.psl.eu>, <emmanuel@valencelabs.com>

Abstract

Understanding cellular responses to stimuli is crucial for biological discovery and drug development. Transcriptomics provides interpretable, gene-level insights, while microscopy imaging offers rich predictive features but is harder to interpret. Weakly paired datasets, where samples from different modalities are not from the same biological replicate but share key metadata such as cell line and perturbation, enable multimodal learning but are scarce, limiting their utility for training and multimodal inference. We propose a framework to enhance transcriptomics by distilling knowledge from microscopy images. Using weakly paired data, our method aligns and binds modalities, enriching gene expression representations with morphological information. To address data scarcity, we introduce (1) Semi-Clipped, an adaptation of CLIP for cross-modal distillation using pretrained foundation models, achieving state-of-the-art results, and (2) PEA (Perturbation Embedding Augmentation), a novel augmentation technique that enhances transcriptomics data while preserving inherent biological information. These strategies improve the predictive power and retain the interpretability of transcriptomics, enabling rich unimodal representations for complex biological tasks.

Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
1. Introduction

Understanding how cells respond to various stimuli is fundamental to uncovering cellular functions and identifying novel drug targets. However, current technologies are limited in capturing the full range of cellular activities under diverse conditions, especially given the immense complexity of biological systems (Conesa et al., 2016; Kharchenko, 2021). For instance, the interaction of over 20,000 human protein-coding genes with the estimated 10^60 possible chemical compounds (Reymond, 2015) far exceeds manual analysis, necessitating computational methods. Advances in deep learning for biology, such as predicting protein structures (Jumper et al., 2021), modeling molecular binding (Corso et al., 2023; Evans et al., 2021), and uncovering biological patterns through microscopy and gene expression data (Kraus et al., 2024; Bendidi et al., 2024b; Bourriez et al., 2024), offer powerful tools to address this challenge from a unimodal perspective. However, separately modelling data from various omics modalities, such as morphological features, proteomics, and transcriptomics, provides unique yet partial insights into cellular behavior (Miao et al., 2021; Carpenter et al., 2006; Lopez et al., 2018). By combining these perspectives through multimodal fusion, researchers can construct more comprehensive representations of biological systems (Lu et al., 2021; Rosen et al., 2023), revealing connections critical for accelerating drug discovery. However, collecting multimodal data paired at the sample level remains infeasible currently due to massive experimental costs and technical challenges.

Given the challenges of collecting fully paired data across biological modalities, our focus is on weakly paired datasets, where clusters of samples from two modalities share a common biological state. In this setting, two samples from
different modalities are considered "paired" if they belong to the same biological state or metadata (Xi et al., 2024). In our case, this means transcriptomics and microscopy imaging samples that are not from the same biological replicate but share the same cell line and were exposed to the same perturbation. However, even weakly paired datasets remain scarce due to the cost and complexity of aligning states across modalities. Only a few such datasets exist, limiting their utility for training or fine-tuning models and making simultaneous inference on both modalities, for multimodal fusion for example, impossible to scale with current resources. To address these constraints, we aim to train models using the limited weakly paired data from transcriptomics and microscopy imaging, while enabling them to operate on a single modality, transcriptomics, during inference. This approach leverages the complementary strengths of the modalities: microscopy images are rich in visual phenotypic features with strong predictive power but are challenging to interpret, while transcriptomics data suffers from weaker predictive power but is more directly interpretable at the gene level, making it easier to connect to biological mechanisms (Kraus et al., 2024; Bendidi et al., 2024b). The complementarity between these modalities motivates the development of strategies to transfer the rich phenotypic insights from microscopy into transcriptomics representations.

To overcome pairing scarcity in training, we propose two practical solutions: cross-modal knowledge distillation and biologically inspired data augmentation. Knowledge distillation facilitates the transfer of information from one modality to another to enhance its utility.
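For context, the classic knowledge-distillation objective (Hinton et al., 2015) softens teacher and student outputs with a temperature τ and penalizes their KL divergence. A minimal numpy sketch with toy logits, not the paper's implementation:

```python
import numpy as np

def softmax(x, tau):
    z = x / tau
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, tau=2.0):
    """Temperature-softened KL(p_teacher || p_student), scaled by tau**2
    as in Hinton et al. (2015)."""
    p_t = softmax(teacher_logits, tau)
    p_s = softmax(student_logits, tau)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))
    return tau ** 2 * kl

t = np.array([3.0, 1.0, 0.2])
print(kd_loss(t, t))  # identical logits -> 0.0
```

Cross-modal variants replace the two sets of logits with outputs computed from different modalities of the same sample.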
For instance, the predictive strength of morphological features in microscopy images can enrich transcriptomics representations, making them more powerful for downstream tasks like drug discovery (Kraus et al., 2024; Replogle et al., 2022; Ye et al., 2018; Chandrasekaran et al., 2023; Bourriez et al., 2024; Sanchez et al., 2025). However, most distillation techniques rely on supervised objectives, which require precise labels that are often unavailable for most biological modalities. Alternatively, unsupervised alignment methods aim to uncover shared structures between modalities, though this is challenging due to the distinct biological relationships each captures (Appendix Figure 6). We instead propose to leverage alignment techniques for cross-modal distillation by binding transcriptomics to frozen morphological representations. We further introduce a novel biologically inspired data augmentation technique tailored for transcriptomics vectors, which preserves biological information while introducing meaningful variation to the dataset. This augmentation approach addresses the scarcity of paired data by improving the richness and robustness of transcriptomics representations, enhancing their predictive power while retaining their inherent interpretability. By combining these strategies, our framework enriches gene expression representations, offering deeper insights into biological processes and expanding their utility across diverse applications.

To summarize, we introduce in this work a recipe for transferring knowledge from morphological features to transcriptomics representations in weakly paired datasets, composed of the following contributions:

- We present Semi-Clipped, a straightforward adaptation of CLIP (Radford et al., 2021) that leverages pretrained large unimodal
foundation models with trainable adapters. It achieves state-of-the-art performance in cross-modal distillation under data-scarce conditions for our biological modalities.

- We introduce PEA, Perturbation Embedding Augmentation, a novel biologically inspired data augmentation technique for representations of transcriptomics, which introduces significant variation in the training data while retaining meaningful biological information of each sample. PEA improves cross-modal distillation in our low-data regime and widely outperforms existing augmentation techniques at uncovering novel biological relationships.

2. Related Works

Cross-Modal Knowledge Distillation. Knowledge distillation transfers knowledge from a teacher model to a student by aligning output distributions, typically using Kullback-Leibler (KL) divergence (Hinton et al., 2015). Variants introduce gradient similarity (Zhu & Wang, 2021), correlation (Huang et al., 2022), or structural losses (Park et al., 2019). Cross-modal methods leverage strong modalities to guide weaker ones, often relying on label information (Gupta et al., 2016; Roheda et al., 2018; Xue et al., 2021; Lee et al., 2023). For instance, C2KD (Huo et al., 2024) uses an online filtering mechanism for soft label alignment, while SHAKE (Li & Zhe, 2022) employs shadow adapters for bidirectional distillation. XKD (Sarkar & Etemad) combines self-supervised learning with cross-modal distillation but requires large paired datasets. To our knowledge, no distillation approach has effectively leveraged unsupervised cross-modal alignment in the context of limited weakly paired data.

Multimodal Learning. Multimodal learning encompasses approaches for aligning or merging data types for robust inference. CLIP (Radford et al., 2021) aligns image and text into a shared space, while CSA (han Li et al., 2024) uses pretrained unimodal models for few-shot alignment.
Methods like SigClip (Zhai et al., 2023), VICReg (Bardes et al., 2022), and DCCA (Lan et al., 2020) enhance alignment through self-supervised learning or correlation maximization but depend on significant shared information (Tsai et al., 2021). Multimodal distillation approaches (Yang et al., 2024; Wu et al., 2023; Fang et al., 2021; Wang et al., 2022) typically require paired data, focus on multimodal-to-multimodal distillation, and do not leverage multimodal alignment for cross-modal distillation when only one modality is available at inference. Relevant to our setting, (Hager et al., 2023) proposed a contrastive learning framework combining images and tabular data, structurally similar to our microscopy-transcriptomics setting, demonstrating the viability of such combinations for predictive and interpretable medical tasks.

Biologically Relevant Representations. Advances in microscopy imaging models have driven progress in high-content screening (Kraus et al., 2024; Yao et al., 2024; Kenyon-Dean et al., 2025; Wenkel et al., 2025), histopathology (Saillard et al., 2024; Chen et al., 2024; Vorontsov et al., 2024), and specialized architectures (Bourriez et al., 2024; Pham & Plummer, 2024). In transcriptomics, foundation models (Cui et al., 2024; Yang et al., 2022; Theodoris et al., 2023; Wen et al., 2024) show promise but often underperform simpler models in biologically relevant tasks, with scVI (Lopez et al., 2018) as an exception (Liu et al., 2023; Bendidi et al., 2024b). Microscopy imaging complements transcriptomics (Camunas-Soler, 2024),
but while unimodal datasets (Replogle et al., 2022; Chandrasekaran et al., 2023; Fay et al., 2023) are growing, weakly paired multimodal datasets remain scarce. Recent methods (Xi et al., 2024; Watkinson et al., 2024; Sanchez-Fernandez et al., 2023; Xie et al., 2023) address this kind of limitation for different modalities by leveraging weak pairings through pretrained models with trainable adapters (Fradkin et al., 2024).

Data Augmentations for Biology. Data augmentations are crucial in addressing data scarcity for biology, as biologically meaningful augmentations can stabilize and improve performance with limited biological datasets (Moutakanni et al., 2024; Bendidi et al., 2023; 2024a). In computational biology, image augmentations have typically focused on basic transformations like rotations (Alfasly et al., 2024; Lafarge & Koelzer, 2022) or differentiable techniques using adversarial learning for domain generalization (Ruppli et al., 2022; Zhou et al., 2024). For transcriptomics, data being in a representation format allows leveraging existing representation-level augmentation methods (DeVries & Taylor, 2017; Li et al., 2022), though their efficacy for biological contexts remains uncertain. Recently, new biologically inspired techniques have emerged specifically for augmenting transcriptomics and biological representations (Kircher et al., 2022; Li et al., 2023; Nouri, 2025).

3. Proposed Approach

Problem Formulation. We consider two biological data modalities: a teacher modality T and a student modality S, each offering distinct perspectives on cellular behavior. Let X_T and X_S represent the datasets from these modalities. The samples x_T^(i) ∈ X_T and x_S^(i) ∈ X_S correspond to the same biological perturbation and cell type but are not strongly paired due to biological variability. Each sample is annotated with weak labels p (perturbation) and l (cell type). Both datasets are organized into biological batches B_T = {b_{T,1}, b_{T,2}, ..., b_{T,|B_T|}} and B_S = {b_{S,1}, b_{S,2}, ..., b_{S,|B_S|}}. Each batch b_{T,k} ∈ B_T and b_{S,m} ∈ B_S consists of a set of samples {x_{T,k}^(j)}_{j=1}^{N_{T,k}} and {x_{S,m}^(j)}_{j=1}^{N_{S,m}}, and each batch includes, in addition to perturbed samples, a number of control (unperturbed) samples, denoted by {x_{T,k}^(c)}_{c=1}^{C_{T,k}} and {x_{S,m}^(c)}_{c=1}^{C_{S,m}}, with C_{M,k} ≥ 2 for each modality M.

Proposed Distillation Method. Given the scarcity of weakly paired data, we adopt pretrained and frozen unimodal encoders E_T : X_T → R^{d_T} and E_S : X_S → R^{d_S}, following (Fradkin et al., 2024). These encoders produce embeddings z_T^(i) = E_T(x_T^(i)) and z_S^(i) = E_S(x_S^(i)) for the teacher and student modalities, respectively. Our objective is to learn a mapping function f_S : R^{d_S} → R^{d_T} that aligns student embeddings to the teacher embedding space, yielding transformed embeddings h_S^(i) = f_S(z_S^(i)) that integrate properties from the teacher modality T. We aim to achieve the dual objective of leveraging weak biological labels for pairing while minimizing reliance on them as learning objectives, since such labels underperform compared to unsupervised objectives in microscopy imaging (Kraus et al., 2024), and preventing mutual drift between modalities with limited shared information. We propose Semi-Clipped, a straightforward adaptation of the CLIP loss (Radford et al., 2021) for cross-modal knowledge distillation. This approach uses the frozen unimodal encoders to generate embeddings z_T and z_S. The teacher representation z_T is fixed, while an adapter function f_S is trained on the student modality by optimizing the CLIP loss between h_S and z_T to produce aligned embeddings h_S. By freezing
the teacher embedding space, this approach avoids dependence on massive amounts of paired data for encoder training and ensures one-way knowledge transfer from the teacher to the student, mitigating mutual drift and feedback from the student to the teacher.

Batch Correction for Data Augmentation. In biological datasets, batch effects, i.e., variability caused by differences in experimental conditions, introduce noise that can obscure meaningful patterns. Traditional batch correction techniques (Bendidi et al., 2024b; Celik et al., 2024; Ando et al., 2017) address this by centering embeddings on control (unperturbed) samples within each batch, reducing noise while preserving the signal. Typically used as a post-processing step, these corrections shift the embedding distribution while retaining key information for downstream analysis. To tackle the scarcity of paired biological modalities, we introduce PEA (Perturbation Embedding Augmentation), a novel biologically inspired augmentation technique that repurposes batch correction as a data augmentation applied directly to the student embeddings during training. Specifically, a function A : (R^{d_S}, X_S^c) → R^{d_S} is randomly selected from a set A of batch correction transformations and applied to the student embeddings z_S^(i). Augmented embeddings z_{S,A}^(i) = A(z_S^(i), X_S^(c)) are then passed to the student adapter f_S for cross-modal knowledge distillation. To ensure the teacher embeddings focus on relevant information, a fixed batch correction B (Ando et al., 2017) is applied to the teacher modality. To augment transcriptomics data while preserving biological relevance, we extend traditional batch correction techniques into a stochastic augmentation framework.
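The control-centering correction that PEA randomizes can be sketched as follows; the shapes and data are illustrative assumptions, not the paper's code:

```python
import numpy as np

def center_on_controls(embeddings, control_embeddings):
    """Classic batch correction: shift every embedding in a batch by the
    mean of that batch's control (unperturbed) samples, so that controls
    sit at the origin and perturbation effects stand out."""
    control_mean = control_embeddings.mean(axis=0)
    return embeddings - control_mean

rng = np.random.default_rng(0)
batch = rng.normal(loc=5.0, size=(8, 4))  # embeddings with a batch-wide offset
controls = batch[:3]                      # assume first three samples are controls
corrected = center_on_controls(batch, controls)
print(np.allclose(corrected[:3].mean(axis=0), 0.0))  # True: control mean removed
```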
Specifically, for each sample, we randomly select one batch correction transformation $A$ from a predefined set of normalization techniques, ensuring controlled variability in perturbation embeddings. Each selected transformation $A:\mathbb{R}^{d_S}\to\mathbb{R}^{d_S}$ falls into one of three categories: (1) centering, which shifts embeddings by subtracting batch-wise control means to remove batch-specific offsets; (2) scaling, which normalizes variance across features to enhance comparability; and (3) principal component-based transformations, which reweight variance along principal axes, emphasizing biologically relevant information while reducing batch artifacts. To introduce further stochasticity, we drop a random subset of the steps of each correction method per sample rather than always applying them sequentially. Additionally, the number of control samples used for correction is randomly sampled per training sample, increasing diversity and robustness to out-of-domain experimental shifts in the learned distributions. Detailed implementation of our batch correction techniques is provided in Appendix Section C.

This method introduces controlled and diverse distributional shifts, helping $f_S$ learn robust, biologically meaningful representations by ignoring batch-induced variability. During inference, a batch correction is applied to the student embeddings $z_S$ to align them with the training distribution, further improving robustness. Algorithm 1 details the process, ensuring the adapter captures biologically relevant features while increasing training diversity and preserving biological information in low-data settings.

Algorithm 1: Semi-Clipped with PEA implementation
for each batch $(x_S, x_T)\in(X_S, X_T)$ do
    Extract embeddings using frozen encoders: $z_S = E_S(x_S)$ and $z_T = E_T(x_T)$
    Sample a batch correction function $A\sim\mathcal{A}$
    Drop a random subset of steps in $A$ to obtain $A'$
    Sample a random subset of control samples $X^{(c)}_S$
    Apply batch correction: $z^{a}_S = A'(z_S, X^{(c)}_S)$
    Compute transformed embeddings: $h_S = f_S(z^{a}_S)$
    Apply TVN correction to teacher embeddings, $z^{b}_T = B(z_T)$, and compute the CLIP loss:
        $\mathcal{L} = -\sum_{i=1}^{B} \log \frac{\exp(\mathrm{sim}(h^{(i)}_S, z^{(b,i)}_T)/\tau)}{\sum_{j=1}^{B} \exp(\mathrm{sim}(h^{(i)}_S, z^{(b,j)}_T)/\tau)}$
    Backpropagate the loss $\mathcal{L}$ and update the adapter $f_S$
end for

4. Experimental Setup

Data & Model Training. We use microscopy imaging as the teacher modality and transcriptomics as the student modality. The training dataset includes 130,000 arrayed bulk transcriptomics samples (HUVEC-CMPD) and 20,000 microscopy images of human umbilical vein endothelial cells (HUVEC) from cell painting, both covering 1,700 chemical perturbations at three concentrations. Each transcriptomics sample can pair with multiple imaging samples based on treatment and concentration, with one pair randomly selected per epoch. These pairs are weakly paired, i.e., they do not originate from the same biological replicate but share the same cell line and perturbation metadata, ensuring comparable biological states across modalities. For encoding microscopy images, we use the pretrained Phenom-1 model (Kraus et al., 2024), a state-of-the-art model pretrained on 93 million microscopy images. For transcriptomics, we compare three models: a simple scVI-like MLP¹ trained from scratch on the HUVEC-CMPD bulk dataset; scVI (Lopez et al., 2018), a model known for strong performance on small datasets, outperforming existing transcriptomics pretrained models (Bendidi et al., 2024b), similarly trained from scratch on the HUVEC-CMPD bulk dataset; and a pretrained scGPT (Cui et al., 2024), a model pretrained on 33 million transcriptomics samples. A three-layer MLP adapter $f_S$ (input size $d_S$, output size $d_T$) with ReLU activations is trained for the student modality, while both encoders remain frozen. Consistent with (Kenyon-Dean et al., 2025), control samples are excluded from paired data for knowledge distillation and only used for batch correction.
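A minimal NumPy sketch of one Algorithm 1 step follows, computing the loss only: gradient backpropagation is omitted, cosine similarity is assumed for $\mathrm{sim}$, the adapter is passed in as an opaque function, and the teacher embeddings are assumed already TVN-corrected. All function names are our own.

```python
import numpy as np

def clip_loss(h_s, z_t, tau=0.1):
    """InfoNCE-style CLIP loss between adapted student embeddings h_s
    and corrected teacher embeddings z_t, both of shape [B, d_T]."""
    h = h_s / np.linalg.norm(h_s, axis=1, keepdims=True)
    z = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    logits = h @ z.T / tau                      # pairwise cosine similarity / tau
    m = logits.max(axis=1, keepdims=True)       # numerically stable log-softmax
    log_prob = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # matched pairs sit on the diagonal

def training_step(z_s, z_t, z_ctrl, adapter, corrections, rng, tau=0.1):
    """One Algorithm 1 iteration (loss computation only, no update):
    drop correction steps (A -> A'), subsample controls, correct, adapt."""
    steps = [s for s in corrections if rng.random() < 0.5] or [corrections[0]]
    n_ctrl = int(rng.integers(2, len(z_ctrl) + 1))
    idx = rng.choice(len(z_ctrl), size=n_ctrl, replace=False)
    z_a = z_s
    for step in steps:                          # apply the surviving steps of A'
        z_a = step(z_a, z_ctrl[idx])
    h_s = adapter(z_a)                          # h_S = f_S(z^a_S)
    return clip_loss(h_s, z_t, tau)             # z_t assumed TVN-corrected
```

In a real run the adapter would be a trainable MLP and the returned loss would be backpropagated through it; here it is any callable, which keeps the control flow of the algorithm visible without an autodiff dependency.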
The adapter is trained with a temperature of 0.1, a learning rate of 0.001, a batch size of 1,024, and for 150 epochs.

¹ scVI is an MLP-based VAE conditioned on a batch label.

Evaluation Setting. The evaluation focuses on assessing the quality of transcriptomic representations after knowledge distillation, emphasizing biological relevance and interpretability. We use a hierarchical benchmarking framework for transcriptomic representations (Bendidi et al., 2024b) (Appendix Section B) with two primary tasks. (1) Retrieval of known biological relationships: this task evaluates the ability of the learned representations to capture established biological relationships by retrieving known interactions between genes. Using cosine similarity of gene embeddings, predicted relationships are validated against annotations from the CORUM, HuMAP, StringDB, Reactome, and SIGNOR databases. Success is measured by recall scores averaged across these databases, reflecting how well the representations align with known biology. (2) Transcriptomic interpretability preservation: this task measures how well the distilled embeddings retain the information necessary for reconstructing original gene expression profiles. It evaluates two complementary metrics: the Structural Integrity score, which quantifies how accurately the model preserves the relationships between control and perturbation samples, and the Spearman correlation, which assesses the rank-based agreement between predicted and true gene expression
profiles. The average of these metrics provides a comprehensive measure of interpretability preservation.

Success is defined as improving retrieval scores while maintaining interpretability metrics comparable to unimodal transcriptomic representations. This dual focus ensures that the student representations do not collapse or lose transcriptomic-specific information by ignoring it and relying solely on morphological features. Using these tasks, we compare our Semi-Clipped approach, with and without PEA, against standard multimodal alignment and cross-modal knowledge distillation methods. For alignment, we include CLIP (Radford et al., 2021), SigClip (Zhai et al., 2023), VICReg (Bardes et al., 2022), and DCCA (Lan et al., 2020). For distillation, we evaluate KD (Hinton et al., 2015), SHAKE (Li & Zhe, 2022), and C2KD (Huo et al., 2024). All methods use the same pretrained encoders with trainable adapters. For the distillation approaches, teacher and student adapters are unimodally pretrained with perturbation labels before fine-tuning via their respective methods. We benchmark PEA by applying it to $z_S$ during training, and compare it to existing biological and transcriptomics data augmentation approaches: MWO (Kircher et al., 2022), scVI denoising (Lopez et al., 2018), MDWGAN-GP (Li et al., 2023), and scGFT (Nouri, 2025), each combined with and without PEA. Hyperparameters are optimized via grid search on a validation split, with results averaged across multiple seeds.

Figure 1. Impact of training choices on Semi-Clipped performance for known biological relationship recall on HUVEC-KO. Finetuning or multimodal training from scratch underperforms due to limited weakly paired data, while using adapters on pretrained models significantly improves results. The best performance is achieved with Semi-Clipped: a single transcriptomic adapter aligned to frozen image representations.

Evaluation Datasets. We assess generalization on three Out-Of-Distribution (OOD) datasets, each introducing distinct distribution shifts. (1) Experimental variability: the HUVEC-KO dataset contains arrayed bulk transcriptomics data from 120,000 genetically perturbed samples, with around 300 CRISPR gene Knock-Outs (KOs) in HUVEC cells, unlike the training set, which uses chemical perturbations. This dataset shares no experiment with the training set and evaluates generalization to unseen experiments and unseen genetic perturbations. (2) Quantification method shift: the LINCS dataset (Subramanian et al., 2017) includes 443,000 arrayed bulk transcriptomics samples across 31 cell types and 5,157 CRISPR gene KOs, using the L1000 assay, a transcript-abundance measurement method different from the sequencing-based approach used in training. (3) Single-cell adaptation: the SC-RPE1 dataset (Replogle et al., 2022) consists of 247,914 single-cell transcriptomic samples from retinal pigmented epithelium cells with 2,393 CRISPR knockouts, testing the transition from bulk transcriptomics (the training data) to single-cell transcriptomics. Together, these three OOD evaluation settings introduce significant distribution shifts along different axes, testing the model's robustness to new cell types, experimental conditions, and gene expression quantification methods.

5. Results

We aim to evaluate the impact of Semi-Clipped and PEA both independently and in combination. Our primary objective is to improve biological relationship recall on OOD datasets compared to the corresponding unimodal transcriptomic baseline while preserving or enhancing interpretability in transcriptomics.
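The two evaluation criteria from the Evaluation Setting, recall of known relationships via cosine similarity and rank agreement via Spearman correlation, can be sketched as follows. This is a simplified, hypothetical stand-in for the benchmark: the real framework aggregates recall over five annotation databases, and the Spearman version below omits tie handling.

```python
import numpy as np

def relationship_recall(emb, known_pairs, k=5):
    """Fraction of known gene-gene links whose partner appears among
    the top-k cosine neighbours of the query gene."""
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = e @ e.T
    np.fill_diagonal(sim, -np.inf)              # exclude self-matches
    topk = np.argsort(-sim, axis=1)[:, :k]      # k nearest neighbours per gene
    hits = sum(b in topk[a] for a, b in known_pairs)
    return hits / len(known_pairs)

def spearman(x, y):
    """Spearman rank correlation between predicted and true expression
    profiles (no tie handling, for illustration only)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / (np.linalg.norm(rx) * np.linalg.norm(ry)))
```

A distilled representation scores well under this evaluation when `relationship_recall` improves over the unimodal baseline while `spearman` between reconstructed and true profiles stays comparable, matching the success criterion stated above.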