that would elevate the DRG weight beyond the primary diagnosis of colitis. Given these conditions, the DRG code is assigned based on the primary diagnosis of digestive disorders, specifically colitis, without any significant complications or comorbidities that would elevate it to a higher complexity category. Therefore, the most appropriate DRG is"ESOPHAGITIS GASTROENTERITIS AND MISCELLANEOUS DIGESTIVE DISORDERS WITHOUT MCC." </think> <answer> ESOPHAGITIS GASTROENTERITIS AND MISCELLANEOUS DIGESTIVE DISOR- DERS WITHOUT MCC </answer> 22 Example Two <think> Based on the discharge summary provided, the most likely Medicare Severity Diagnosis-Related Group (MS-DRG) is PANCREAS LIVER AND SHUNT PROCEDURES WITH MCC . To determine this, let’s break down the key elements of the case: 1.Principal Diagnosis and Procedures: • The patient underwent a left hepatic lobectomy and placement of biliary stents. •The lobectomy is a major liver surgery—this alone is a significant factor for DRG assignment. 2.Complications and Comorbidities: • Post-operative bilious drainage from the JP (jejunostomy) tube. • Increased wound drainage requiring V AC (wound vacuum) placement. • ERCP performed to manage bile duct stones and issues. • Biliary stents placed during ERCP. •Elevated WBC count and suspected infection requiring IV antibiotics and infectious disease consultation. 3.MCC Analysis: •The ERCP, biliary stents, and post-op complications significantly impacted recov- ery. •These events qualify as a Major Complication or Comorbidity (MCC) under DRG criteria. 4.Severity and Complexity: • Multiple interventions (ERCP, stent placement, IV antibiotics). • Extended inpatient stay due to complications. • High resource utilization consistent with MCC-level DRG classification. 5.Consistency with MS-DRG Criteria: • MS-DRG assignment accounts for diagnosis, procedures, and complications. •This case aligns with the category for major liver/pancreas procedures involving MCCs. Therefore, the DRG code PANCREAS LIVER AND SHUNT PROCEDURES WITH MCC is assigned because the case involves a significant liver procedure (left hepatic lobectomy) and multiple major complications and interventions. The care complexity and extended course fulfill criteria for an MCC classification. </think> <answer> PANCREAS LIVER AND SHUNT PROCEDURES WITH MCC </answer> 23 J Example Outputs Demonstrating Different Cognitive Behaviors Example from Answer-First <think> Based on the provided discharge summary, the most likely Medicare Severity Diagnosis-Related Group (MS-DRG) is "OTHER CIRCULATORY SYSTEM DIAGNOSES WITH CC." Reasoning: 1.Principal Diagnosis: •The patient presents with chest pain and ST elevations, initially suggestive of STEMI (ST-Elevation Myocardial Infarction). •Cardiac catheterization revealed normal coronary arteries, ruling out coronary artery disease. •ST elevations without coronary lesions point to myopericarditis or a related circu- latory system condition. 2.CC (Complication or Comorbidity): •Chest pain, ST elevations, and treatment with ibuprofen and colchicine increased care complexity. •Mild echocardiographic findings (dilated aortic root and ascending aorta) required follow-up and additional monitoring, qualifying as a CC. 3.MCC (Major Complication or Comorbidity): • Despite significant diagnostic workup, no MCC-level condition was present. •No major complication or comorbidity substantially elevated resource use or reimbursement criteria. 
4.Discharged with Specific Medications and Follow-Up Plans: • The patient was discharged on ibuprofen and colchicine. • A follow-up echocardiogram was planned, indicating ongoing resource use. Conclusion: The DRG code "OTHER CIRCULATORY SYSTEM DIAGNOSES WITH CC" is assigned because the patient’s primary condition, although initially concerning, did not meet the
criteria for an MCC. The presence of chest pain, ST elevations, treatment complexity, and follow-up needs justify a CC classification due to increased resource utilization and care complexity. </think> <answer> OTHER CIRCULATORY SYSTEM DIAGNOSES WITH CC </answer> 24 Example from CoT-First <think> To determine the most likely MS-DRG for this patient, let’s break down the key elements from the discharge summary: 1.Principal Diagnosis: •The patient presents with ST elevations in the EKG, initially consistent with a STEMI (ST-Elevation Myocardial Infarction). • Cardiac catheterization revealed normal coronaries (RRA – Right Rectus Artery), indicating a false positive STEMI. •The patient was diagnosed with myopericarditis, which can cause chest pain and ST segment elevations. 2.Secondary Diagnoses and Procedures: • Myopericarditis (secondary to chest pain and ST elevations) • Normal cardiac catheterization • No significant past medical history • Mildly dilated aortic root and ascending aorta on TTE •GERD, insomnia, and OSA—no active treatment or significant impact on hospital- ization 3.Complications and Comorbidities (CC and MCC): • Myopericarditis increases the complexity of care and qualifies as a CC. •There is no indication that it qualifies as a major complication or comorbidity (MCC ). Reasoning for the DRG Assignment: • Myopericarditis is the principal diagnosis and is considered a CC, not an MCC. • Mildly dilated aortic root and other secondary findings do not affect DRG assignment. •GERD, insomnia, and OSA are managed conditions that do not impact hospital resource use. DRG Code Assignment: The most likely MS-DRG for this patient, based on the provided information, is "OTHER CIRCULATORY SYSTEM DIAGNOSES WITH CC." This assignment reflects the presence of a complication (myopericarditis) that increases care complexity, without rising to the level of an MCC. </think> <answer> OTHER CIRCULATORY SYSTEM DIAGNOSES WITH CC </answer> 25 Example from Differential Thinking <think> Analysis of the Case Details Principal Diagnosis: •ST-Elevations and Myopericarditis Secondary Diagnoses: • No significant past medical history • GERD (Gastroesophageal Reflux Disease) • Insomnia • OSA (Obstructive Sleep Apnea) • HIV and Syphilis (screening recommended) Procedures Performed: • Cardiac Catheterization Other Relevant Factors: • Normal cardiac catheterization result • Mildly dilated aortic root and ascending aorta • Started on ibuprofen and colchicine Possible DRG Codes 1.Heart failure and shock without CC/MCC 2.Other circulatory system diagnoses with CC 3.Other circulatory system diagnoses with MCC Analysis for Each DRG Code 1. Heart failure and shock without CC/MCC Why this DRG code might be appropriate: • The patient was admitted with ST-elevations, which can be indicative of heart issues. •Mild dilation of the aortic root and ascending aorta could suggest early signs of heart failure. •Started on medications for pain and inflammation, which are common in heart failure management. Why it might not be appropriate: • Cardiac catheterization was normal, which doesn’t support a diagnosis of heart failure. • No history of heart failure or shock—only myopericarditis and mild aortic dilation. • GERD, insomnia, and OSA are unrelated to heart failure. 2. Other circulatory system diagnoses with CC Why this DRG code might be appropriate: • ST-elevations and myopericarditis are circulatory system issues. • GERD (a CC) increases complexity of
care and resource utilization. Why it might not be appropriate: •Condition managed with ibuprofen and colchicine—does not indicate severe complica- tions. •Absence of heart failure or severe circulatory system condition to warrant a more specific DRG. 3. Other circulatory system diagnoses with MCC Why this DRG code might be appropriate: 26 • ST-elevations and myopericarditis are significant circulatory issues. • GERD (a CC) could be misclassified as MCC if chronic impact is overestimated. Why it might not be appropriate: • Condition managed conservatively—not indicative of MCC-level complexity. • No evidence of heart failure or other major circulatory complications. Conclusion Most Likely DRG Code: Other circulatory system diagnoses with CC Reasoning: • Myopericarditis with ST-elevations is a significant circulatory condition. • GERD, as a CC, increases care complexity but not to the level of MCC. •No evidence of heart failure or other severe conditions to justify a different DRG code. Based on the provided information, the most appropriate DRG code is Other circulatory system diagnoses with CC , as it best reflects the patient’s primary condition and the presence of a CC (GERD) that increases complexity of care. </think> <answer> OTHER CIRCULATORY SYSTEM DIAGNOSES WITH CC </answer> 27
arXiv:2505.21918v1 [cs.LG] 28 May 2025IJABC: International Journal of Activity and Behavior Computing 1 Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing Haruki Kai1, Tsuyoshi Okita2 12Kyushu Institute of Technology Abstract We developed a deep learning algorithm for human activity recognition using sensor signals as input. In this study, we built a pre-trained lan- guage model based on the Transformer architecture, which is widely used in natural language processing. By leveraging this pre-trained model, we aimed to improve performance on the downstream task of human activity recognition. While this task can be addressed using a vanilla Transformer, we propose an enhanced n-dimensional numerical processing Transformer that incorporates three key features: embedding n-dimensional numeri- cal data through a linear layer, binning-based preprocessing, and a linear transformation in the output layer. We evaluated the effectiveness of our proposed model across five different datasets. Compared to the vanilla Transformer, our model demonstrated a 10%–15% improvement in accu- racy. 1 Introduction In recent years, Human Activity Recognition (HAR), which involves inferring human activities from sensor data, has gained increasing im- portance across various domains, including healthcare, sports, and smart home applications [12, 4, 6, 36, 25, 27]. Traditional HAR methods often rely on camera-based approaches, which have been extensively studied. However, these methods present challenges such as environmental con- straints and privacy concerns for individuals being monitored [10]. In contrast, approaches based on motion sensor data—such as accelerome- ters and gyroscopes embedded in smartphones and wearable devices—offer significant advantages, including reduced dependency on environmental conditions and improved privacy protection. With advancements in IoT technologies and the widespread adoption of smart devices, it has become increasingly feasible to collect and store 1kai.haruki822@mail.kyutech.jp 2tsuyoshi.okita@gmail.com Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 2 large-scale multi-dimensional time-series data from sensors like accelerom- eters and gyroscopes [17]. HAR methods leveraging such sensor data not only offer enhanced privacy but also hold great potential for applications such as daily activity monitoring and healthcare services. This study aims to achieve high-accuracy activity classification using multi-dimensional time-series data from sensors such as accelerometers and gyroscopes. Specifically, we propose an algorithm with the following key features: •Embedding for Multi-Dimensional Numerical Data : A sim- ple linear layer is introduced to embed multi-dimensional numerical data into vector representations. •Binning Process : Time-series data is discretized to mitigate noise and enhance feature extraction. •Parallel Linear Layers in the Output : Parallel linear layers are designed to accommodate varying channel characteristics, thereby improving prediction accuracy. Although the application of Transformer models[26] to HAR tasks has been steadily increasing, many existing studies focus on designing custom Transformer architectures specifically tailored for sensor data [29, 33]. 
In contrast, the field of natural language processing (NLP) has seen the development and refinement of various Transformer-based models, such as BERT [7], the GPT series [20], and DistilBERT [23], which are designed to achieve a balance between high accuracy and computational efficiency. Inspired by the idea that adapting existing NLP models for sensor data analysis could integrate state-of-the-art advancements in language
mod- eling into HAR tasks, we explore the feasibility of building a HAR model based on NLP-oriented Transformers. However, the direct application of these efficient Transformer architectures to sensor data remains relatively unexplored, and their effectiveness requires empirical validation. The contributions of this study are as follows: •Efficient Model Design Leveraging NLP Models : We explore the direct application of Transformer-based models, such as Distil- BERT and GPT-2, which have demonstrated success in NLP, to sensor data analysis. We provide design guidelines to effectively adapt these models for HAR tasks. •Proposed Embedding, Binning, and Parallel Linear Layer Structure : As a novel contribution, we introduce a lightweight yet flexible model that integrates linear layers for embedding, binning mechanisms for noise reduction, and parallel linear layers in the out- put to enhance predictive performance. •Experimental Validation : The effectiveness of the proposed model is empirically validated through comparative experiments against conventional models, including ResNet, Random Forest, and Vanilla Transformer[26]. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 3 2 Related literature In recent years, Transformer architectures have been increasingly applied beyond the field of natural language processing (NLP) to non-linguistic data analysis, including sensor data and time-series data. This trend can be attributed to the unique characteristics of Transformers, particularly their self-attention mechanism, which captures long-term dependencies while enabling parallel computation. 2.1 Soft sensing transformer Recent advancements in sensor-based human activity recognition (HAR) have seen the adaptation of Transformer architectures for handling multi- dimensional sensor data. For example, the Soft Sensing Transformer [31] replaces conventional embedding layers with linear transformations to di- rectly process continuous sensor inputs. This approach efficiently handles multi-dimensional data without requiring explicit tokenization or cluster- ing. Our work builds on this idea by further enhancing self-supervised learning for HAR through additional design components. 2.2 Time-Series Transformer Variants There are several categories of recent works for time-series transformer. The first category is the modification on attention mechanism. LogTrans [13] proposes convolutional self-attention by employing causal convolu- tions to generate queries and keys in the self-attention layer. LogTrans introduces sparse bias, a Logsparse mask, in self-attention model. In- former [34] selects dominant queries based on queries and key similarities. Together with this, Informer designs a generative style decoder to produce long-term forecasting directly. AST [30] uses a generative adversarial en- coderdecoder framework to train a sparse Transformer model. Pyraformer [14] designs a hierarchical pyramidal attention module with a binary tree following the path, to capture temporal dependencies of different ranges. FEDformer [35] applies attention operation in the frequency domain with Fourier transform and wavelet transform. Quatformer [5] (LRA) based on quaternions that introduce learnable period and phase information to depict intricate periodical patterns. The second categories introduces the way to normalize time series data. 
Non-stationary Transformer [15] modifies the normalization mechanism to handle non-stationary series. The third category introduces an inductive bias into the token inputs. Autoformer [28] adopts a segmentation-based representation mechanism; its auto-correlation block measures the time-delay similarity between input signals. PatchTST [18] uses channels, each of which contains a single
univariate time series that shares the same embedding within all the series. And it uses subseries-level patch design which segmentation of time series into subseries-level patches that are served as input tokens to Transformer. Crossformer [32] uses the input which is embedded into a 2D vector ar- ray through the dimension-segment-wise embedding to preserve time and dimension information. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 4 2.3 Vision Transformer Variants SENvT-u4 [19] is a transformer model designed to handle signals, with pre-training consisting of MLM (Masked Language Modeling) and a signal- specific pretext task. SENvT-contrastive [19] is also a transformer de- signed for signals, but its pre-training is based on contrastive learning. Both of these models are modified versions of the original Vision Trans- former (ViT). In the Vision Transformer, a grid of patches is used, with these patches treated as embeddings. Moreover, the relative relationships between the embeddings of a given patch and its adjacent patches are recognized as distances. The signal transformers modify the basic structure of the Vision Trans- former by modifying it from two-dimensional to one-dimensional. As a result, it treats signals in the time series direction as one-dimensional patches, embedding them in the same manner. However, unlike Vi- sion Transformers, the time series signals are not treated as individual data points but as embeddings based on one-dimensional patches. These patches are treated as embeddings during input. Therefore, they do not take the form of the linear layer proposed in this paper. 2.4 Attempts to Leverage LLMs for Sensor Data Analysis More recently, attempts have been made to apply large language models (LLMs), such as GPT-4, to sensor data-based activity recognition tasks [9]. For instance, Sijie Ji et al. [9] designed prompts, such as ”role- play scenarios” and ”think step-by-step” to achieve accuracy surpassing that of traditional baseline models. This study suggests the potential of repurposing the extensive knowledge encoded in LLMs for sensor data analysis. However, such approaches often rely on natural language tokenizers to encode sensor data, which may limit the ability to fully exploit the contin- uous nature of numerical data. Natural language tokenizers are inherently designed to segment strings or word sequences into discrete tokens, mak- ing them less suited for effectively representing multi-dimensional sensor data. Addressing this challenge requires the development of embedding layers tailored to numerical data or structural modifications to LLMs. Building upon these advancements in applying Transformers to non- linguistic data analysis, this study proposes a model that combines linear layers for efficient embedding of sensor data with architectures based on natural language models. Our aim is to achieve high accuracy in activity recognition tasks by leveraging the strengths of both linear embeddings and Transformer-based architectures. 3 Method We propose a method for processing n-dimensional numerical data based on the Transformer encoder commonly used in natural language process- ing. The architecture is illustrated in Figure 1. Specifically, the proposed Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 5 method
consists of the following components: Embedding with Linear Layer :A method that takes n-dimensional numerical data as input and obtains an embedded representation through a linear layer. In our approach, multi-dimensional sensor data is processed in a manner similar to natural language processing (NLP) models, where sequential token embeddings are fed into a Transformer. Specifically, given an input of shape L×n, where: •Lrepresents the sequence length (e.g., 300 for accelerometer data), •nrepresents the number of sensor dimensions (e.g., 3 for a triaxial accelerometer). We apply a single linear embedding layer of size n×d, where: •nrepresents the number of sensor dimensions, •drepresents the embedding dimension. This ensures that each sensor reading is converted into a unified fea- ture space while preserving temporal order. Although this paper focuses primarily on 3-dimensional sensor data (e.g., accelerometer data with X, Y, and Z axes), the proposed method is not restricted to this configu- ration. It can be naturally extended to higher-dimensional sensor data (e.g., 6-axis IMU with accelerometer and gyroscope, or 9-axis motion sen- sor data), where the linear embedding layer would be adjusted to match the input dimensionality. By embedding each time step independently us- ing a single linear layer, the model can learn meaningful representations of sensor readings while maintaining their temporal structure, similar to how token embeddings are processed in NLP tasks. Parallel linear layers in the output layer :A design for the out- put layer with parallel linear layers corresponding to each dimension of the n-dimensional data, enabling the learning of data features. A single output layer for all dimensions could, in theory, capture inter-axis depen- dencies; however, this would require additional preprocessing steps such as clustering or tokenization to consolidate multi-dimensional sensor read- ings into a unified representation. This transformation could lead to the loss of valuable information specific to individual sensor axes. Instead, our approach maintains the original structure of the sensor data by em- ploying separate output layers for each dimension. This allows the model to independently learn the characteristics of each sensor axis while still leveraging the Transformer’s attention mechanism to capture inter-axis dependencies implicitly. Binning process :A discretization method for input numerical data to facilitate self-supervised learning in the output layer. Pre-Training algorithm :A proposed Pre-Training algorithm designed for the proposed architecture. While Soft Sensing Transformer [31] also applies linear transforma- tions to sensor inputs, our approach further introduces a binning process for efficient self-supervised learning and parallel linear layers in the out- put layer to handle multi-dimensional data without requiring additional clustering or tokenization. Through this design, we propose an effective architecture for HAR tasks that differs from similar studies in its struc- tural approach. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 6 Notably, this method is not limited to Transformer encoders and can be similarly applied to Transformer decoders. This chapter provides a detailed explanation of each of these compo- nents. 
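Before the component-by-component details, the sketch below shows how these pieces fit together: a linear embedding of each n-dimensional reading, a Transformer encoder, parallel per-dimension output heads over binned labels, and a cross-entropy loss averaged over dimensions. This is a minimal PyTorch-style illustration under simplifying assumptions: the paper's experiments use DistilBERT and GPT-2 backbones, whereas a generic nn.TransformerEncoder stands in here, positional encodings are omitted, per-batch minimum and maximum values are used for binning instead of dataset-level statistics, and all layer sizes and the bin count are illustrative placeholders rather than the authors' exact settings.

```python
# Minimal sketch of the proposed components (not the authors' implementation).
import torch
import torch.nn as nn


class SensorTransformer(nn.Module):
    def __init__(self, n_dims=3, d_model=768, n_bins=100, n_layers=6, n_heads=12):
        super().__init__()
        # Embedding with a linear layer: each time step (n_dims values) -> d_model vector
        self.embed = nn.Linear(n_dims, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Parallel output linear layers: one classification head per sensor dimension
        self.heads = nn.ModuleList([nn.Linear(d_model, n_bins) for _ in range(n_dims)])

    def forward(self, x):                          # x: (batch, L, n_dims)
        h = self.encoder(self.embed(x))            # (batch, L, d_model)
        return [head(h) for head in self.heads]    # n_dims tensors of shape (batch, L, n_bins)


def bin_labels(x, k):
    """Min-max scale each dimension to [0, 1] and discretize into k bins.

    Per-batch min/max are used here for brevity; the paper derives them from the
    full dataset after clipping extreme values.
    """
    x_min = x.amin(dim=(0, 1), keepdim=True)
    x_max = x.amax(dim=(0, 1), keepdim=True)
    scaled = (x - x_min) / (x_max - x_min + 1e-8)
    return torch.clamp((k * scaled).long(), max=k - 1)  # (batch, L, n_dims) integer labels


# Reconstruction-style pre-training step: average cross-entropy over all dimensions
model = SensorTransformer()
x = torch.randn(8, 300, 3)                         # 8 sequences of length 300, triaxial data
labels = bin_labels(x, k=100)
logits = model(x)
loss = sum(nn.functional.cross_entropy(logits[i].reshape(-1, 100),
                                        labels[..., i].reshape(-1))
           for i in range(3)) / 3
loss.backward()
```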
Figure 1: Conventional and proposed architectures
3.1 Embedding with Linear Layer
In traditional natural language models, an embedding layer is used to convert
input data into embedding vectors. In contrast, our method re- places the embedding layer with a linear layer to transform the input data x∈Rninto an embedding vector h∈Rd. The proposed embedding using a linear layer is defined as follows: h=Wx+b •x∈Rnis the input n-dimensional numerical data, Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 7 •W∈Rd×nis a learnable weight matrix, •b∈Rdis a learnable bias term, •h∈Rdis the resulting embedding representation. Through this linear layer, the input n-dimensional numerical data is pro- jected into the embedding space and subsequently fed into the Trans- former encoder. 3.2 Binning Process When continuous numerical data is learned as a regression task, it of- ten fails to achieve satisfactory performance. To address this issue, our method discretizes continuous numerical data, enabling it to be learned as a classification task. This binning process generates the ground-truth labels necessary for self-supervised learning. The input data xi(numerical data for the i-th dimension) is divided into an arbitrary number of kbins and converted into discrete labels yi. The binning process is conducted as follows: Standardizing the numerical scale : For both the Pre-Training dataset and the downstream learning dataset, the numerical data xifor each di- mension iis standardized using the minimum value xminand maximum value xmax: xscaled i =xi−xmin xmax−xmin, x min≤xi≤xmax,0≤xscaled i≤1 Discretizing via binning : The standardized data xscaled i is divided into kbins, and discrete labels yiare generated using the floor function: yi= min(j k·xscaled ik , k−1), y i∈ {0,1, . . . , k −1} This process is performed independently for each dimension iof the in- put data. By doing so, ground-truth label data for self-supervised learning is generated, which serves as the target data for loss calculation in subse- quent sections. 3.3 Parallel Linear Layers for the Output In traditional language Transformer models used for natural language pro- cessing, the output layer consists of a single linear layer for predictions. In contrast, our method introduces multiple parallel linear layers, each corresponding to a dimension of the input data. This design allows the representation vectors output by the Transformer encoder to be appropri- ately processed for each dimension, enabling self-supervised learning. When the input data consists of n-dimensional numerical data, the output layer includes nparallel linear layers. Each linear layer fi(i= 1, . . . , n ) corresponds to the i-th dimension of the input. For the representation vector H∈Rdobtained from the Transformer encoder, each linear layer fiis defined as follows: ˆyi=fi(H) =WiH+bi(i= 1,2, . . . , n ) Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 8 •H∈Rdis the representation vector from the Transformer encoder. •Wi∈Rc×dis the weight matrix of the i-th linear layer. •bi∈Rcis the bias term of the i-th linear layer. •ˆyi∈Rcis the output of the i-th linear layer. •cis the number of output classes (corresponding to the number of bins in the binning process). Thus, each linear layer corresponds to a specific dimension of the input data, taking the representation
vector Has input and generating the output for the respective dimension. During Pre-Training, cross-entropy loss is computed using the labels generated from the binning process. The loss Lifor each dimension iis expressed as: Li=−cX j=1yi,jlog ˆyi,j •yi,j∈ {0,1}is the ground-truth label for class jin the i-th dimension (represented as a one-hot vector). •ˆyi,j∈[0,1] is the predicted probability for the corresponding class (softmax output). The final loss function Lis defined as the average loss across all di- mensions: L=1 nnX i=1Li Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 9 3.4 Pre-Training Algorithm Algorithm 1 Pre-training Algorithm Require: X∈R(N×n){N: Number of samples, n: n-dimensional sensor data } model; batch size; learning rate η; number of bins k; loss function L Ensure: Trained model model∗ 1:{Step 1: Binning Process for Data } 2:fori= 1tondo 3:X[:, i]scaled←X[:,i]−min(X[:,i]) max(X[:,i])−min(X[:,i]) 4:Y[:, i]←min( k·X[:, i]scaled , k−1) 5:end for 6:{Step 2: Pretraining the Model } 7:while lossLhas not converged do 8:forXbatch, Ybatch in Loader( X, Y, batch size)do 9: Z←model( Xbatch){Model output } 10: L ← Loss( Z, Y batch) 11: optim.step () 12: end for 13:end while 14:return model∗ The training algorithm using the n-dimensional numerical processing Transformer encoder is presented in Algorithm 1. It is constructed by combining the methods described in Sections 3.1, 3.2, and 3.3. In Step 1, the binning process described in Section 3.2 is applied to generate labels based on the number of bins k. In Step 2, the input data is fed into the linear layer described in Section 3.1, and predictions for each dimension’s labels are made using the output linear layers described in Section 3.3. The model is trained by updating its parameters to minimize the cross- entropy loss. This algorithm is not limited to encoder models and can also be applied to decoder models. 4 Experimental Setup In this experiment, we perform Pre-Training and downstream learning using 3-dimensional numerical sensor data. The input data consists of fixed-length sequences of multi-dimensional numerical sensor data (here- after referred to as ”sequences”), which are used to construct both the Pre-Training and the downstream learning task of activity recognition. What is sequence length? : Sensor data is collected continuously over time, giving each data point a temporal structure. The sequence length refers to the number of data points extracted when segmenting this time- series data into fixed lengths. For example, when the sequence length is set to 300, one sequence consists of 300 consecutive data points. Each Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 10 data point is represented as an n-dimensional numerical vector (e.g., the x, y, and z axes of an accelerometer). Pre-Training : Using a fixed sequence length of 300, we employ the sensor dataset capture24 [3] for Pre-Training. The proposed method de- scribed in Chapter 3, including linear embedding of n-dimensional input data, binning, and parallel linear layers in the output layer, is used to perform Pre-Training with the Transformer encoder (DistilBERT) and Transformer decoder (GPT-2). Downstream
Learning : As the downstream task, activity recognition is performed on five datasets: ADL [2], Opportunity [22], PAMAP2 [21], REALWORLD [24], and WISDM [11]. For each dataset, the data is seg- mented into sequences of length 300, and the task involves predicting a single activity label for each sequence. The label corresponds to the activ- ity associated with the sensor data at the central point of each sequence. Through this setup, we evaluate the effectiveness of the sensor data features learned during Pre-Training on the downstream activity recog- nition tasks. This chapter details the specific experimental settings for Pre-Training. 4.1 Pre-Training Setup Algorithm 2 Pretraining Algorithm Using Sequence Data Require: X∈R(N×n){N: Number of samples, n: Number of dimensions } Sequence length: 300; Number of bins k; Learning rate η; Model model Ensure: Trained model∗ 1:{Step 1: Prepare Training Data } 2:X←normalize( X){Normalize sensor data } 3:num seq←N//300{Total number of sequences } 4:Xseq←reshape( X,[num seq,300, n]) 5:Yseq←discretize( Xseq, k){Discretize each dimension into bins } 6:{Step 2: Pretrain the Model } 7:while LossLhas not converged do 8:forXbatch, Ybatch in Loader( Xseq, Yseq)do 9: Z←model( Xbatch){Input sequences into the model } 10: L ← Loss( Z, Y batch){Calculate loss } 11: w←w−η· ∇wL {Update parameters using gradients } 12: end for 13:end while 14:return model∗ Algorithm 2 outlines the Pre-Training algorithm used in this experi- ment. Pre-Training is performed using the capture24 dataset, which con- tains 3-axis numerical data (x, y, z) from accelerometer sensors. Preprocessing : First, the maximum, minimum, and mean values are calculated after removing the top 5% and bottom 5% of the data. The Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 11 top 5% and bottom 5% values are then reassigned to the calculated maxi- mum and minimum values, respectively, to mitigate the adverse effects of extreme values on model training. Next, any missing values are imputed with the mean value. Finally, Min-Max scaling is applied to normalize the data to a range of 0 to 1. This preprocessing is performed independently for each sensor dimension. Input Data Reshaping : The preprocessed sensor data is reshaped into the format (number of data points, sequence length, number of sensor dimensions). This format constructs the data as sequences to be input into the model. The sequence length is set to 300. Data Discretization : The reshaped data is discretized by dividing the values into bins, creating label data for each sensor dimension. For this binning process, two bin sizes (100 and 1000) are used to create the labels. A smaller number of labels simplifies the information to be learned and can enhance generalization performance but risks missing important features. Conversely, a larger number of labels allows the model to learn finer- grained features of the sensor data but may lead to overfitting by over- adapting to sensor measurement noise, potentially reducing generalization performance. Preparation : The prepared data includes fixed-sequence-length sen- sor data as input and corresponding binned labels. These are used for model input and loss calculation. The output dimensions of
the linear layers in the output layer are adjusted to match the number of labels in the data. Training Loop : The prepared data is divided into batches of size 25 and input into the model. Cross-entropy loss is computed between the model output and the label data. The loss is averaged across sequences and dimensions to derive the overall loss, after which model parameters are updated. The AdamW optimization algorithm is used with a learning rate of 5 ×10−5. In this study, the loss converged sufficiently within one epoch, so the training loop was limited to a single epoch. 4.2 Downstream Learning Setup The pretrained model’s weights, excluding the classification head, are loaded to perform the activity recognition task. As in Pre-Training, fixed-length sequences with a sequence length of 300 are used. For each sequence, the activity label corresponding to the central data point is assigned as the activity label for the entire sequence. The optimization algorithm used is AdamW [16], with a learning rate of 5×10−5. This approach evaluates the performance of the Pre-trained model on the activity recognition task. Details of the algorithm are provided in the appendix. 4.3 Setup For Each Pre-Training Task In this study, three Pre-Training tasks were used: a reconstruction task, a Masked Language Modeling (MLM) task, and a Next token prediction task. The model architecture used for each Pre-Training method remains Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 12 consistent; however, modifications were made to the dataset class respon- sible for creating batches according to the specific requirements of each task. This section explains the settings of the dataset class used for each Pre-Training task. 4.3.1 Pre-Training : Reconstruction The reconstruction task involves providing multi-dimensional numerical input data to the model and comparing the logits obtained from the model’s output layer with the pre-discretized data to reconstruct the orig- inal discretized data. In this experiment, the encoder model DistilBERT was used. For this task, the loss is computed using cross-entropy between the logits from the output layer and the discretized data. All data points from the logits are included in the loss calculation, and no data points are excluded. The reconstruction task is performed as a classification task across all data points obtained from the model. 4.3.2 Pre-Training : MLM(Masked Language Modeling) The Masked Language Modeling (MLM) task is a Pre-Training task de- signed to enable the model to learn contextual information by masking a portion of the input sequence and predicting the masked values. In this experiment, the encoder model DistilBERT was used. For this task, 25% of the data points are randomly selected and masked across all dimensions and within the sequence range of the input sequence. The masked positions are replaced with a mask value of -100.0. The logits corresponding to unmasked positions are excluded from the cross-entropy calculation, and the loss is computed solely based on the masked positions. 4.3.3 Pre-Training : Next Token Prediction The next-token prediction task is a Pre-Training task primarily performed using decoder models. In
this experiment, the decoder model GPT-2 was employed. The objective of the next-token prediction task is to enable the model to learn the ability to predict the next data point by referencing only the past information up to any given point. In this experiment, the decoder version of the n-dimensional numerical processing Transformer model was used, and the task was designed to predict the multi-dimensional nu- merical data following a specific data point. The next multi-dimensional numerical data is predicted as discretized classification labels. For loss calculation, cross-entropy loss was not computed over the entire sequence. Instead, only a specific range was targeted. Specifically, for a sequence length of 300, the first 70 data points in the sequence were excluded from the cross-entropy calculation. Loss calculation for next- token prediction was performed only for the data points following this range. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 13 This configuration was adopted based on the consideration that the early portion of the sequence lacks sufficient contextual information for meaningful next-token prediction. As such, including it in the loss calcu- lation was unlikely to contribute to effective learning. 5 Result and Discussion This chapter presents the experimental results. 5.1 Assumptions For Pre-Training, we utilized the capture24 dataset [3]. For downstream tasks, we employed the ADL [2], Opportunity [22], PAMAP2 [21], REAL- WORLD [24], and WISDM [11] datasets. Additionally, confusion matrices and loss transition plots are presented for further evaluation. The input data consisted of 3-dimensional accelerometer sensor data (x, y, and z axes) with a sequence length of 300. The task was defined as predicting a single user activity label for each sequence. This task was also used for performance comparison with ResNet [8], Random Forest [1], and Vanilla Transformer [26] models. Regarding the Vanilla Transformer method shown in Table 2, the 3- dimensional sensor data was sequentially arranged as input. This is be- cause the Vanilla Transformer requires a 1-dimensional sequence of to- kens as input. For this purpose, the sensor data was first scaled using min-max normalization. Subsequently, all values were scaled up accord- ing to a predefined vocabulary size, truncated to discard decimal points, and converted into integer values. These integer values served as token IDs, forming the input token sequence for the Vanilla Transformer. As this experiment focused on 3-dimensional sensor data, the input to the Vanilla Transformer consisted of a 1-dimensional token sequence with a length of 900. The Pre-Training method employed was a Masked Language Model (MLM) task. 5.2 Results 5.2.1 Intrinsic Evaluation Table 1: Intrinsic Performance Comparison modelsPAMAP2 REALWORLD acc f1 acc f1 Vanilla Transformer vocab size:10000 0.7126 0.6940 0.7479 0.7446 Vanilla Transformer vocab size:30000 0.6864 0.6454 0.7320 0.7252 1-Dimensional Numerical Processing Transformer bin size:100 0.7758 0.7721 0.7943 0.7947 1-Dimensional Numerical Processing Transformer bin size:1000 0.7811 0.7777 0.7963 0.7969 Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 14 Table 1 presents the results of a comparison between the Vanilla Trans- former and our proposed method,
using only 1-dimensional acceleration data from the X-axis for training. Both models are based on DistilBERT and were pre-trained on the capture24 dataset. After Pre-Training, down- stream tasks were conducted on respective datasets, and performance was evaluated. The evaluation metrics used were accuracy and weighted F1-score. In the case of the Vanilla Transformer, performance decreased on both the PAMAP2 and REALWORLD datasets when the vocabulary size was set to 30,000 compared to 10,000. This performance degradation may be attributed to reduced learning efficiency caused by an excessively large vocabulary size. Conversely, the proposed method outperformed the Vanilla Trans- former under both experimental conditions. This indicates that the em- bedding layer implemented with a linear transformation, the binning pro- cess for numerical data, and the output linear layer for classification tasks in the proposed method effectively contributed to feature extraction from the sensor data. 5.2.2 Extrinsic Evaluation Table 2: Extrinsic Performance Comparison modelsADL Opportunity PAMAP2 REALWORLD WISDM acc f1 acc f1 acc f1 acc f1 acc f1 ResNet18 [8] 0.9417 0.9430 0.7960 0.7982 0.8927 0.8912 0.9397 0.9394 0.9240 0.9243 Random Forest [1] 0.8583 0.8411 0.7143 0.6757 0.7230 0.6910 0.8205 0.8196 0.7329 0.7303 Vanilla Transformer(DistilBERT[23])vocab size:10000 0.8583 0.8553 0.7272 0.7245 0.7561 0.7527 0.7807 0.7819 0.7923 0.7924 Vanilla Transformer(DistilBERT[23])vocab size:30000 0.6220 0.6139 0.7002 0.6931 0.6812 0.6595 0.7683 0.7710 0.7123 0.7040 SENvT-u4 [19] 0.9210 0.8930 0.7681 0.7623 0.8592 0.8556 0.9071 0.9170 0.8741 0.8722 SENvT-contrastive [19] 0.9212 0.8994 0.7692 0.7812 0.8470 0.8351 0.9091 0.9199 0.8900 0.8890 DistilBERT(Reconstruction)bin size:100 0.9260 0.9230 0.8166 0.8130 0.8659 0.8659 0.9176 0.9179 0.9130 0.9127 DistilBERT(Reconstruction)bin size:1000 0.8938 0.8912 0.8021 0.7986 0.8787 0.8791 0.9034 0.9042 0.9082 0.9075 DistilBERT(MLM)bin size:100 0.9481 0.9469 0.8138 0.8133 0.9025 0.9024 0.9186 0.9188 0.9062 0.9058 DistilBERT(MLM)bin size:1000 0.9606 0.9599 0.8228 0.8245 0.9100 0.9103 0.9122 0.9127 0.8974 0.8971 GPT2-small(Next Token Prediction)bin size:100 0.9496 0.9491 0.8045 0.8049 0.8840 0.8844 0.9181 0.9184 0.8965 0.8967 GPT2-small(Next Token Prediction)bin size:1000 0.9307 0.9275 0.8122 0.8079 0.8745 0.8751 0.9088 0.9091 0.8964 0.8963 GPT2-medium(Next Token Prediction)bin size:100 0.9417 0.9379 0.8021 0.8027 0.8686 0.8679 0.9177 0.9178 0.8959 0.8961 GPT2-medium(Next Token Prediction)bin size:1000 0.9528 0.9510 0.8115 0.8079 0.8885 0.8888 0.9112 0.9113 0.8865 0.8871 DistilBERT (without Pre-Trainig) 0.8567 0.8484 0.7693 0.7587 0.8188 0.8187 0.8957 0.8964 0.8856 0.8853 GPT2-small (without Pre-Trainig) 0.8708 0.8553 0.7175 0.6970 0.8111 0.8104 0.8525 0.8534 0.8435 0.8437 GPT2-midium (without Pre-Trainig) 0.8504 0.8159 0.7032 0.6588 0.7805 0.7778 0.8454 0.8466 0.8630 0.8636 Note: ResNet18, Random Forest, and Vanilla Transformer are employed as baseline models. In contrast, models based on DistilBERT and GPT-2 incorporate the proposed methods. The table compares their performance under various pre-training methods as well as cases without pre-training. For comparison, baseline models including ResNet, Random Forest, and Vanilla Transformer were employed. Additionally, the proposed meth- ods, based on DistilBERT and GPT-2, were evaluated. 
The results include the performance of various pre-training methods as well as cases without pre-training. When comparing the performance of the Vanilla Transformer and the proposed methods, the proposed methods outperformed the Vanilla Transformer across all datasets. In the Vanilla Transformer, continuous numerical
data is converted into discrete token IDs, which are then transformed into representation vectors through an embedding layer. This process can potentially disrupt the continuity and relationships inherent in the nu- merical data. Furthermore, due to the architectural design of the Vanilla Trans- former, the embedding layer only accepts discrete token IDs as input, making it incapable of directly processing multidimensional sensor data as a single data point. This limitation complicates handling multidimen- sional data obtained from multiple sensor axes. In contrast, the proposed method uses a linear embedding layer, enabling the model to input mul- tidimensional data as a single data point. This fundamental difference significantly impacts the ability to process sensor data effectively. These architectural limitations and the difficulty of representing small contin- uous changes in multidimensional sensor data using discrete token IDs likely contributed to the lower performance of the Vanilla Transformer. On the ADL, Opportunity, and PAMAP2 datasets, the proposed meth- ods outperformed ResNet and Random Forest. Particularly on the Op- portunity dataset, the proposed methods achieved higher accuracy and F1 scores than ResNet for all pre-training methods. However, on the RE- ALWORLD and WISDM datasets, ResNet outperformed the proposed methods, and none of the pre-training methods surpassed ResNet’s per- formance. Within the proposed methods, reconstruction tasks achieved the high- est performance only on the WISDM dataset. On the other datasets, the Masked Language Modeling (MLM) task consistently recorded the best performance, suggesting that the MLM task is relatively effective for training language transformer-based models in the proposed framework. Pre-training using the next-token prediction task did not show any significant advantage over the other two methods in downstream tasks. For the discretization of sensor data during pre-training, comparisons were made using bin sizes of 100 and 1000. However, no clear impact of bin size on performance was observed in this experiment. SENvT-u4 and SENvT-contrastive [19] outperform the Vanilla Trans- former in five downstream tasks. This demonstrates that the mechanism of treating one-dimensional patches as embeddings in the SENvT is ef- fective. However, DistilBERT (MLM) outperformed both SENvT-u4 and SENvT-contrastive in all five downstream tasks. This indicates that our DistilBERT (MLM) performed even better, suggesting that the proposed approach leads to high performance. As is mentioned in Section 2.3, SENvT-u4 handles patches while our DistilBert (MLM) handles each data point projected in a linear manner. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 16 5.2.3 Analysis of Oppotunity Figure 2: Confusion matrix for Oppotunity On the Opportunity dataset, downstream models pre-trained using the Masked Language Modeling (MLM) task achieved the highest accuracy, with improvements of up to 2.68% in accuracy and 2.63% in F1-score compared to other methods. These results surpassed those of both the reconstruction task and the next-token prediction task. Moreover, the performance of all proposed methods exceeded that of ResNet. The confusion matrix in Figure 2 illustrates the performance of down- stream models trained with different pre-training tasks, as well as the performance of the pre-trained ResNet model on the test data. 
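Since the comparisons in this and the following subsections hinge on the MLM pre-training task, a brief sketch of its masking step, as described in Section 4.3.2, is given below: roughly 25% of time steps are masked across all dimensions with the value -100.0, and cross-entropy is computed only at the masked positions. The function name, tensor shapes, and the assumption that the model returns one logit tensor per dimension are illustrative choices, not the authors' code.

```python
import torch
import torch.nn.functional as F

MASK_VALUE = -100.0  # mask value for continuous inputs (Section 4.3.2)


def mlm_loss(model, x, labels, mask_ratio=0.25):
    """One MLM pre-training step on sensor sequences.

    x:      (batch, L, n_dims) continuous sensor inputs
    labels: (batch, L, n_dims) binned integer targets from the binning process
    """
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio  # (batch, L) masked time steps
    x_masked = x.clone()
    x_masked[mask] = MASK_VALUE            # mask selected positions across all dimensions
    logits = model(x_masked)               # assumed: list of (batch, L, n_bins), one per dimension
    loss = 0.0
    for i, logit in enumerate(logits):
        # cross-entropy is computed only over the masked positions
        loss = loss + F.cross_entropy(logit[mask], labels[..., i][mask])
    return loss / len(logits)
```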
While the proposed method
using the MLM task demonstrated supe- rior performance on the Opportunity dataset, the reasons for this high performance require further investigation. Specifically, comparative anal- yses with various other tasks represent an important direction for future research. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 17 5.2.4 Analysis of RealWorld Figure 3: Confusion matrix for Oppotunity On the REALWORLD dataset, none of the proposed pre-training tasks outperformed ResNet. However, among the proposed methods, pre-training with the Masked Language Modeling (MLM) task achieved the highest performance, surpassing the other tasks. Both the Opportunity and REALWORLD datasets demonstrated the effectiveness of the MLM task compared to other tasks. However, while all proposed methods outperformed ResNet on the Opportunity dataset, none of them surpassed ResNet on the REALWORLD dataset. This dis- crepancy likely stems from differences in the characteristics of the two datasets. A detailed examination of the differences between the Opportunity and REALWORLD datasets is necessary to determine whether the proposed pre-training tasks themselves require improvement, whether the architec- ture needs refinement, or under what conditions the proposed methods are most effective. This represents an important direction for future re- search. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 18 5.2.5 Pre-Training Loss Values Figure 4: Confusion matrix for Oppotunity Figure 4 illustrates the loss transitions for each pre-training task, specifi- cally for a bin size of 100. The initial random loss value was approximately 4.61. The final training and validation losses for each task are summarized in Table 3. The MLM task exhibited the most unstable loss reduction, Table 3: Final Loss Values for Pre-Training Tasks Pre-Training task Train Loss Validation Loss MLM 2.2396 2.3242 Reconstruction 1.2649 1.0327 Next-token Prediction 1.8121 1.8213 with significant fluctuations during training. Although its final loss val- ues were higher compared to other tasks, it still demonstrated a certain level of convergence. The reconstruction task showed the most stable loss reduction and achieved the lowest final loss values, both for training and validation, indicating high generalization performance on evaluation data. The next-token prediction task yielded intermediate results, with final loss values that did not match those of the reconstruction task. From the earlier performance evaluation table, downstream task per- formance followed the trend: MLM task >Reconstruction task >Next- token prediction task. Despite the reconstruction task achieving the most stable and lowest pre-training losses, its performance in downstream tasks did not surpass the MLM task. This suggests that minimizing pre-training task losses alone does not directly translate to improved downstream per- formance. It is hypothesized that the reconstruction task might have overly adapted to its specific pre-training task, hindering its ability to effectively enhance downstream task performance. In contrast, the MLM task, with its more unstable convergence, may have avoided overfitting to the pre-training task. This likely allowed it to learn more effective features of the sensor data for downstream tasks. 
Therefore, it can be concluded that setting a pre-training task that is Self-supervised Learning Method Using Transformer for Multi-dimensional
Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 19 neither overly simplistic nor excessively complex is crucial when employing n-dimensional numerical data with Transformer models. 6 Model Comparison Table 4: model comparison of parameters Model Model Parameters(Pre-training) Model Parameters(Downstream) hidden size n heads n layers ResNet18 - 3,848,904 - - - DistilBERT bin size:100 44,768,556 44,355,905 768 12 6 GPT2-small bin size:100 85,522,476 85,297,928 768 12 12 Model Parameters : This value represents the total number of trainable parameters during both the pre-training and downstream learning phases, indicating the total number of parameters updated throughout the model Table 5: model comparison of training/inferece times and memory consumption ModelPre-training (min)Downstream (sec)Inference on GPU(ms/sample)Inference on CPU(ms/sample)Memory Usage (MB) ResNet18 - 5 2.2950 4.5370 578.4800 DistilBERT bin size:100 86 71 4.5420 74.0620 957.6400 GPT2-small bin size:100 175 300 6.3373 155.0240 1401.9800 Note: Memory Usage is the amount of memory used when measured by inference in the CPU. Also training and inference times are presented using the Capture24 dataset for pretraining and the PAMAP2 dataset for downstream learning. The following discussion is based on the results presented in Tables 4 and 5, considering the requirements of a practical real-time HAR system. In this setting, the sensor device operates at 30 fps, meaning that each sample, consisting of 300 sensor sequences, must be processed within ap- proximately 33 ms. First, ResNet18 was trained only on the downstream task without any pretraining, requiring only 5 seconds for training. In contrast, the proposed Transformer-based models underwent both pre- training and downstream task learning. Specifically, DistilBERT required 86 minutes and GPT-2 required 175 minutes for pretraining, while the downstream training times were 71 seconds and 300 seconds, respectively. Regarding inference time, all models demonstrated significantly faster in- ference than the 30 fps requirement ( ≤33 ms per sample) in the GPU setting, with ResNet18 achieving 2.30 ms per sample, DistilBERT 4.54 ms per sample, and GPT-2 small 6.34 ms per sample. However, in the CPU setting, ResNet18 remained highly efficient at 4.54 ms per sample, whereas DistilBERT and GPT-2 small exhibited considerably longer infer- ence times of 74.06 ms per sample and 155.02 ms per sample, respectively. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 20 These results indicate that Transformer-based models are impractical for direct deployment in edge environments where GPU acceleration is un- available. In terms of memory footprint (Table 5:RAM usage), ResNet18 required approximately 578 MB, whereas DistilBERT and GPT-2 small consumed 958 MB and 1402 MB, respectively. This substantial increase in memory usage for Transformer-based models is attributed to their multi- layer self-attention mechanisms and large intermediate representations. However, the proposed method demonstrated notable performance im- provements through pretraining. In our experiments, it outperformed ResNet18 on three out of five datasets, indicating that the computational and memory costs are justified by the accuracy gains. 
Thus, while all models satisfy the 30 fps real-time processing requirement under GPU conditions, ResNet18 remains the most advantageous choice for CPU-based real-time HAR applications. Overall, Transformer-based models
exhibit challenges in terms of training cost, CPU inference speed, and memory consumption. However, their ability to achieve superior accu- racy through pretraining represents a significant advantage. On the other hand, ResNet18 requires no pretraining, has a lower training cost, and outperforms Transformer-based models in inference speed and memory efficiency, making it the most practical choice for real-time HAR deploy- ment on edge devices. 7 Conclusion In this study, we proposed a pre-training method utilizing an n-dimensional numerical Transformer model and applied it to activity recognition tasks based on sensor data. The proposed method incorporated three types of pre-training tasks—Masked Language Modeling (MLM), Reconstruc- tion, and Next-Token Prediction—whose performances were systemati- cally compared. Our method outperformed conventional approaches on the ADL, Op- portunity, and PAMAP2 datasets, with the MLM task achieving the high- est performance. Additionally, we compared our method with similar Transformer-based models, including SENvT-u4 and SENvT-contrastive. While these models outperformed the Vanilla Transformer across multiple downstream tasks, our DistilBERT (MLM) achieved even better results, demonstrating the effectiveness of handling each data point through a lin- ear projection rather than patch-based embeddings. Notably, no direct correlation was observed between the reduction in pre-training task losses and downstream performance. Despite exhibiting the most unstable loss reduction, the MLM task yielded the best overall results. For future work, we aim to refine the pre-training approach by ad- justing the difficulty of the MLM task for performance comparison and further investigating the relationship between dataset characteristics and pre-training task effectiveness. These insights will contribute to the de- velopment of more robust and efficient pre-training techniques. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 21 References [1] G´ erard Biau and Erwan Scornet. A random forest guided tour. Test, 25:197–227, 2016. [2] Mastrogiovanni Fulvio Bruno, Barbara and Antonio Sgor- bissa. Dataset for ADL Recognition with Wrist-worn Ac- celerometer. UCI Machine Learning Repository, 2012. DOI: https://doi.org/10.24432/C5PC99. [3] S. Chan Chang, R. Walmsley, J. Gershuny, T. Harms, E. Thomas, K. Milton, P. Kelly, C. Foster, A. Wong, N. Gray, S. Haque, S. Hol- lowell, and A. Doherty. Capture-24: Activity tracker dataset for human activity recognition, 2021. [4] Kaixuan Chen, Dalin Zhang, Lina Yao, Bin Guo, Zhiwen Yu, and Yunhao Liu. Deep learning for sensor-based human activity recog- nition: Overview, challenges, and opportunities. ACM Computing Surveys (CSUR) , 54(4):1–40, 2021. [5] Weiqi Chen, Wenwei Wang, Bingqing Peng, Qingsong Wen, Tian Zhou, and Liang Sun. Learning to rotate: Quaternion transformer for complicated periodical time series forecasting. KDD , 2022. [6] Ian Cleland, Luke Nugent, Federico Cruciani, and Chris Nugent. Leveraging large language models for activity recognition in smart environments. In 2024 International Conference on Activity and Be- havior Computing (ABC) , pages 1–8, 2024. [7] Jacob Devlin. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 , 2018. [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. [9] Sijie Ji, Xinzhe Zheng,
and Chenshu Wu. Hargpt: Are llms zero-shot human activity recognizers?, 2024. [10] Im Y Jung. A review of privacy-preserving human and human activ- ity recognition. International Journal on Smart Sensing and Intelli- gent Systems , 13(1):1–13, 2020. [11] Jennifer R Kwapisz, Gary M Weiss, and Samuel A Moore. Activity recognition using cell phone accelerometers. ACM SigKDD Explo- rations Newsletter , 12(2):74–82, 2011. [12] Oscar D Lara and Miguel A Labrador. A survey on human activity recognition using wearable sensors. IEEE communications surveys & tutorials , 15(3):1192–1209, 2012. [13] Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu- Xiang Wang, , and Xifeng Yan. Enhancing the locality and break- ing the memory bottleneck of transformer on time series forecasting. NeurIPS , 2019. [14] Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X. Liu, and Schahram Dustdar. Pyraformer: Low-complexity pyramidal Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 22 attention for long-range time series modeling and forecasting. ICLR , 2022. [15] Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non- stationary transformers: Exploring the stationarity in time series forecasting. NeurIPS , 2022. [16] I Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 , 2017. [17] Riktim Mondal, Dibyendu Mukhopadhyay, Sayanwita Barua, Pawan Kumar Singh, Ram Sarkar, and Debotosh Bhattacharjee. Chapter 14 - a study on smartphone sensor-based human activity recognition using deep learning approaches. In Janmenjoy Nayak, Bighnaraj Naik, Danilo Pelusi, and Asit Kumar Das, editors, Hand- book of Computational Intelligence in Biomedical Engineering and Healthcare , pages 343–369. Academic Press, 2021. [18] Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Longterm forecasting with transformers. ICLR , 2023. [19] Tsuyoshi Okita, Kosuke Ukita, Koki Matsuishi, Masaharu Kagiyama, Kodai Hirata, and Asahi Miyazaki. Towards llms for sensor data: Multi-task self-supervised learning. UbiComp/ISWC ’23 Adjunct, page 499–504, 2023. [20] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog , 1(8):9, 2019. [21] Attila Reiss and Didier Stricker. Introducing a new benchmarked dataset for activity monitoring. In 2012 16th International Sympo- sium on Wearable Computers , pages 108–109, 2012. [22] Daniel Roggen, Alberto Calatroni, Mirco Rossi, Thomas Hol- leczek, Kilian F¨ orster, Gerhard Tr¨ oster, Paul Lukowicz, David Ban- nach, Gerald Pirkl, Alois Ferscha, Jakob Doppler, Clemens Holz- mann, Marc Kurz, Gerald Holl, Ricardo Chavarriaga, Hesam Sagha, Hamidreza Bayati, Marco Creatura, and Jos´ e del R. Mill` an. Collect- ing complex activity datasets in highly rich networked sensor envi- ronments. In 2010 Seventh International Conference on Networked Sensing Systems (INSS) , pages 233–240, 2010. [23] V Sanh. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 , 2019. [24] Timo Sztyler and Heiner Stuckenschmidt. On-body localization of wearable devices: An investigation of position-aware activity recog- nition. In 2016 IEEE International Conference on Pervasive Com- puting and Communications (PerCom) , pages 1–9, 2016. [25] Akihisa Tsukamoto, Kenji Mase, and Yu Enokibori. 
Integrated utilization of
imu-based human activity recognition datasets across var- ied configurations using mig har dataset. International Journal of Activity and Behavior Computing , 2024(2):1–21, 2024. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 23 [26] A Vaswani. Attention is all you need. Advances in Neural Information Processing Systems , 2017. [27] Yu Wang and Ana Basiri. Advancing human activity recognition us- ing ultra-wideband channel impulse response snapshots. In 2024 In- ternational Conference on Activity and Behavior Computing (ABC) , pages 1–10, 2024. [28] Haixu Wu, Jiehui Xu, Jianmin Wang, , and Mingsheng Long. Auto- former: Decomposition transformers with auto-correlation for long- term series forecasting. NeurIPS , 2021. [29] Neo Wu, Bradley Green, Xue Ben, and Shawn O’Banion. Deep trans- former models for time series forecasting: The influenza prevalence case. arXiv preprint arXiv:2001.08317 , 2020. [30] Sifan Wu, Xi Xiao, Qianggang Ding, Peilin Zhao, Ying Wei, and Junzhou Huang. Adversarial sparse transformer for time series fore- casting. NeurIPS , 2020. [31] Chao Zhang, Jaswanth Yella, Yu Huang, Xiaoye Qian, Sergei Petrov, Andrey Rzhetsky, and Sthitie Bom. Soft sensing transformer: hun- dreds of sensors are worth a single word. pages 1999–2008, 2021. [32] Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. ICLR , 2023. [33] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient trans- former for long sequence time-series forecasting. In Proceedings of the AAAI conference on artificial intelligence , volume 35, pages 11106– 11115, 2021. [34] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient trans- former for long sequence timeseries forecasting. AAAI , 2021. [35] Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. ICML , 2022. [36] Parham Zolfaghari, Vitor Fortes Rey, Lala Ray, Hyun Kim, Sungho Suh, and Paul Lukowicz. Sensor data augmentation from skeleton pose sequences for improving human activity recognition. In 2024 In- ternational Conference on Activity and Behavior Computing (ABC) , pages 1–8, 2024. A Appendix A.1 Dataset Table 6 summarizes the five datasets used for the extrinsic evaluation pre- sented in Table 2. The column ”Class” represents the number of activity types in the activity recognition task. The columns ”Train Samples,” ”Validation Samples,” and ”Test Samples” show the number of samples Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 24 Table 6: Train, Validation, and Test Sample Counts for Datasets Dataset Classes Train Samples Validation Samples Test Samples ADL[2] 5 406 102 127 Oppotunity[22] 4 2730 683 854 PAMAP2[21] 8 1836 459 574 REALWORLD[24] 8 7964 1992 2490 WISDM[11] 18 17916 4480 5600 Capture-24[3] - 722308 187227 - allocated for training, validation, and testing, respectively, in each dataset. These datasets consist of 3-dimensional time-series data, where one sam- ple corresponds to a sequence of 300 timesteps. This format is used as input for each model in the study. Additionally,
the capture-24 dataset, used for pretraining, is included in the table. Since this dataset is not used for the activity recognition task, the ”Class” column is not applicable in this case. Only the ”Train Samples” and ”Validation Samples” are shown for this dataset. A.2 Downstream Algorithm Algorithm 3 Downstream Learning Require: Xseq∈R(num seq×sequence length ×sensor dim) Ylabel∈R(num seq) Pretrained model; batch size batch size; Learning rate η Ensure: Trained model∗ 1:while the training has not converged do 2:forXbatch, Ybatch in Loader( Xseq, Ylabel,batch size)do 3: Z←model( Xbatch) 4: L ← CrossEntropy( Z, Y batch) 5: w←w−η· ∇wL {Update parameters based on the loss } 6:end for 7:end while 8:return model∗ This algorithm 3 describes the downstream learning process conducted after self-supervised learning in the proposed method. The input data is structured as three-dimensional sensor data, consistent with the pretrain- ing phase, where each sample corresponds to a sequence of 300 timesteps. Using this data, activity recognition is performed as a classification task. During downstream learning, the pretrained model is loaded for the activity recognition task. In this process, the weights of the parallel linear layers in the output layer are discarded. A new classification head for activity recognition is introduced, while all other parameters are loaded Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing IJABC: International Journal of Activity and Behavior Computing 25 from the pretrained model. This approach leverages the knowledge ac- quired during pretraining to construct a model tailored for the activity recognition task. Self-supervised Learning Method Using Transformer for Multi-dimensional Sensor Data Processing
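As a concrete illustration of Algorithm 3, the sketch below shows how the downstream phase could look, assuming PyTorch. It is a minimal sketch under stated assumptions, not the released implementation: the pretrained backbone, its hidden size, and all names (DownstreamModel, train_downstream) are hypothetical, and the mean pooling over timesteps is one plausible way to obtain a sequence-level representation before the new classification head.

# Minimal PyTorch sketch of Algorithm 3 (downstream learning); names are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class DownstreamModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                        # pretrained encoder, weights reused
        self.head = nn.Linear(hidden_dim, num_classes)  # new head replacing the pretraining output layers

    def forward(self, x):                               # x: (batch, 300, sensor_dim)
        z = self.backbone(x)                            # assumed output: (batch, 300, hidden_dim)
        return self.head(z.mean(dim=1))                 # pool over timesteps, then classify

def train_downstream(backbone, x_seq, y_label, hidden_dim, num_classes,
                     batch_size=64, lr=1e-4, epochs=10):
    model = DownstreamModel(backbone, hidden_dim, num_classes)
    loader = DataLoader(TensorDataset(x_seq, y_label), batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):                             # stands in for "while not converged"
        for x_batch, y_batch in loader:
            loss = criterion(model(x_batch), y_batch)   # L <- CrossEntropy(Z, Y_batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                            # w <- w - eta * grad_w(L)
    return model

The loop mirrors lines 1 to 7 of Algorithm 3; convergence checking and validation are omitted for brevity.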
https://arxiv.org/abs/2505.21918v1
arXiv:2505.21919v1 [cs.ET] 28 May 2025Towards Efficient Key-Value Cache Management for Prefix Prefilling in LLM Inference Yue Zhu, Hao Yu, Chen Wang, Zhuoran Liu, Eun Kyung Lee IBM T. J. Watson Research Center, Yorktown Heights, NY , USA yue.zhu@ibm.com, yuh@us.ibm.com, chen.wang1@ibm.com, zhuoran.liu@ibm.com, eunkyung.lee@us.ibm.com Abstract —The increasing adoption of large language models (LLMs) with extended context windows necessitates efficient Key- Value Cache (KVC) management to optimize inference perfor- mance. Inference workloads like Retrieval-Augmented Genera- tion (RAG) and agents exhibit high cache reusability, making efficient caching critical to reducing redundancy and improving speed. We analyze real-world KVC access patterns using publicly available traces and evaluate commercial key-value stores like Redis and state-of-the-art RDMA-based systems (CHIME [1] and Sherman [2]) for KVC metadata management. Our work demon- strates the lack of tailored storage solution for KVC prefilling, underscores the need for an efficient distributed caching system with optimized metadata management for LLM workloads, and provides insights into designing improved KVC management systems for scalable, low-latency inference. Index Terms —Prefix Prefill, Key-Value Cache, LLM Inference, Key-Value Store, Distributed Caching I. I NTRODUCTION Large Language Models (LLMs) have shown remarkable ability in tasks like text generation, translation, and question- answering, but their attention architecture introduces signifi- cant challenges. The use of key-value caches (KVC) in atten- tion layer of transformer models, while essential for efficient token generation, requires substantial memory resources. As the input sequence grows, size of the KVC grows linearly with respect to the input length [3]. This often limits a model’s ability to handle long contexts (e.g. document sum- marization, conversational AI) or process multiple requests simultaneously. Techniques like prefix prefill improves LLM performance by caching frequently used prefixes, reducing redundant computations, decreasing time to first token (TTFT), and enhancing overall throughput [4]. These optimizations ex- acerbate the memory challenge by requiring additional storage for staging pre-computed KVCs. Numerous research efforts have focused on addressing the challenges posed by KVC. These include quantization and pruning [5] techniques to reduce memory footprint, as well as more sophisticated approaches such as Grouped- Query Attention (GQA) [6] and Sliding Window Attention (SWA) [7]. Additionally, researchers have explored options to offload KVC to CPU memory [8] or sophisticated hierarchical storage solutions (e.g., Mooncake [9], LMCache [10]) to stage KVC across different storage tiers, allowing for more efficient management of storage resources. Although these works rely This paper has been accepted at IEEE Cloud 2025. The final version will appear in IEEE Xplore.on traditional key-value stores or single-node data structures for metadata management, they overlook the unique access patterns of prefix prefill workloads. They do not efficiently handle KVC’s high-reusbility, dominated sequential accesses with random access patterns, causing performance degrada- tion, increased latency, and scalability bottlenecks in LLM inference. Recent research on KVC management has leveraged dis- tributed system features like RDMA and disaggregated storage for traditional workloads. 
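To make the memory pressure described in the introduction concrete, the following back-of-the-envelope sketch (Python) estimates the KV cache footprint as a function of input length. The per-layer K/V shapes follow the standard multi-head attention layout, and the model dimensions in the example are illustrative assumptions, not figures from this paper.

def kvc_size_bytes(seq_len, num_layers, num_kv_heads, head_dim,
                   bytes_per_elem=2, batch=1):
    """Rough KV cache footprint: two tensors (K and V) per layer, each of
    shape (batch, num_kv_heads, seq_len, head_dim); grows linearly in seq_len."""
    return 2 * num_layers * batch * num_kv_heads * seq_len * head_dim * bytes_per_elem

# Illustrative shapes only (fp16; 32 layers, 32 KV heads, head_dim 128):
print(kvc_size_bytes(4096, 32, 32, 128) / 2**20)   # 2048.0 MiB at a 4K-token context
print(kvc_size_bytes(32768, 32, 32, 128) / 2**30)  # 16.0 GiB at a 32K-token context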
State-of-the-art key-value stores like CHIME [1] and Sherman [2] focus on efficient indexing in disaggregated systems. However, the challenges of prefix prefill workloads in distributed environments remain largely unexplored, particularly in KVC metadata management. In this paper, we analyze KVC block access
pattern from published traces for real-world LLM-serving [9]. Our analysis reveals fundamentally different access patterns associated with KVC-usage from traditional key-value store workloads: (1) high temporal locality for recent tokens, (2) substantial initial token reusability across requests, and (3) the need for a combi- nation of range queries and random access operations. Current key-value stores, such as Redis (used in MoonCake [9]) and FoundationDB (used in DeepSeek 3FS [11]), are not optimized for these unique characteristics. Our evaluation demonstrates that there is a need to develop a KVC management system to efficiently support the combination of range queries and random accesses while leveraging the intrinsic locality in LLM inference workloads. II. P RIOR ARTS& T RACE STUDY State-of-the-art solutions such as Mooncake [9], LM- Cache [10], IMPRESS [12], and DeepSeek 3FS [11] employ different strategies for KVC staging. Mooncake optimizes KVC retrieval and management. LMCache accelerates KVC transfer via compression and dynamic merging without loss in output quality. IMPRESS implements an importance-aware multi-tier storage system to minimize I/O latency. DeepSeek 3FS utilizes a high-performance distributed file system with a chunk-based data store to support various AI workloads, including KVC for inference and training checkpoints. While these solutions aim to reduce redundant KVC com- putation and improve TTFT, they rely on conventional meta- data management approaches. For instance, Mooncake lever- ages Redis, IMPRESS employs a node-local radix tree, and DeepSeek 3FS utilizes FoundationDB. None are tailored for (a) CDF of Block Hit Ratio (b) Block ID Distribution Fig. 1: Block Reusability over 1-Hour Trace optimized metadata management to accommodate the unique access patterns and demands of prefix prefill workloads. To gain deeper insights into the requirements of KVC man- agement in real-world scenario, we analyzed three prefix prefill traces (i.e., conversation, tool&agent, synthetic trace) from Mooncake’s production LLM applications [9]. Due to the page limit, we present tool&agent trace here. The one-hour trace is composed of a sequence of LLM-serving requests, where each request contains the arrival timestamp, input/output-length, a list of block-IDs ( hash id in the trace) for KVC blocks required in the requests. Using the LLM production traces, we examine the access patterns and reusability of KVC blocks. Fig. 1a depicts the cumulative distribution of KVC-block hit rates over all KVC access requests in the trace. For a given request with multiple blocks, the hit-rate of the request is the ratio of the number of blocks that are accessed previously to the total number of blocks in the request. As shown, over 75% of requests have a block hit rate exceeding 50%, demonstrating high KVC reusability. We saw that approximately 40% of requests in the conversation trace and 45% in the synthetic trace achieve a KVC block hit ratio exceeding 50% and 80%, respectively. Figure 1b plots the distribution of KVC blocks block IDs in one hour. The figure shows that both the initially generated KVC and the most recently generated KVC exhibit high reusability over time. The same trend appears in conversation trace, but the synthetic trace shows rather short reuse distance in time. To further analyze block access
patterns, we categorize accesses as sequential (two or more contiguous blocks) or non-sequential within each request. Fig. 2a shows the fraction of sequential blocks in a request over the one-hour span. On average, 86.8% of blocks within each request are sequential, enabling efficient key retrieval via range queries. By leveraging this sequential access pattern, range queries could reduce individual lookups, minimize metadata search overhead, and improve overall retrieval latency. We also analyze the non-sequential block access pattern in Fig. 2b. We use p-values from a randomness runs test to quantify the randomness of these non-sequential block ID distributions, where a block ID with p-value >0.05 indicates a random distribution. As shown in the figure, 89% of block IDs’ p-values are greater than 0.05, with many near 0.5, indicating a highly random distribution of these non-sequential KVC block accesses. Similar sequential and random access patterns are found in the conversational and synthetic traces. (a) Seq Access Rate per Req (b) Runs Test for Non-Seq Block Fig. 2: Sequential & Random Access Pattern in Requests. The analysis exhibits high temporal locality for both recent and initial tokens, along with a mix of sequential and random block access. These characteristics present both opportunities and challenges for optimizing KVC management in LLM applications, highlighting the need for efficient strategies that balance sequential retrieval and handling of randomly accessed blocks. III. EXPERIMENT & IMPLICATIONS To assess the performance of commercial and state-of-the-art key-value stores in metadata management for LLM prefix prefill workloads, we conducted experiments with three systems: CHIME [1], Sherman [2], and Redis. Using the publicly available KVC traces described in Section II, we developed a benchmark to simulate metadata operations based on the timestamp and block hash ID of each KVC block in the request stream. We assume the key is a 32-byte hash ID (SHA-256 output) and the value is an 8-byte address pointing to the corresponding KVC block. We issue range queries for contiguous blocks, and use get() operations for randomly accessed blocks. We conduct the experiments with two nodes, one as client and one as server, directly connected via a 100 Gbps Ethernet link. Each node is equipped with a 36-core Intel Xeon E5-2697 v4 CPU, 1 TiB of DRAM, and a 100 Gbps Mellanox ConnectX-6 NIC. Fig. 3 presents the average p99 latency over the given time interval during the one-hour span of the real application trace, providing insights into the performance variability of different key-value store solutions under realistic workload conditions. In both Fig. 3a and Fig. 3b, Redis exhibits significantly higher latency compared to CHIME and Sherman for both range queries and search operations. This performance gap is primarily due to Redis’s high operational overhead as a full database, whereas CHIME and Sherman are optimized for efficient metadata retrieval in distributed environments. Despite CHIME’s reported advantages over Sherman in YCSB (a) P99 Range Query Latency (b) P99 Search Latency Fig. 3: Normalized P99
Latency Based on Real Trace (Redis=1). benchmarks, our experiments show minimal differences be- tween the two when evaluated on KVC workloads. For range queries in Fig. 3a, SHERMAN outperforms CHIME by 10.3% on average, excluding the first 10 minutes for warm up. Conversely, for search latency in Fig. 3b, CHIME slightly surpasses Sherman by 5.5%, excluding the warm-up period. Further, both systems exhibit significant p99 latency variability in range query. The results suggest that the optimizations in state-of-the-art systems like CHIME and Sherman have insignificant impacts on KVC prefix prefill workloads, show- casing the need for tailored metadata management solutions that better align with the unique access patterns of these workloads. Based on our experiment findings and trends in KVC optimization, we outline key implications of metadata man- agement in KVC systems tailored for prefix prefill workloads. Insight 1: Traditional key-value stores were not well-suited for KVC prefix prefill workloads, which require specialized optimizations to manage both random searches and range queries while maximizing key reusability. When loading KVC from memory/storage stack, Redis’s long metadata indexing time (search and range query latency >0.1 ms) can significantly delay TTFT. Given that the smallest TTFT ranges from 0.44 ms to 0.56 ms when pre-caching KVC on GPU memory [13], such delays can become a major bottleneck in inference performance. Additionally, the structural limitations of existing systems hinder optimal KVC prefilling performance. Sherman’s B+ tree is not optimized for mixed access patterns, while CHIME’s hybrid approach lacks optimizations to fully exploit high key reuse. Their design priorities also do not align with the unique characteristics of KVC workloads. Sherman prioritizes write operations over read-heavy workloads, while CHIME’s caching strategy fails to accommodate sequential chunk access patterns, making both suboptimal for KVC prefix prefill workloads. To fully leverage high key reusability while balancing random search efficiency and large-scale range queries, key-value store architectures must be reimagined, with a focus on caching mechanisms tailored to KVC access patterns. Insight 2: Metadata management overhead will be amplified for new KVC optimization techniques. Chunked prefill and KVC compression are two key opti- mizations aimed at improving KVC prefill efficiency. Chunked prefill (e.g., [10]) introduces finer-granularity chunked KVC, leading to higher volume of metadata operations and increased management overhead. KVC compression methods, such as quantization and context-aware compression [10], reduce KVC size to minimize data transfer times and enhance I/O effi- ciency. As metadata operations increase and KVC data transfer times decrease, the relative impact of metadata operations on overall system performance becomes more pronounced. Insight 3: Traditional YCSB workloads is not sufficient to evaluate key-value store for KVC metadata management. YCSB workloads are designed for generic key-value access patterns with predefined distributions (e.g., uniform, zipfian) and fail to capture the unique characteristics of KVC meta-data access, such as high key reusability and mixed random search/range query patterns. We will need more representative benchmarks to effectively reflect the data access patterns of KVC prefix prefill workloads. IV. 
CONCLUSIONS & FUTURE WORK In this work, we analyzed real-world application traces and identified the high reusability and the mixed sequential-
random access patterns in KVC prefix prefill workloads. To evaluate the efficiency of metadata management in existing KVC staging solutions, including Redis and state-of-the-art key-value stores, we developed a benchmark that measures metadata efficiency using real-world application traces. Our evaluation demonstrates that current solutions are inadequate for handling the unique demands of prefix prefill workloads. We are developing a metadata management system and a hierarchical KVC caching system to optimize range queries and random get queries, aiming to minimize TTFT for long- context inference via KVC prefilling. Our approach features a reuse-optimized metadata caching scheme, a workload-aware index structure balancing sequential block access with fast random lookups, and a hotness-aware data placement strategy for hierarchical caching. Additionally, we will enrich the current benchmark to facilitate comprehensive evaluations of metadata management and the overall KVC caching system. REFERENCES [1] X. Luo, J. Shen, P. Zuo, X. Wang, M. R. Lyu, and Y . Zhou, “CHIME: A Cache-Efficient and High-Performance Hybrid Index on Disaggregated Memory,” in SIGOPS’24 . [2] Q. Wang, Y . Lu, and J. Shu, “Sherman: A Write-Optimized Distributed B+Tree Index on Disaggregated Memory,” in SIGMOD’22 . [3] R. Turganbay, “Unlocking Longer Generation with Key-Value Cache Quantization.” https://huggingface.co/blog/kv-cache-quantization, 2024. Accessed: 2025-02-28. [4] H. Jeong, D. Ahn, Y . Jung, and T. Kim, “vLLM vs TensorRT- LLM 12, Automatic Prefix Caching.” https://blog.squeezebits.com/ vllm-vs-tensorrtllm-12-automatic-prefix-caching-38189, 2024. Ac- cessed: 2025-02-28. [5] J. Zhang, D. Zhu, Y . Song, W. Wu, C. Kuang, X. Li, L. Shang, Q. Liu, and S. Li, “More Tokens, Lower Precision: Towards the Optimal Token-Precision Trade-off in KV Cache Compression,” arXiv preprint arXiv:2412.12706 , 2024. [6] J. Ainslie, J. Lee-Thorp, M. De Jong, Y . Zemlyanskiy, F. Lebr ´on, and S. Sanghai, “GQA: Training Generalized Multi-query Transformer Models from Multi-Head Checkpoints,” arXiv:2305.13245 , 2023. [7] I. Beltagy, M. E. Peters, and A. Cohan, “Longformer: The long- document transformer,” CoRR’20 . [8] Y . Xu, Z. Mao, X. Mo, S. Liu, and I. Stoica, “Pie: Pooling CPU Memory for LLM Inference,” arXiv:2411.09317 , 2024. [9] R. Qin, Z. Li, W. He, J. Cui, F. Ren, M. Zhang, Y . Wu, W. Zheng, and X. Xu, “Mooncake: Trading More Storage for Less Computation—A KVCache-centric Architecture for Serving LLM Chatbot,” in FAST’25 . [10] Y . Liu, H. Li, Y . Cheng, S. Ray, Y . Huang, Q. Zhang, K. Du, J. Yao, S. Lu, G. Ananthanarayanan, M. Maire, H. Hoffmann, A. Holtzman, and J. Jiang, “CacheGen: KV Cache Compression and Streaming for Fast Large Language Model Serving,” in SIGCOMM’24 . [11] DeepSeek, “DeepSeek 3FS.” https://github.com/deepseek-ai/3FS, 2025. Accessed: 2025-02-28. [12] W. Chen, S. He, H. Qu, R. Zhang, S. Yang, P. Chen, Y . Zheng, B. Huai, and G. Chen, “IMPRESS: An Importance-Informed Multi-Tier Prefix KV Storage System for Large Language Model Inference,” in FAST’25 . [13] Y . Tang, R. Cheng, P. Zhou, T. Liu, F. Liu, W. Tang, K. Bae, J. Chen, W. Xiang, and R. Shi, “Exploring cxl-based kv cache storage for llm serving,” NeurIPS’24 Workshop MLforSys .
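For readers who want to reproduce the block-reusability numbers in Section II, the sketch below shows one way to compute the per-request block hit rate from a Mooncake-style trace. The list-of-block-IDs input format and the reading of "accessed previously" as "seen in any earlier request" are our assumptions, not a specification of the published trace.

def block_hit_rates(requests):
    """Per-request hit rate: fraction of a request's KVC block IDs that already
    appeared in earlier requests of the trace (cf. Fig. 1a)."""
    seen = set()
    rates = []
    for block_ids in requests:
        if not block_ids:
            continue
        hits = sum(1 for b in block_ids if b in seen)
        rates.append(hits / len(block_ids))
        seen.update(block_ids)
    return rates

# Toy trace: later requests reuse prefix blocks of earlier ones.
trace = [[1, 2, 3, 4], [1, 2, 3, 5, 6], [1, 2, 7]]
print(block_hit_rates(trace))  # [0.0, 0.6, 0.666...]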
https://arxiv.org/abs/2505.21919v1
FALCON: An ML Framework for Fully Automated Layout-Constrained Analog Circuit Design Asal Mehradfar1Xuzhe Zhao2Yilun Huang2Emir Ceyani1 Yankai Yang2Shihao Han2Hamidreza Aghasi2Salman Avestimehr1 1University of Southern California2University of California, Irvine mehradfa@usc.edu Abstract Designing analog circuits from performance specifications is a complex, multi-stage process encompassing topology selection, parameter inference, and layout feasibil- ity. We introduce FALCON, a unified machine learning framework that enables fully automated, specification-driven analog circuit synthesis through topology se- lection and layout-constrained optimization. Given a target performance, FALCON first selects an appropriate circuit topology using a performance-driven classifier guided by human design heuristics. Next, it employs a custom, edge-centric graph neural network trained to map circuit topology and parameters to performance, enabling gradient-based parameter inference through the learned forward model. This inference is guided by a differentiable layout cost, derived from analytical equations capturing parasitic and frequency-dependent effects, and constrained by design rules. We train and evaluate FALCON on a large-scale custom dataset of 1M analog mm-wave circuits, generated and simulated using Cadence Spectre across 20 expert-designed topologies. Through this evaluation, FALCON demonstrates >99% accuracy in topology inference, <10% relative error in performance prediction, and efficient layout-aware design that completes in under 1 second per instance. Together, these results position FALCON as a practical and extensible foundation model for end-to-end analog circuit design automation. Our code and dataset are publicly available at https://github.com/AsalMehradfar/FALCON . 1 Introduction Analog radio frequency (RF) and millimeter-wave (mm-wave) circuits are essential to modern electronics, powering critical applications in signal processing [ 1], wireless communication [ 2], sensing [3], radar [4], and wireless power transfer systems [5]. Despite their importance, the design of analog circuits remains largely manual, iterative, and dependent on expert heuristics [ 6–8]. This inefficiency stems from several challenges: a vast and continuous design space that is difficult to explore systematically; tightly coupled performance metrics (e.g. gain, noise, bandwidth, and power) that create complex trade-offs; and physical and layout-dependent interactions that complicate design decisions. As demand grows for customized, high-performance analog blocks, this slow, expert-driven design cycle has become a critical bottleneck. While machine learning (ML) has revolutionized digital design automation, analog and RF circuits still lack scalable frameworks for automating the full pipeline from specification to layout. While recent ML approaches have made progress in analog circuit design, they typically target isolated sub-tasks such as topology generation or component sizing [ 9,10] at the schematic level, without addressing the full synthesis pipeline. Many efforts assume fixed topologies [ 11–14], limiting adaptability to new specifications or circuit families. Optimization strategies often rely on black-box methods that do not scale well to large, continuous design spaces [ 15]. Some methods predict Preprint. Under review.arXiv:2505.21923v1 [cs.LG] 28 May 2025 Figure 1: Our AI-based circuit design pipeline. 
Given a target performance specification, FALCON first selects a suitable topology, then generates design parameters through layout-aware, gradient-based reasoning with a GNN model. The synthesized circuit is then validated using Cadence simulations. performance metrics directly from netlists [16], but do not support inverse design, i.e., generating circuit parameters from target specifications. Furthermore, layout awareness is typically handled as a separate post-processing
step [ 17], missing the opportunity to guide optimization with layout constraints. Finally, many available benchmarks are built on symbolic or synthetic simulations [ 18], lacking the fidelity and realism of the process of commercial grade design flows. As a result, current ML pipelines do not allow fully generalizable, layout-aware, and end-to-end analog circuit design. We propose FALCON (Fully Automated Layout-Constrained analOg circuit desigN), a scalable and modular machine learning framework for end-to-end analog and RF circuit design. Built on a dataset of over one million Cadence-simulated circuits, FALCON comprises three core components (Figure 1): (1) a lightweight multilayer perceptron (MLP) selects the most appropriate topology given a target performance specification; (2) a generalizable graph neural network (GNN) maps circuit topology and element-level parameters to performance metrics, operating on a native graph representation derived from Cadence netlists; and (3) gradient-based optimization over the forward GNN model recovers design parameters that meet the target specification, guided by a differentiable layout-aware loss that encodes parasitic effects and physical constraints. Notably, the GNN model in FALCON generalizes effectively to unseen topologies, enabling inverse design across diverse circuit families, even in low-data regimes, with optional fine-tuning for improved accuracy. By integrating layout modeling directly into the optimization process, FALCON unifies schematic and physical considerations within a single differentiable learning framework. Our main contributions are as follows: •We construct a large-scale analog/RF circuit dataset comprising over one million Cadence- simulated datapoints across 20 expert-designed topologies and five circuit types. •We introduce a native netlist-to-graph representation that preserves both structural and parametric fidelity, enabling accurate learning over physical circuit topologies. •We develop a modular ML framework for end-to-end inverse design, incorporating performance-driven topology selection and layout-aware gradient-based optimization, with a differentiable loss that enforces area constraints, design-rule compliance, and frequency- dependent modeling of passive components. •We design a generalizable GNN capable of accurate performance prediction and parameter inference across both seen and unseen topologies, with optional fine-tuning. 2 Related Work While recent ML-based approaches have advanced analog and RF circuit design, they typically target isolated stages of the design flow—such as topology generation, parameter sizing, or schematic-level performance prediction—without supporting unified, end-to-end synthesis. FALCON bridges this gap by jointly addressing aforementioned stages within a single framework. Topology generation methods aim to select or synthesize candidate circuit structures [ 9,19,20], often using discrete optimization or generative models to explore the circuit graph space. However, these approaches typically target low-frequency or simplified designs [ 9] and may produce physically invalid or non-manufacturable topologies. In contrast, FALCON leverages a curated set of netlists, ensuring manufacturable validity and eliminating the need to rediscover fundamental circuit structures. Parameter sizing and performance prediction have been explored through various learning paradigms. 
Reinforcement learning [10, 21] and Bayesian optimization [15, 22] optimize parameters via trial-and-error, often requiring large simulation budgets. Supervised learning methods [23, 24, 11] regress parameter values from performance targets under fixed topologies. Graph-based models [16] incorporate topology-aware representations to predict performance metrics from netlists. However, these approaches focus on forward prediction or black-box sizing and do not support inverse design across varied
topologies. In contrast, FALCON unifies forward modeling and parameter inference in a single differentiable architecture that generalizes to unseen netlists. Layout-aware sizing and parasitic modeling have been explored to mitigate schematic-to-layout mismatch. Parasitic-aware methods [ 25] integrate pre-trained parasitic estimators into Bayesian optimization loops for fixed schematics. While effective for estimation, these approaches rely on time-consuming black-box search and lack inverse design capabilities. Other methods, such as ALIGN [ 26] and LayoutCopilot [ 27], generate layouts from fully sized netlists using ML-based constraint extraction or scripted interactions, but assume fixed parameters and do not support co- optimization or differentiable inverse design. In contrast, FALCON embeds layout objectives directly into the learning loss, enabling joint optimization of sizing and layout without relying on external parasitic models. For mm-wave circuits, our layout-aware loss captures frequency-sensitive constraints via simplified models that implicitly reflect DRC rules, EM coupling, and performance- critical factors such as quality factor and self-resonance frequency. Datasets for analog design are often limited to symbolic SPICE simulations or small-scale testbeds that do not reflect real-world design flows. AnalogGym [ 18] and AutoCkt [ 13] rely on synthetic circuits and symbolic simulators, lacking the process fidelity, noise characteristics, and layout- dependent behavior of foundry-calibrated flows. In contrast, FALCON is trained on a large-scale dataset constructed from over one million Cadence-simulated circuits across 20 topologies and five circuit categories, offering a substantially more realistic foundation for ML-driven analog design. To the best of our knowledge, FALCON is the first framework to unify topology selection, parameter inference, and layout-aware optimization in a single end-to-end pipeline, validated at scale using industrial-grade Cadence simulations for mm-wave analog circuits. 3 A Large-Scale Dataset and Inverse Design Problem Formulation 3.1 Dataset Overview We construct a large-scale dataset of analog and RF circuits simulated using industry-grade Cadence tools [ 28] with a 45nm CMOS process design kit (PDK). The dataset spans five widely used mm-wave circuit types for wireless applications [ 29,30]: low-noise amplifiers (LNAs) [ 31–34], mixers [ 35–38], power amplifiers (PAs) [ 39–43], voltage amplifiers (V As) [ 44–48], and voltage-controlled oscillators (VCOs) [ 49–53]. Each circuit type is instantiated in four distinct topologies, resulting in a total of 20 expert-designed architectures. For each topology, expert-designed schematics were implemented in Cadence Virtuoso, and key design parameters were manually identified based on their functional relevance. Parameter ranges were specified by domain experts and systematically swept using Cadence ADE XL, enabling parallelized Spectre simulations across the design space. For each configuration, performance metrics—such as gain, bandwidth, and oscillation frequency—were extracted and recorded. Each datapoint therefore includes the full parameter vector, the corresponding Cadence netlist, and the simulated performance metrics. The resulting dataset comprises over one million datapoints, capturing a wide range of circuit behaviors and design trade-offs across diverse topologies. This large-scale, high-fidelity dataset forms the foundation for training and evaluating our inverse design pipeline. 
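To fix ideas, each entry in the dataset can be thought of as a small record bundling the swept parameters, the netlist, and the simulated metrics. The sketch below is a hypothetical Python representation; the field names and example values are ours for illustration and do not describe the released file format.

from dataclasses import dataclass
from typing import Dict

@dataclass
class CircuitDatapoint:
    """One simulated design: topology label, resolved parameter values,
    the Cadence netlist, and the extracted performance metrics."""
    topology: str                  # one of the 20 expert-designed topologies, e.g. "CSLNA"
    parameters: Dict[str, float]   # swept design parameters, e.g. {"W1": 20e-6, "R3": 1.2e3}
    netlist: str                   # raw Cadence netlist for this configuration
    metrics: Dict[str, float]      # simulated metrics, e.g. {"VGain": 15.0, "NF": 2.4}

sample = CircuitDatapoint(
    topology="CSLNA",
    parameters={"W1": 20e-6, "L1": 45e-9, "R1": 500.0},
    netlist="* netlist text omitted *",
    metrics={"VGain": 15.0, "NF": 2.4, "BW": 4.0e9},
)
print(sample.topology, len(sample.metrics))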
3.2 Graph-Based Circuit Representation To enable flexible and topology-agnostic learning, we represent each analog circuit as a graph extracted from its corresponding Cadence netlist. Nodes correspond to voltage nets (i.e., electrical connection points), and edges represent
circuit elements such as transistors, resistors, capacitors, or sources. Multi-terminal devices—such as transistors and baluns—are decomposed into multiple edges, and multiple components may connect the same node pair, resulting in heterogeneous, multi-edged graphs that preserve structural and functional diversity. 3 Recent works such as DICE [ 54] have explored transistor-level circuit-to-graph conversions for self-supervised learning, highlighting the challenges of faithfully capturing device structure and connectivity. In contrast, our approach maintains a native representation aligned with foundry- compatible netlists. Rather than flattening or reinterpreting device abstractions, we preserve symbolic parameters, multi-edge connections, and device-specific edge decomposition directly from the schematic source, enabling scalable learning across diverse analog circuit families. To support learning over such structured graphs, each edge is annotated with a rich set of attributes: (i) a categorical device type , specifying the component and connected terminal pair (e.g., NMOS drain–gate, resistor); (ii) numeric attributes , such as channel length or port resistance, fixed by the schematic; (iii) parametric attributes , defined symbolically in the netlist (e.g., W1,R3) and resolved numerically during preprocessing; (iv) one-hot categorical features , such as source type (DC, AC, or none); and (v) computational attributes , such as diffusion areas ( Ad,As) derived from sizing. This rule-based graph construction generalizes across circuit families without task-specific customization. Graphs in the FALCON dataset range from 4–40 nodes and 7–70 edges, reflecting the variability of practical analog designs. 3.3 Inverse Design Problem Definition In analog and RF circuit design, the traditional modeling process involves selecting a topology Tand parameter vector x, then evaluating circuit behavior via simulation to obtain performance metrics y=f(T, x). This forward workflow depends heavily on designer intuition, manual tuning, and exhaustive parameter sweeps. Engineers typically simulate many candidate (T, x)pairs and select the one that best satisfies the target specification—a slow, costly, and unguided process. In contrast, our goal is to perform inverse design : given a target performance specification ytarget, we aim to directly infer a topology and parameter configuration (T, x)such that f(T, x)≈ytarget, without enumerating the full design space. This inverse problem is ill-posed and the search space is constrained by both device-level rules and layout-aware objectives. Formally, the task is to find the optimal topology T∗∈ T and the optimal parameters x∗∈Rp such that f(T∗, x∗)≈ytargetwhere f:T ×Rp→Rdthe true performance function implemented by expensive Cadence simulations. In practice, fis nonlinear and non-invertible, making direct inversion intractable. FALCON addresses this challenge through a modular, three-stage pipeline: Stage 1: Topology Selection. We frame topology selection as a classification problem over a curated set of Kcandidate topologies {T1, . . . , T K}. Given a target specification ytarget, a lightweight MLP selects the topology T∗∈ T most likely to satisfy it, reducing the need for exhaustive search. Stage 2: Performance Prediction. Given a topology Tand parameter vector x, we train a GNN fθ to predict the corresponding performance ˆy=fθ(T, x). 
This model emulates the forward behavior of the simulator f, learning a continuous approximation of circuit performance across both seen and unseen topologies. By capturing the topology-conditioned mapping from parameters to performance, fθ serves as
a differentiable surrogate that enables gradient-based inference in the next stage. Stage 3: Layout-Aware Gradient Reasoning. Given ytargetand a selected topology T∗, we infer a parameter vector x∗by minimizing a loss over the learned forward model fθ. Specifically, we solve: x∗= arg min xLperf(fθ(T∗, x), ytarget) +λLlayout(x), (1) where Lperfmeasures prediction error, and Llayout encodes differentiable layout-related constraints such as estimated area and soft design-rule penalties. Optimization is performed via gradient descent, allowing layout constraints to guide the search through a physically realistic parameter space. 4 Stage 1: Performance-Driven Topology Selection Task Setup. We formulate topology selection as a supervised classification task over a fixed library of 20 expert-designed circuit topologies T={T1, T2, . . . , T 20}. Rather than generating netlists from scratch—which often leads to invalid or impractical circuits—we select from a vetted set of designer-verified topologies. This ensures that all candidates are functionally correct, layout-feasible, and manufacturable. While expanding the topology set requires retraining, our lightweight MLP classifier enables rapid updates, making the approach scalable. This formulation also aligns with practical design workflows, where quickly identifying a viable initial topology is critical. 4 Figure 2: In Stage 1, an MLP classifier selects the most suitable circuit topology from a library of human-designed netlists, conditioned on the target performance specification.Table 1: Classification performance on topology selection. Metric Score (%) Accuracy 99.57 Balanced Accuracy 99.33 Macro Precision 99.27 Macro Recall 99.33 Macro F1 99.30 Micro F1 99.57 Each datapoint is represented by a 16-dimensional performance vector of key analog/RF metrics1. We normalize features using z-scores computed from the training set. Missing metrics (e.g., oscillation frequency for amplifiers) are imputed with zeros, yielding zero-centered, fixed-length vectors that retain task-relevant variation. Dataset splits are stratified to preserve class balance across training, validation, and test sets. We assume each target vector is realizable by at least one topology in T, though the library can be extended with new designs. Model Architecture and Training. We train a 5-layer MLP with hidden size 256 and ReLU activations for this problem. The model takes the normalized performance vector ytarget∈R16as input and outputs a probability distribution over 20 candidate topologies. The predicted topology is selected as T∗= arg max Tk∈TMLP(ytarget)k. We train the model using a cross-entropy loss and the Adam optimizer [ 55], with a batch size of 256. An overview of this process is shown in Figure 2. Evaluation. We begin by assessing the quality of the input representation used for topology classifi- cation. Normalized performance vectors encode rich semantic information about circuit behavior. To validate this, we project them into a two-dimensional t-SNE space [ 56] (Figure 3(a)). The re- sulting clusters align closely with topology labels, indicating that performance specifications reflect underlying schematic structure and are effective inputs for supervised classification. We assess classification performance using accuracy, balanced accuracy, macro precision, macro recall, macro F1, and micro F1 scores on the test set. 
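Before turning to the details of each stage, the optimization in Eq. (1) above can be sketched directly in PyTorch: the forward model is frozen and only the parameter vector is updated by gradient descent on the combined performance and layout loss. This is a minimal sketch; forward_model and layout_area stand in for the trained GNN and the analytical layout model, the plain MSE ignores the metric masking and gating used by the full method, and the non-negativity clamp is only a placeholder for the paper's domain-specific bounds.

import torch

def infer_parameters(forward_model, layout_area, y_target, x_init,
                     lam=0.02, lr=1e-6, steps=2000):
    """Gradient reasoning over a frozen forward model (cf. Eq. 1):
    x* = argmin_x  L_perf(f_theta(T*, x), y_target) + lam * L_layout(x)."""
    x = x_init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        y_hat = forward_model(x)                      # frozen f_theta, topology T* fixed
        l_perf = torch.mean((y_hat - y_target) ** 2)  # performance mismatch (unmasked here)
        loss = l_perf + lam * layout_area(x)          # differentiable layout-area penalty
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(min=0.0)                         # placeholder for per-parameter domain bounds
    return x.detach()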
As summarized in Table 1, the classifier achieves an overall accuracy of 99.57%, with macro F1 of 99.30% and balanced accuracy of 99.33%, demonstrating strong generalization across all 20
circuit topologies. Micro F1 (identical to accuracy in the multiclass setting) reaches 99.57%, while macro metrics—averaged equally across classes—highlight robustness to class imbalance. These trends are reinforced by the per-class accuracy plot in Figure 3(c), where most topologies reach 100% accuracy. The confusion matrix in Figure 3(b) visualizes only the misclassified instances, as most classes achieve perfect accuracy. The few observed errors are primarily concentrated among the two voltage amplifier topologies—common-gate (CGVA) and common-source (CSVA). These circuits operate near the gain-bandwidth limit of the transistor, and when the main amplifier transistor size is held constant, performance metrics such as power consumption, gain, and bandwidth can converge across these architectures. This occasional overlap in the performance space introduces ambiguity in classification for a small subset of instances. For other circuit categories, no significant confusion is expected or observed. These results validate our hypothesis that performance vectors contain sufficient semantic structure for accurate, scalable topology classification. 5 Stage 2: Generalizable Forward Modeling for Performance Prediction Task Setup. The goal of Stage 2 is to learn a differentiable approximation of the circuit simulator that maps a topology T and parameter vector x to a performance prediction ŷ = fθ(T, x), where ŷ ∈ R16. Unlike black-box simulators, this learned forward model enables efficient performance estimation and supports gradient-based parameter inference in Stage 3. The model is trained to generalize across circuit families and can be reused on unseen topologies with minimal fine-tuning. 1DC power consumption (DCP), voltage gain (VGain), power gain (PGain), conversion gain (CGain), S11, S22, noise figure (NF), bandwidth (BW), oscillation frequency (OscF), tuning range (TR), output power (OutP), PSAT, drain efficiency (DE), power-added efficiency (PAE), phase noise (PN), voltage swing (VSwg). Figure 3: Topology selection results. (a) Performance vectors form well-separated clusters in t-SNE space, showing that circuit functionality is semantically predictive of topology. (b) Misclassifications primarily occur among voltage amplifier variants with overlapping gain-bandwidth tradeoffs. (c) Per-class test accuracy exceeds 93% across all 20 circuit topologies.2 Each datapoint consists of a graph-structured Cadence netlist annotated with resolved parameter values and the corresponding performance metrics.
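Returning briefly to Stage 1, the classifier described above (a 5-layer MLP with hidden size 256 and ReLU activations over a 16-dimensional, z-scored performance vector) is simple enough to write down directly. The sketch below is our PyTorch reading of that description, with one output logit per topology; the exact layer accounting and any regularization are assumptions, and training (cross-entropy, Adam, batch size 256) is omitted.

import torch.nn as nn

class TopologyClassifier(nn.Module):
    """Stage 1: map a z-scored 16-dim performance vector to logits over 20 topologies."""
    def __init__(self, in_dim=16, hidden=256, num_topologies=20, num_layers=5):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(num_layers - 1):              # hidden layers with ReLU
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, num_topologies))  # final layer produces class logits
        self.net = nn.Sequential(*layers)

    def forward(self, y_target):                     # y_target: (batch, 16)
        return self.net(y_target)                    # argmax over logits gives T*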
We frame the learning task as a supervised regression problem. Since not all performance metrics apply to every topology (e.g., oscillation frequency is undefined for amplifiers),
we train the model using a masked mean squared error loss: Lmasked =1P imidX i=1mi·(ˆyi−yi)2, (2) where mi= 1if the i-th metric is defined for the current sample, and zero otherwise. Model Architecture and Training. Each cir- cuit is represented as an undirected multi- edge graph with voltage nets as nodes and cir- cuit components as edges. All circuit parame- ters—both fixed and sweepable—are assigned to edges, along with categorical device types and one-hot encoded indicators. For each edge (u, v), these attributes are concatenated to form a unified feature vector xuv. The feature set is consistent within each component type but varies across types (e.g., NMOS vs. inductor), reflect- ing the structure defined in Section 3.2. Figure 4: In Stage 2, a custom edge-centric GNN maps an undirected multi-edge graph constructed from the circuit netlist to a performance vector. 2The 20 circuit topologies—listed in the same order as the numerical labels in Figure 3(a)—are: CGLNA (Common Gate), CLNA (Cascode), CSLNA (Common Source), DLNA (Differential), DBAMixer (Double- Balanced Active), DBPMixer (Double-Balanced Passive), SBAMixer (Single-Balanced Active), SBPMixer (Single-Balanced Passive), ClassBPA (Class-B), ClassEPA (Class-E), DohPA (Doherty), DPA (Differential), CGV A (Common Gate), CSV A (Common Source), CV A (Cascode), SFV A (Source Follower), IFVCO (Inductive- Feedback), CCVCO (Cross-Coupled), ColVCO (Colpitts), RVCO (Ring). 6 To account for component heterogeneity, we apply type-specific MLP encoders ϕ(t) encto each edge feature vector, producing initial embeddings e(0) uv=ϕ(t) enc(xuv), where tis the component type. These embeddings are updated via a 4-layer edge-centric message-passing GNN with shared weights. At each layer ℓ, for each node u, we first compute the node hidden state using the edge embeddings of all neighbors of the node u,N(u). Then, for each edge (u, v)in the circuit graph, we compute the edge embedding at the next layer ℓ+ 1is using the edge embedding e(ℓ) uvand the hidden node states forming the edge (u, v)at the current layer ℓas follows: h(ℓ) u=X w∈N(u)ϕMSG(e(ℓ) wu), e(ℓ+1) uv =ϕUPD(e(ℓ) uv, h(ℓ) u, h(ℓ) v), where ϕMSG, ϕUPDare the message and update parameters of message-passing GNN and h(ℓ) u, h(ℓ) vare the hidden states for the nodes forming the edge (u, v)respectively. After message passing [ 57], final edge embeddings e(L) uvare aggregated to form a graph-level representation zgraph=P (u,v)e(L) uv, which is decoded by a fully connected MLP (hidden size 256) to predict the 16-dimensional performance vector ˆy∈R16. An overview of this GNN-based forward prediction pipeline is shown in Figure 4. To stabilize training, physical parameters are rescaled by their expected units (e.g. resistance by 103), and performance targets are normalized to z-scores using training statistics. We train the model using the Adam optimizer (learning rate 10−3, batch size 256) and a ReduceLROnPlateau scheduler. Xavier uniform initialization is used for all layers, and early stopping is based on validation loss. We adopt the same splits as in Section 4 for consistency in evaluation. Evaluation. We evaluate the accuracy of the GNN forward model fθon a test set drawn from 19 of the 20 topologies. One topology—RVCO—is entirely excluded from training, validation, and test splits to assess generalization
to unseen architectures. Prediction quality is measured using standard re- gression metrics: coefficient of determination ( R2), root mean squared error (RMSE), and mean ab- solute error (MAE), computed independently for each of the 16 performance metrics. We also report themean relative error per metric , computed as the average across all test samples where each metric is defined. As summarized in Table 2, the model achieves high accuracy across all dimensions, with an average R2of 0.971. 0 5 10 15 20 25 30 35 Relative Error (%)0.0000.0250.0500.0750.1000.1250.1500.175DensityMean: 3.65% Median: 1.69% Mode: 1.38%Figure 5: Distribution of relative error (%) across the test set for the GNN forward model. Plot is trimmed at the 95th percentile. To evaluate end-to-end prediction accuracy at the sample level, we compute the mean relative error per instance , defined as the average relative error across all valid (non-masked) performance metrics for each test sample. Figure 5 shows the distribution of this quantity across the test set (trimmed at the 95th percentile to reduce the impact of outliers). The distribution is sharply concentrated, indicating that most predictions closely match their corresponding target vectors. Without percentile trimming, the overall mean relative error across the full test set is 9.14% . Table 2: Prediction accuracy of the forward GNN on all 16 circuit performance metrics. Metric DCP VGain PGain CGain S 11 S22 NF BW OscF TR OutP P SAT DE PAE PN VSwg Unit mW dB dB dB dB dB dB GHz GHz GHz dBm dBm % % dBc/Hz mV R² 1.0 1.0 0.99 1.0 0.92 1.0 0.99 0.98 0.97 0.83 0.97 1.0 1.0 1.0 0.89 1.0 RMSE 0.289 0.107 0.536 0.84 1.517 0.206 0.534 0.969 0.721 0.293 0.908 0.1 0.232 0.144 2.541 0.07 MAE 0.212 0.077 0.208 0.188 0.554 0.116 0.202 0.369 0.181 0.097 0.232 0.069 0.168 0.104 1.167 0.046 Rel. Err. 11.5% 2.7% 18.6% 7.8% 11.4% 1.8% 4.5% 5.6% 0.6% 6.5% 4.4% 4.5% 4.5% 11.7% 1.3% 1.48% 6 Stage 3: Layout-Aware Parameter Inference via Gradient Reasoning Task Setup. Given a target performance vector ytarget and a selected topology T∗, the goal of Stage 3 is to recover a parameter vector x∗that minimizes a total loss combining performance error and 7 Figure 6: In Stage 3, gradient reasoning iteratively updates parameters to minimize a loss combining performance error and layout cost, computed via a differentiable analytical model. layout-aware penalties, using the learned forward model fθfrom Stage 2. This formulation enables instance-wise inverse design without requiring circuit-level simulation. To initialize optimization, we perturb domain-specific scale factors (e.g., 10−12for capacitors) to sample a plausible starting point x0. Parameters are iteratively updated via gradient descent, guided by both functional and physical objectives. Topology-specific constants are held fixed, and parameter values are clipped to remain within valid domain bounds throughout the process. Loss Function. The total loss follows the structure defined in Eqn 1, jointly minimizing performance mismatch and layout cost: Ltotal=Lperf+λarea· Llayout·g(Lperf), (3) whereLperfis the masked mean squared error (see Eqn 2) between predicted and target performance vectors, and Llayout is a normalized area penalty derived from analytical layout equations. To
prioritize functionality, layout loss is softly gated by: g(Lperf) = 1−σ(γ(Lperf−τ)), which attenuates layout penalties when performance error exceeds a threshold τ, encouraging the model to first achieve functionality before optimizing for layout compactness. We set τ= 0.05,γ= 50 , and normalize layout area by 1mm2to stabilize gradients. The layout weight λarea= 0.02is chosen empirically to balance performance accuracy and physical realism without dominating the loss. This gated formulation supports manufacturable parameter recovery and reflects the broader paradigm of physics-informed learning [58]. Differentiable Layout Modeling. In mm-wave analog design, layout is not a downstream concern but a critical determinant of circuit performance—particularly for passive components. Substrate coupling, proximity effects, and DRC-imposed geometries directly affect key metrics such as reso- nance frequency, quality factor, and impedance matching. To incorporate these effects, we introduce a differentiable layout model that computes total physical area analytically from circuit parameters. This enables layout constraints to directly guide parameter optimization during inverse design. By minimizing the layout area in distributed mm-wave circuits [ 59], unwanted signal loss [ 60] is reduced, the self-resonance frequency of passives can increase [ 61], and phase and amplitude mismatches across signal paths [62] can be reduced. The layout model is deterministic and non-learned. It estimates area contributions from passive components—capacitors, inductors, and resistors—as these dominate total area and exhibit layout- sensitive behavior. Active devices (e.g., MOSFETs) are excluded since their geometries are fixed by the PDK and are negligible [ 63]. For a given parameter vector x, the total layout loss is computed as: Llayout(x) =X e∈E passiveAe(x), whereEpassive is the set of passive elements, and Ae(x)is the area of the created layout for the passive component based on analytical physics-based equations. The area of element eis estimated based on its 2D dimensions (e.g., A=W·Lfor resistors and capacitors). This area is normalized and used as a differentiable penalty in the optimization objective (see Eqn 3). Gradient Reasoning Procedure. Starting from the initialized parameter vector x, we iteratively update parameters via gradient reasoning. At each step, the frozen forward model fθpredicts the 8 (a) Designed DohPA schematic (b) Layout of designed DohPA Figure 7: Stage 3 results for a synthesized DohPA. The schematic (a) reflects optimized parameters to meet the target specification. The layout (b) is DRC-compliant and physically realizable. The final design achieves a mean relative error of 5.4% compared to the target performance. performance ˆy=fθ(T, x), and the total loss Ltotalis evaluated. Gradients are backpropagated with respect to x, and updates are applied using the Adam optimizer. Optimization proceeds for a fixed number of steps, with early stopping triggered if the loss fails to improve over a predefined window. To handle varying circuit difficulty and initialization quality, we employ an adaptive learning rate strategy. Each instance begins with a moderate learning rate ( 10−6), refined during optimization via aReduceLROnPlateau scheduler. If the solution fails to meet thresholds on performance error or layout area, optimization restarts with a more exploratory learning rate. 
This adjustment balances exploration and fine-tuning, enabling rapid convergence to physically valid solutions, typically within milliseconds to under one second
per instance. An overview is shown in Figure 6. Evaluation. We evaluate Stage 3 on 9,500 test instances (500 per topology) using our gradient-based optimization pipeline. A design is considered converged if it meets both: (i) a predicted mean relative error below 10%, and (ii) a layout area under a topology-specific bound—1 mm2for most circuits and 1.5 mm2for DLNA, DohPA, and ClassBPA. The 10% error threshold reflects the forward model’s ∼9%average prediction error (Section 5). A design is deemed successful if its final Cadence- simulated performance deviates from the target by less than 20%, confirming real-world viability. Our method achieves a success rate of 78.5% and a mean relative error of 17.7% across converged designs, with average inference time under 1 second on a MacBook CPU. Notably, success rate is coupled with the convergence threshold: tighter error bounds yield higher accuracy but require more iterations—critical for large-scale design tasks. To illustrate the effectiveness of our pipeline, Figure 7 shows a representative result for the DohPA topology: the synthesized schematic is shown on the left, and the corresponding layout is on the right. These results confirm that the recovered parameters are both functionally accurate and physically realizable. Together, they demonstrate that FALCON enables layout-aware inverse design within a single differentiable pipeline—a capability not supported by existing analog design frameworks. 7 Conclusion and Future Work We presented FALCON, a modular framework for end-to-end analog and RF circuit design that unifies topology selection, performance prediction, and layout-aware parameter optimization. Trained on over one million Cadence-simulated mm-wave circuits, FALCON combines a lightweight MLP, a generalizable GNN, and differentiable gradient reasoning to synthesize circuits from specification to layout-constrained parameters. FALCON achieves >99% topology selection accuracy, <10% predic- tion error, and efficient inverse design—all within sub-second inference. The GNN forward model generalizes to unseen topologies with minimal fine-tuning, supporting broad practical deployment. In future work, we aim to expand the topology library and support hierarchical macroblocks for scalable design beyond the cell level. We also plan to extend the layout-aware optimization with learned parasitic models and EM-informed constraints for more accurate post-layout estimation. Fi- nally, integrating reinforcement learning or diffusion-based models for generative topology synthesis represents a promising step toward general-purpose analog design automation. 9 Acknowledgments We thank Andrea Villasenor and Tanqin He for their assistance with circuit data generation. We also thank Mohammad Shahab Sepehri for his insightful discussions and thoughtful feedback during the development of this work. References [1]Vilem Kledrowetz, Roman Prokop, Lukas Fujcik, and Jiri Haze. A fully differential analog front-end for signal processing from emg sensor in 28 nm fdsoi technology. Sensors , 23(7), 2023. [2]Wei Hong, Zhi Hao Jiang, Chao Yu, Debin Hou, Haiming Wang, Chong Guo, Yun Hu, Le Kuai, Yingrui Yu, Zhengbo Jiang, Zhe Chen, Jixin Chen, Zhiqiang Yu, Jianfeng Zhai, Nianzu Zhang, Ling Tian, Fan Wu, Guangqi Yang, Zhang-Cheng Hao, and Jian Yi Zhou. The role of millimeter-wave technologies in 5g/6g wireless communications. IEEE Journal of Microwaves , 1(1):101–122, 2021. [3]Yingying Chi, Haifeng Zhang, Zhe Zheng, Rui Liu, Lei Qiao, and Wenpeng Cui. 
Analog front-end circuit design for wireless sensor
system-on-chip. In 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC) , volume 1, pages 38–42, 2020. [4]Xuyang Liu, Md. Hedayatullah Maktoomi, Mahdi Alesheikh, Payam Heydari, and Hamidreza Aghasi. A cmos 49–63-ghz phase-locked stepped-chirp fmcw radar transceiver. IEEE Journal of Solid-State Circuits , pages 1–15, 2025. [5]Med Nariman, Farid Shirinfar, Anna Papió Toda, Sudhakar Pamarti, Ahmadreza Rofougaran, and Franco De Flaviis. A compact 60-ghz wireless power transfer system. IEEE Transactions on Microwave Theory and Techniques , 64(8):2664–2677, 2016. [6] Phillip E Allen and Douglas R Holberg. CMOS analog circuit design . Elsevier, 2011. [7] Willy M. C. Sansen. analog design essentials . SpringerLink, 2011. [8]Shady A Abdelaal, Ahmed Hussein, and Hassan Mostafa. A bayesian optimization framework for analog circuits optimization. In 2020 15th International Conference on Computer Engineering and Systems (ICCES) , pages 1–4. IEEE, 2020. [9]Zehao Dong, Weidong Cao, Muhan Zhang, Dacheng Tao, Yixin Chen, and Xuan Zhang. CktGNN: Circuit graph neural network for electronic design automation. In The Eleventh International Conference on Learning Representations , 2023. [10] Hanrui Wang, Kuan Wang, Jiacheng Yang, Linxiao Shen, Nan Sun, Hae-Seung Lee, and Song Han. Gcn-rl circuit designer: Transferable transistor sizing with graph neural networks and reinforcement learning. In 2020 57th ACM/IEEE Design Automation Conference (DAC) , pages 1–6, 2020. [11] Dmitrii Krylov, Pooya Khajeh, Junhan Ouyang, Thomas Reeves, Tongkai Liu, Hiba Ajmal, Hamidreza Aghasi, and Roy Fox. Learning to design analog circuits to meet threshold specifications. In Proceedings of the 40th International Conference on Machine Learning , ICML’23. JMLR.org, 2023. [12] Hanrui Wang, Jiacheng Yang, Hae-Seung Lee, and Song Han. Learning to design circuits. arXiv preprint arXiv:1812.02734 , 2018. [13] Keertana Settaluri, Ameer Haj-Ali, Qijing Huang, Kourosh Hakhamaneshi, and Borivoje Nikolic. Autockt: deep reinforcement learning of analog circuit designs. In Proceedings of the 23rd Conference on Design, Automation and Test in Europe , DATE ’20, page 490–495, San Jose, CA, USA, 2020. EDA Consortium. [14] Yaguang Li, Yishuang Lin, Meghna Madhusudan, Arvind Sharma, Sachin Sapatnekar, Ramesh Harjani, and Jiang Hu. A circuit attention network-based actor-critic learning approach to robust analog transistor sizing. In 2021 ACM/IEEE 3rd Workshop on Machine Learning for CAD (MLCAD) , pages 1–6, 2021. [15] Wenlong Lyu, Pan Xue, Fan Yang, Changhao Yan, Zhiliang Hong, Xuan Zeng, and Dian Zhou. An efficient bayesian optimization approach for automated optimization of analog circuits. IEEE Transactions on Circuits and Systems I: Regular Papers , 65(6):1954–1967, 2017. [16] Kourosh Hakhamaneshi, Marcel Nassar, Mariano Phielipp, Pieter Abbeel, and Vladimir Stojanovic. Pretraining graph neural networks for few-shot analog circuit modeling and design. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , 42(7):2163–2173, 2022. 10 [17] Morteza Fayazi, Morteza Tavakoli Taba, Ehsan Afshari, and Ronald Dreslinski. Angel: Fully-automated analog circuit generator using a neural network assisted semi-supervised learning approach. IEEE Transac- tions on Circuits and Systems I: Regular Papers , 2023. [18] Jintao Li, Haochang Zhi, Ruiyu Lyu, Wangzhen Li, Zhaori Bi, Keren Zhu, Yanhan Zeng, Weiwei Shan, Changhao Yan, Fan Yang, Yun Li, and Xuan Zeng. 
AnalogGym: An open and practical testing suite for analog circuit synthesis. In International Conference
on Computer Aided Design , 2024. [19] Chen-Chia Chang, Yikang Shen, Shaoze Fan, Jing Li, Shun Zhang, Ningyuan Cao, Yiran Chen, and Xin Zhang. Lamagic: Language-model-based topology generation for analog integrated circuits. arXiv preprint arXiv:2407.18269 , 2024. [20] Yao Lai, Sungyoung Lee, Guojin Chen, Souradip Poddar, Mengkang Hu, David Z Pan, and Ping Luo. Analogcoder: Analog circuit design via training-free code generation. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 39, pages 379–387, 2025. [21] Weidong Cao, Mouhacine Benosman, Xuan Zhang, and Rui Ma. Domain knowledge-based automated analog circuit design with deep reinforcement learning. arXiv preprint arXiv:2202.13185 , 2022. [22] Ahmet Faruk Budak, Miguel Gandara, Wei Shi, David Z. Pan, Nan Sun, and Bo Liu. An efficient analog circuit sizing method based on machine learning assisted global optimization. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , 41(5):1209–1221, 2022. [23] Asal Mehradfar, Xuzhe Zhao, Yue Niu, Sara Babakniya, Mahdi Alesheikh, Hamidreza Aghasi, and Salman Avestimehr. AICircuit: A Multi-Level Dataset and Benchmark for AI-Driven Analog Integrated Circuit Design. Machine Learning and the Physical Sciences Workshop @ NeurIPS , 2024. [24] Asal Mehradfar, Xuzhe Zhao, Yue Niu, Sara Babakniya, Mahdi Alesheikh, Hamidreza Aghasi, and Salman Avestimehr. Supervised learning for analog and rf circuit design: Benchmarks and comparative insights. arXiv preprint arXiv:2501.11839 , 2025. [25] Mingjie Liu, Walker J. Turner, George F. Kokai, Brucek Khailany, David Z. Pan, and Haoxing Ren. Parasitic-aware analog circuit sizing with graph neural networks and bayesian optimization. In 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE) , pages 1372–1377, 2021. [26] Tonmoy Dhar, Kishor Kunal, Yaguang Li, Meghna Madhusudan, Jitesh Poojary, Arvind K Sharma, Wenbin Xu, Steven M Burns, Ramesh Harjani, Jiang Hu, et al. Align: A system for automating analog layout. IEEE Design & Test , 38(2):8–18, 2020. [27] Bingyang Liu, Haoyi Zhang, Xiaohan Gao, Zichen Kong, Xiyuan Tang, Yibo Lin, Runsheng Wang, and Ru Huang. Layoutcopilot: An llm-powered multi-agent collaborative framework for interactive analog layout design. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems , 2025. [28] Antonio J Lopez Martin. Cadence design environment. New Mexico State University, Tutorial paper , 35, 2002. [29] Sorin V oinigescu. High-frequency integrated circuits . Cambridge University Press, 2013. [30] Behzad Razavi. RF microelectronics , volume 2. Prentice hall New York, 2012. [31] Thomas H. Lee. The Design of CMOS Radio-Frequency Integrated Circuits . Cambridge University Press, 2nd edition, 2004. [32] John R. Long and Michael A. Copeland. The design of low-noise cmos rf amplifiers. IEEE Journal of Solid-State Circuits , 32(2):292–302, 1997. [33] Ali M. Niknejad. mm-Wave Silicon Technology: 60 GHz and Beyond . Springer, 2008. [34] Xiaohua Fan, Heng Zhang, and Edgar SÁnchez-Sinencio. A noise reduction and linearity improvement technique for a differential cascode lna. IEEE Journal of Solid-State Circuits , 43(3):588–599, 2008. [35] B. Henderson and E. Camargo. Microwave Mixer Technology and Applications . Microwave & RF. Artech House, 2013. [36] B. Gilbert. A precise four-quadrant multiplier with subnanosecond response. IEEE Journal of Solid-State Circuits , 3(4):365–373, 1968. [37] Krenar Komoni, Sameer Sonkusale, and Geoff Dawe. 
Fundamental performance limits
and scaling of a cmos passive double-balanced mixer. In 2008 Joint 6th International IEEE Northeast Workshop on Circuits and Systems and TAISA Conference , pages 297–300, 2008. 11 [38] S. Chehrazi, R. Bagheri, and A.A. Abidi. Noise in passive fet mixers: a simple physical model. In Proceedings of the IEEE 2004 Custom Integrated Circuits Conference (IEEE Cat. No.04CH37571) , pages 375–378, 2004. [39] Hua Wang, Peter M. Asbeck, and Christian Fager. Millimeter-wave power amplifier integrated circuits for high dynamic range signals. IEEE Journal of Microwaves , 1(1):299–316, 2021. [40] M.K. Kazimierczuk. RF Power Amplifiers . Wiley, 2014. [41] F.H. Raab, P. Asbeck, S. Cripps, P.B. Kenington, Z.B. Popovic, N. Pothecary, J.F. Sevic, and N.O. Sokal. Power amplifiers and transmitters for rf and microwave. IEEE Transactions on Microwave Theory and Techniques , 50(3):814–826, 2002. [42] Narek Rostomyan, Mustafa Özen, and Peter Asbeck. 28 ghz doherty power amplifier in cmos soi with 28 IEEE Microwave and Wireless Components Letters , 28(5):446–448, 2018. [43] Morteza Abbasi, Torgil Kjellberg, Anton de Graauw, Edwin van der Heijden, Raf Roovers, and Herbert Zirath. A broadband differential cascode power amplifier in 45 nm cmos for high-speed 60 ghz system-on- chip. In 2010 IEEE Radio Frequency Integrated Circuits Symposium , pages 533–536, 2010. [44] Behzad Razavi. Design of Analog CMOS Integrated Circuits . McGraw-Hill Education, 2016. [45] S. Karthikeyan, S. Mortezapour, A. Tammineedi, and E.K.F. Lee. Low-voltage analog circuit design based on biased inverting opamp configuration. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing , 47(3):176–184, 2000. [46] Tae Wook Kim. A common-gate amplifier with transconductance nonlinearity cancellation and its high- frequency analysis using the volterra series. IEEE Transactions on Microwave Theory and Techniques , 57 (6):1461–1469, 2009. [47] T. Lehmann and M. Cassia. 1-v power supply cmos cascode amplifier. IEEE Journal of Solid-State Circuits , 36(7):1082–1086, 2001. [48] H.-J. Song and C.-K. Kim. An mos four-quadrant analog multiplier using simple two-input squaring circuits with source followers. IEEE Journal of Solid-State Circuits , 25(3):841–848, 1990. [49] N. R. Sivaraaj and K. K. Abdul Majeed. A comparative study of ring vco and lc-vco: Design, performance analysis, and future trends. IEEE Access , 11:127987–128017, 2023. [50] Cao Wan, Taotao Xu, Xiang Yi, and Quan Xue. A current-reused vco with inductive-transformer feedback technique. IEEE Transactions on Microwave Theory and Techniques , 70(5):2680–2689, 2022. [51] Tuan Thanh Ta, Suguru Kameda, Tadashi Takagi, and Kazuo Tsubouchi. A 5ghz band low noise and wide tuning range si-cmos vco. In 2009 IEEE Radio Frequency Integrated Circuits Symposium , pages 571–574, 2009. [52] R. Aparicio and A. Hajimiri. A noise-shifting differential colpitts vco. IEEE Journal of Solid-State Circuits , 37(12):1728–1736, 2002. [53] Shruti Suman, K. G. Sharma, and P. K. Ghosh. Analysis and design of current starved ring vco. In 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT) , pages 3222–3227, 2016. [54] Sungyoung Lee, Ziyi Wang, Seunggeun Kim, Taekyun Lee, and David Z Pan. Self-supervised graph contrastive pretraining for device-level integrated circuits. arXiv preprint arXiv:2502.08949 , 2025. [55] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv
preprint arXiv:1412.6980 , 2014. [56] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research , 9(11), 2008. [57] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning , pages 1263–1272. PMLR, 2017. [58] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561 , 2017. 12 [59] A. Dounavis, R. Achar, and M.S. Nakhla. Efficient passive circuit models for distributed networks with frequency-dependent parameters. IEEE Transactions on Advanced Packaging , 23(3):382–392, 2000. [60] Shen Wang, M.A. de Rooij, W.G. Odendaal, J.D. van Wyk, and D. Boroyevich. Reduction of high- frequency conduction losses using a planar litz structure. IEEE Transactions on Power Electronics , 20(2): 261–267, 2005. [61] Tejinder Singh and Raafat R Mansour. Miniaturized 6-bit phase-change capacitor bank with improved self-resonance frequency and q. In2022 52nd European Microwave Conference (EuMC) , pages 572–575. IEEE, 2022. [62] Chenxi Zhao, Xing Zeng, Lin Zhang, Huihua Liu, Yiming Yu, Yunqiu Wu, and Kai Kang. A 37–40-ghz low-phase-imbalance cmos attenuator with tail-capacitor compensation technique. IEEE Transactions on Circuits and Systems I: Regular Papers , 67(10):3400–3409, 2020. [63] Michele Spasaro and Domenico Zito. Millimeter-wave integrated silicon devices: Active versus pas- sive—the eternal struggle between good and evil. In 2019 International Semiconductor Conference (CAS) , pages 11–20. IEEE, 2019. 13 A Qualitative Comparison with Prior Work To contextualize FALCON within the broader landscape of analog circuit design automation, we provide a qualitative comparison against representative prior works in Table 3. This comparison spans key capabilities including topology selection, parameter inference, performance prediction, layout awareness, and simulator fidelity. We additionally assess reproducibility via dataset and code availability, and introduce a new axis— RF/mm-wave support—to highlight methods evaluated on high-frequency circuit blocks such as LNAs, mixers, and VCOs. Compared to existing approaches, FALCON is the only framework that unifies all these dimensions while maintaining foundry-grade fidelity and open-source accessibility. Definitions for each comparison axis are provided in Table 4. Table 3: Qualitative comparison of FALCON with prior works across key capabilities in analog circuit design automation. MethodTopology SelectionParameter InferencePerformance PredictionLayout AwarenessFoundry GradeRF/ mm-WavePublic DatasetPublic Code CktGNN [9] ✔ ✔ ✘ ✘ ✘ (SPICE) ✘ ✔ ✔ LaMAGIC [19] ✔ ✘ ✘ ✘ ✘ (SPICE) ✘ ✘ ✘ AnalogCoder [20] ✔ ✘ ✘ ✘ ✘ (SPICE) ✘ ✔ ✔ GCN-RL [10] ✘ ✔ ✘ ✘ ✔ (SPICE/Cadence) ✘ ✘ ✔ Cao et al. [21] ✘ ✔ ✘ ✘ ✔ (SPICE/Cadence) ✔ ✘ ✘ BO-SPGP [15] ✘ ✔ ✔ ✘ ✔ (Cadence) ✔ ✘ ✘ ESSAB [22] ✘ ✔ ✔ ✘ ✔ (Cadence) ✘ ✘ ✘ AICircuit [23, 24] ✘ ✔ ✘ ✘ ✔ (Cadence) ✔ ✔ ✔ Krylov et al. [11] ✘ ✔ ✘ ✘ ✘ (SPICE) ✘ ✔ ✔ Deep-GEN [16] ✘ ✘ ✔ ✘ ✘ (SPICE) ✘ ✔ ✔ Liu et al. [25] ✘ ✘ ✘ ✔ ✘ (SPICE + Parasitic Model) ✔ ✘ ✘ ALIGN [26] ✘ ✘ ✘ ✔
✔ (Cadence) ✔ ✔ ✔ LayoutCopilot [27] ✘ ✘ ✘ ✔ ✔ (Cadence) ✘ ✘ ✘ AnalogGym [18] ✘ ✔ ✘ ✘ ✘ (SPICE) ✘ ✔ ✔ AutoCkt [13] ✘ ✔ ✘ ✘ ✔ (Cadence) ✔ ✘ ✘ (incomplete) L2DC [12] ✘ ✔ ✘ ✘ ✔ (Cadence) ✘ ✘ ✘ CAN-RL [14] ✘ ✔ ✘ ✔ ✔ (Cadence) ✘ ✘ ✘ AnGeL. [17] ✔ ✔ ✔ ✘ ✘ (SPICE) ✘ ✘ ✘ FALCON (This work) ✔ ✔ ✔ ✔ ✔ (Cadence) ✔ ✔ ✔ Table 4: Definitions of each comparison axis in Table 3. Column Definition Topology Selection Does the method automatically select or predict circuit topology given a target specification? Parameter Inference Does the method infer element-level parameters (e.g., transistor sizes, component values) as part of design generation? Performance Prediction Can the method predict circuit performance metrics (e.g., gain, bandwidth, noise) from topology and parameters? Layout Awareness Is layout considered during optimization or training (e.g., via area constraints, parasitics, or layout-informed loss)? Dataset Fidelity Does the dataset reflect realistic circuit behavior (e.g., SPICE/Cadence simulations, PDK models)? RF/mm-Wave Is the method evaluated on at least one RF or mm-wave circuit type that reflects high-frequency design challenges? Public Dataset Is the dataset used in the work publicly released for reproducibility and benchmarking? Public Code Is the implementation code publicly available and documented for reproducibility? B Dataset Details and Performance Metric Definitions During dataset generation, each simulated circuit instance is annotated with a set of performance metrics that capture its functional characteristics. All simulations are performed at a fixed frequency of 30 GHz, ensuring consistency across circuit types and relevance to mm-wave design. A total of 16 metrics are defined across all circuits—spanning gain, efficiency, impedance matching, noise, and frequency-domain behavior—though the specific metrics used vary by topology. For example, phase noise is only applicable to oscillators. An overview of all performance metrics is provided in Table 5. B.1 Low-Noise Amplifiers (LNAs) Low-noise amplifiers (LNAs) are critical components in receiver front-ends, responsible for amplify- ing weak antenna signals while introducing minimal additional noise. Their performance directly influences downstream blocks such as mixers and analog-to-digital converters (ADCs), ultimately de- termining system-level fidelity [ 31]. To capture the architectural diversity of practical radio-frequency (RF) designs, we include four widely used LNA topologies in this study—common-source LNA (CSLNA), common-gate LNA (CGLNA), cascode LNA (CLNA), and differential LNA (DLNA)—as shown in Figure 8. 14 Table 5: Overview of 16 performance metrics used during dataset generation. Performance Name Description DC Power Consumption (DCP) Total power drawn from the DC supply indicating energy consumption of the circuit Voltage Gain (VGain) Ratio of output voltage amplitude to input voltage amplitude Power Gain (PGain) Ratio of output power to input power Conversion Gain (CGain) Ratio of output power at the desired frequency to input power at the original frequency S11 Input reflection coefficient indicating impedance matching at the input terminal S22 Output reflection coefficient indicating impedance matching at the output terminal Noise Figure (NF) Ratio of input signal-to-noise ratio to output signal-to-noise ratio Bandwidth (BW) Frequency span over which the circuit maintains
specified performance characteristics Oscillation Frequency (OscF) Steady-state frequency at which the oscillator generates a periodic signal Tuning Range (TR) Range of achievable oscillation frequencies through variation of control voltages Output Power (OutP) Power delivered to the load PSAT Maximum output power level beyond which gain compression begins to occur Drain Efficiency (DE) Ratio of RF output power to DC power consumption. Power-Added Efficiency (PAE) Ratio of the difference between output power and input power to DC power consumption Phase Noise (PN) Measure of oscillator stability represented in the frequency domain at a specified offset Voltage Swing (VSwg) Maximum peak voltage level achievable at the output node The CSLNA is valued for its simplicity and favorable gain–noise trade-off, especially when paired with inductive source degeneration [ 30]. The CGLNA, often used in ultra-wideband systems, enables broadband input matching but typically suffers from a higher noise figure [ 32]. The CLNA improves gain–bandwidth product and reverse isolation, making it ideal for high-frequency, high-linearity applications [ 33]. The DLNA exploits circuit symmetry to enhance linearity and reject common-mode noise, and is commonly found in high-performance RF front-end designs [ 34]. The design parameters and performance metrics associated with these topologies are summarized in Table 6. (a) CSLNA (b) CGLNA (c) CLNA (d) DLNA Figure 8: Schematic diagrams of the four LNA topologies. Table 6: LNA topologies with parameter sweep ranges, sample sizes, and performance metrics. Dataset Type Topology (Code) # of Samples Parameter Sweep Range Performance Metrics (Unit) LNACGLNA (0) 52kC1 [100–600] fF DCP (W) PGain (dB) S11(dB) NF (dB) BW (Hz)C2 [50–300] fF Cb [250–750] fF Ld [80–580] pH Ls [0.5–5.5] nH WN [12–23] µm CLNA (1) 62kC1, C2 [50–250] fF Ld [140–300] pH Lg [0.4–2] nH Ls [50–250] pH WN1 [3–5] µm WN2 [7–9] µm CSLNA (2) 39kC [100–300] fF Lg [4–6] nH Ls [100–200] pH WN [2.5–4] µm Vgs [0.5–0.9] V DLNA (3) 92kC1 [100–190] fF C2 [130–220] fF Ld [100–250] pH Lg [600–900] pH Ls [50–80] pH WN1 [4–9.4] µm WN2 [5–14] µm 15 B.2 Mixers Mixers are fundamental nonlinear components in RF systems, responsible for frequency translation by combining two input signals to produce outputs at the sum and difference of their frequencies. This functionality is essential for transferring signals across frequency domains and is widely used in both transmission and reception paths [ 35]. To capture diverse mixer architectures, we implement four representative topologies in this work—double-balanced active mixer (DBAMixer), double-balanced passive mixer (DBPMixer), single-balanced active mixer (SBAMixer), and single-balanced passive mixer (SBPMixer)—as shown in Figure 9. The DBAMixer integrates amplification and differential switching to achieve conversion gain and high port-to-port isolation. Despite its elevated power consumption and design complexity, it is well suited for systems requiring robust performance over varying conditions [ 36]. The DBPMixer features a fully differential structure that suppresses signal leakage and improves isolation, at the cost of signal loss and a strong local oscillator drive requirement [ 37]. The SBAMixer includes an amplification stage preceding the switching core to enhance signal strength and reduce noise, offering a balanced performance trade-off with increased power consumption
and limited spurious rejection [ 30]. The SBPMixer employs a minimalist switching structure to perform frequency translation without active gain, enabling low power operation in applications with relaxed performance demands [ 38]. The parameters and performance metrics for these mixer topologies are listed in Table 7. (a) DPAMixer (b) DBPMixer (c) SBAMixer (d) SBPMixer Figure 9: Schematic diagrams of the four Mixer topologies. Table 7: Mixer topologies with parameter sweep ranges, sample sizes, and performance metrics. Dataset Type Topology (Code) # of Samples Parameter Sweep Range Performance Metrics (Unit) MixerDBAMixer (4) 42kC [1–10] pF DCP (W) CGain (dB) NF (dB) VSwg (V)R [1–10] k Ω WN1 [10–30] µm WN2 [5–25] µm DBPMixer (5) 42kC [100–500] fF R [100–600] Ω WN [10–30] µm SBAMixer (6) 52kC [1–15] pF R [0.7–2.1] k Ω WN1 [10–30] µm WN2 [10–20] µm Itail [3–10] mA SBPMixer (7) 44kC [1–30] pF R [1–30] k Ω WN [5–29.5] µm B.3 Power Amplifiers (PAs) Power amplifiers (PAs) are the most power-intensive components in radio-frequency (RF) systems and serve as the final interface between transceiver electronics and the antenna. Given their widespread use and the stringent demands of modern communication standards, PA design requires careful trade-offs across key performance metrics [ 39]. Based on the transistor operating mode, PAs are typically grouped into several canonical classes [ 40]. In this work, we implement four representative topologies—Class-B PA (ClassBPA), Class-E PA (ClassEPA), Doherty PA (DohPA), and differential PA (DPA)—as shown in Figure 10. 16 The ClassBPA employs complementary transistors to deliver high gain with moderate efficiency, making it suitable for linear amplification scenarios [ 41]. The ClassEPA uses a single transistor configured as a switch, paired with a matching network. By minimizing the overlap between drain voltage and current, this topology enables high-efficiency operation and improved robustness to component variation [ 30]. The DohPA combines main and peaking amplifiers using symmetric two-stack transistors, maintaining consistent gain and efficiency under varying power levels [ 42]. The DPA features a two-stage cascode structure designed to maximize gain and linearity, offering a favorable trade-off between output power and power consumption [ 43]. For this topology, we replace the transformer with a T-equivalent network to simplify modeling and training of the graph neural network. Parameter sweeps and performance metrics for these PAs are listed in Table 8. (a) ClassBPA (b) ClassEPA (c) DohPA (d) DPA Figure 10: Schematic diagrams of the four PA topologies. Table 8: PA topologies with parameter sweep ranges, sample sizes, and performance metrics. Dataset Type Topology (Code) # of Samples Parameter Sweep Range Performance Metrics (Unit) PAClassBPA (8) 35kC [55–205] fF DCP (W) PGain (dB) S11(dB) S22(dB) PSAT(dBm) DE (%) PAE (%)L1 [1–1.4] nH L2 [1–8.5] pH R [1.5–4] k Ω WN [10–20] µm WP [3–8] µm ClassEPA (9) 46kC1 [100–200] fF C2 [500–700] fF L1 [100–300] pH L2 [100–150] pH WN [15–30] µm DohPA (10) 120kC1 [2–3] pF C2 [200–300] fF C3, C5 [100–200] fF C4 [300–400] fF L1, L5 [100–200] pH L2 [350–450] pH L3 [500–600] pH L4 [150–250] pH L6 [300–400] pH WN1, W N2 [6–13] µm DPA (11)
80kLip [100–500] pH Lis [300–700] pH Lop [0.8–1.2] nH Los [400–800] pH Lm [50–250] pH WN1 [6–31] µm WN2 [10–35] µm 17 B.4 Voltage Amplifiers (V As) V oltage amplifiers (V As) are fundamental components in analog circuit design, responsible for increasing signal amplitude while preserving waveform integrity. Effective V A design requires balancing key performance metrics tailored to both RF and baseband operating conditions [ 44]. In this work, we implement four widely used V A topologies—common-source V A (CSV A), common- gate V A (CGV A), cascode V A (CV A), and source follower V A (SFV A)—as shown in Figure 11. The CSV A remains the most widely adopted configuration due to its structural simplicity and high voltage gain. It is frequently used as the first gain stage in various analog systems [ 45]. The CGV A is suitable for applications requiring low input impedance and wide bandwidth, such as impedance transformation or broadband input matching [ 46]. The CV A, which cascades a common-source stage with a common-gate transistor, improves the gain–bandwidth product and enhances stability, making it ideal for applications demanding wide dynamic range and robust gain control [ 47]. The SFV A, also known as a common-drain amplifier, provides near-unity voltage gain and low output impedance, making it well suited for interstage buffering, load driving, and impedance bridging [ 48]. Parameter ranges and performance specifications for these V A topologies are listed in Table 9. (a) CSV A (b) CGV A (c) CV A (d) SFV A Figure 11: Schematic diagrams of the four V A topologies. Table 9: V A topologies with parameter sweep ranges, sample sizes, and performance metrics. Dataset Type Topology (Code) # of Samples Parameter Sweep Range Performance Metrics (Unit) V ACGV A (12) 33kC [0.1–1.5] pF DCP (W) VGain (dB) BW (Hz)R [0.1–1.5] k Ω WN1 [5–30] µm WN2 [5–10] µm CSV A (13) 21kR [0.7–1.5] k Ω WN [3–15] µm VDD [1–1.8] V Vgate [0.6–0.9] V CV A (14) 22kR [1–3] k Ω WN1, W N2 [1–10] µm WN3 [10–15] µm SFV A (15) 28kWN1 [40–60] µm WN2 [2–8] µm VDD [1.1–1.8] V Vgate [0.6–1.2] V Vb [0.5–0.9] V B.5 Voltage-Controlled Oscillators (VCOs) V oltage-controlled oscillators (VCOs) are essential building blocks in analog and RF systems, responsible for generating periodic waveforms with frequencies modulated by a control voltage. These circuits rely on amplification, feedback, and resonance to sustain stable oscillations. Owing to their wide tuning range, low power consumption, and ease of integration, VCOs are broadly used in systems such as phase-locked loops (PLLs), frequency synthesizers, and clock recovery circuits [ 49]. In this work, we implement four representative VCO topologies—inductive-feedback VCO (IFVCO), cross-coupled VCO (CCVCO), Colpitts VCO (ColVCO), and ring VCO (RVCO)—as shown in Figure 12. 18 The IFVCO employs an NMOS differential pair with an inductor-based feedback path to sustain oscillations. This topology provides favorable noise performance and compact layout, making it well suited for low-voltage, low-power designs [ 50]. The CCVCO achieves negative resistance through cross-coupling, enabling low phase noise and high integration density, and is widely
adopted in frequency synthesizers and PLLs [ 51]. The ColVCO uses an LC tank and capacitive feedback to achieve high frequency stability and low phase noise, making it ideal for precision RF communication and instrumentation [ 52]. The RVCO consists of cascaded delay stages forming a feedback loop, offering low power consumption, wide tuning range, and minimal area footprint, though at the cost of higher phase noise. It is commonly used in on-chip clock generation and low-power sensor applications [ 53]. Design parameters and performance metrics for these VCO topologies are presented in Table 10. (a) IFVCO (b) CCVCO (c) ColVCO (d) RVCO Figure 12: Schematic diagrams of the four VCO topologies. Table 10: VCO topologies with parameter sweep ranges, sample sizes, and performance metrics. Dataset Type Topology (Code) # of Samples Parameter Sweep Range Performance Metrics (Unit) VCOIFVCO (16) 43kC1 [700–900] fF DCP (W) OscF (Hz) TR (Hz) OutP (dBm) PN (dBc/Hz)C2 [50–250] fF L1 [400–600] pH L2 [500–700] pH WN, W var [5–9] µm CCVCO (17) 54kL [200–400] pH WN [10–35] µm Wvar [5–30] µm ColVCO (18) 90kC [80–140] fF L [250–350] pH WN [30–50] µm Wvar [5–15] µm Vb [0.7–1.2] V Itail [5–15] mA RVCO (19) 46kC [300–700] fF L1 [300–500] pH L2 [50–250] pH WN [20–40] µm Wvar [5–25] µm C Graph-Based Circuit Representation To enable GNN-based modeling of analog circuits, we represent each netlist as a directed multigraph where nodes correspond to electrical nets and edges encode circuit components such as transistors, capacitors, inductors, and voltage sources. Each edge is labeled with its component type and terminal role (e.g., gate, source, drain), and component-specific attributes are stored as edge features. For transistors, labels such as GS, DS, and DG denote source-to-gate, drain-to-source, and drain-to-gate connections, respectively. Figure 13 illustrates two representative graph structures extracted from our dataset: an IFVCO and a ClassBPA. The visual encoding highlights the diversity of components and connectivity patterns across topologies. Edges corresponding to the same component type share a common color for visual consistency and semantic clarity. These structured graphs serve as the primary input to our GNN pipeline for performance prediction and inverse design. 19 V2V0 L5C2L4 C1 L1L0 N3_DG N3_GSN3_DSN2_DSN2_DG N2_GS N0_DG L3N0_DSN1_DG L2 N1_DSN1_GS C0 N0_GS GNDVDDVcont Vout-Vout+net9 net7 net15net8Components nmos capacitor vsource inductor(a) IFVCO N0_DGR0N0_GSR4C0 N0_DSV0P0_DSP0_GS L2PORT0 P0_DG R3 C6PORT1L0 R5C1 net7 VDDnet2GNDnet1 net4 net3 net5VinComponents nmos pmos resistor capacitor vsource port inductor (b) ClassBPA Figure 13: Graph representations of two analog circuit topologies from our dataset: (a) IFVCO and (b) ClassBPA. Nodes represent electrical nets, and colored edges denote circuit components such as transistors, capacitors, inductors, and sources. Each component type is visually distinguished by color and labeled with its name and terminal role (e.g., N2_GS, V0). These graphs serve as input to our GNN-based performance modeling and inverse design pipeline. D Generalizing to Unseen Topologies via Fine-Tuning To assess the generalization ability of our pretrained GNN, we evaluate it on the held-out RVCO topology, which was entirely excluded from the Stage 2 training, validation, and test splits (see Section 5). Notably, the RVCO training partition used
here matches that of the Stage 1 experiments (Section 4), enabling consistent cross-stage evaluation. We fine-tune the GNN by freezing all encoder and message-passing layers and updating only the final output head ( output_mlp ). Fine-tuning is performed on the RVCO training set, which contains approximately 30,000 instances, and completes in under 30 minutes on a MacBook CPU. Even in the zero-shot setting—where the model has never seen RVCO topologies—the pretrained GNN achieves a nontrivial mean relative error of 33.1%, highlighting its strong cross-topology generalization. Fine-tuning reduces this error to just 0.8% , demonstrat- ing that the structural and parametric priors learned during pretraining are highly transfer- able. Table 11 reports detailed performance across five key metrics, confirming that the pre- trained GNN can be rapidly adapted to novel circuit families with minimal supervision.Table 11: Fine-tuning results on the held-out RVCO topology. Only the output head is updated using RVCO samples. Metric DCP OscF TR OutP PN Unit W GHz GHz dBm dBc/Hz R² 1.0 1.0 1.0 0.97 0.99 RMSE 0.725 0.244 0.022 0.098 0.792 MAE 0.576 0.19 0.018 0.078 0.477 Rel. Err. 0.85% 0.62% 1.4% 0.69% 0.56% E Layout Design and DRC Compliance E.1 Design Rule Enforcement in 45 nm CMOS We implemented FALCON using a 45 nm CMOS technology node, applying rigorous Design Rule Checking (DRC) at both the cellandfull-chip layout levels. At the cell level, our parameterized layout generators enforced foundry-specific constraints, including minimum feature width and length, contact and via spacing, and metal enclosure rules. At the circuit level, we incorporated physical verification to mitigate interconnect coupling, IR drop, and layout-dependent parasitic mismatches—factors that are especially critical in high-frequency and precision analog design. 20 DRC plays a vital role in ensuring that layouts comply with process design rules defined by the semiconductor foundry. Adhering to these rules ensures not only physical manufacturability but also electrical reliability . Violations may lead to fabrication failures, including yield degradation, electrical shorts or opens, electromigration-induced issues, and parasitic mismatches. Moreover, DRC compliance is essential for compatibility with downstream fabrication steps such as photomask generation, optical lithography, and chemical-mechanical planarization (CMP), safeguarding the yield and fidelity of the final IC. Circuit-Level Layout Guidelines. We enforced several topology-aware layout constraints during full-circuit integration to preserve signal integrity and robustness: •Inductor-to-inductor spacing: ≥35.0µm to mitigate mutual inductive coupling and magnetic interference. •Guardring placement: Sensitive analog blocks are enclosed by N-well or deep N-well guardrings with spacing ≥5.0µm to suppress substrate noise coupling. •Differential pair symmetry: Differential signal paths are layout-matched to ensure ∆L < 0.5µm, minimizing mismatch and preserving phase balance. DRC Constraints and Layer Definitions. Table 12 summarizes the DRC constraints applied to key analog components across relevant process layers. Table 13 provides the abbreviations used for metal, contact, and via layers in the 45 nm CMOS process. Table 12: Design rule constraints for key analog components in 45 nm CMOS. Component Layer Physical Constraint Symbol Value Unit MIM Capacitor (QT, LD, VV , OB)QT/LD Minimum Cap Width WMIN 6.05 µm QT/LD Maximum Cap Width WMAX 150.0 µm QT/LD Cap Length L 6.0 µm
VV VV Square Size VV_SIZE 4.0 µm VV VV Spacing VV_SPACE 2.0 µm VV VV to Edge Spacing VV_EDGE_MIN 1.0 µm Resistor (RX, CA, M1)RX Minimum Width WMIN 0.462 µm RX Maximum Width WMAX 5.0 µm RX Minimum Length LMIN 0.4 µm RX Maximum Length LMAX 5.0 µm CA Contact Size CA_SIZE 0.06 µm CA Contact Spacing CA_SPACE 0.10 µm CA CA to Edge Spacing CA_EDGE 0.11 µm Inductor (M3)M3 Minimum Width M3_W_MIN 2.0 µm M3 Maximum Width M3_W_MAX 20.0 µm M3 Minimum Spacing M3_S_MIN 2.0 µm Grid All Layers Minimum Grid Min_Grid 0.005 µm Table 13: Process layer abbreviations in the 45 nm CMOS design flow. Layer Name Description RX Resistor implant or diffusion layer used to define integrated resistor geometries. CA Contact layer forming vias between diffusion/poly and the first metal layer (M1). M1 First metal layer, typically used for local interconnects and resistor terminals. M3 Third metal layer, used for wider routing tracks and planar inductor layouts. QT Top metal plate in MIM capacitor structures, providing the upper electrode. LD Lower metal plate in MIM capacitor structures, acting as the bottom electrode. VV Via layer connecting different metal layers, especially in capacitor and dense routing regions. OB Opening/blocking layer used to define restricted zones, often to exclude metal or for CMP mask clarity. E.2 MIM Capacitor Capacitance Model The total capacitance CNof a metal-insulator-metal (MIM) capacitor is modeled as: CN=Ca·L·W+Cp·2·(L+W)[fF] 21 (a) MIM capacitor layout (b) Resistor layout (c) Inductor layout Figure 14: Layout views of passive components. (a) MIM capacitor with metal-insulator-metal stack. (b) Resistor layout with matching geometry. (c) Spiral inductor with octagonal turns for optimized area and Q-factor. where LandWare the layout length and width in µm,Cais the area capacitance density, and Cpis the fringing field contribution per unit length. This model includes both area and perimeter contributions to more accurately reflect layout-dependent capacitance in IC design (see Figure 14(a)). 1. Area Capacitance Term: Ca·L·W Physical Concept: This term represents the primary (parallel-plate) capacitance formed between the overlapping top and bottom metal layers. It arises from the uniform electric field across the dielectric. Layer Physics Explanation: •L·Wcorresponds to the overlap area of the plates. •Ca= 0.335fF/µm2is the area capacitance density, derived from: –Dielectric permittivity εof the insulating material. –Dielectric thickness d, with C∝ε/d. 2. Perimeter (Fringing) Capacitance Term: Cp·2·(L+W) Physical Concept: This term models fringing fields at the plate edges, contributing additional capacitance—particularly relevant in small geometries. Layer Physics Explanation: •2·(L+W)is the physical perimeter of the capacitor. •Cp= 0.11fF/µm accounts for the fringing field contribution per unit length. Summary: This composite model enables accurate estimation of MIM capacitance by capturing both parallel-plate and fringing effects. The constants CaandCpare typically calibrated using process-specific measurements or electromagnetic simulations. For a fixed capacitor length L= 20 µmand width W∈[6.05,150.0]µm, the layout-aware capaci- tance is approximated by: C≈6.92W+ 4.4[fF] (4) The corresponding bounding area is estimated from the component’s geometric envelope: Bounding_Area = 22W+ 44 [µm2] (5) E.3 N+Silicided Polysilicon Resistor Model The resistance of a layout-defined resistor implemented using the ndslires layer is modeled as: R=Rs·L W+ ∆W+ 2Rend+δ[Ω]
Physical Concept: This structure uses heavily doped N+polysilicon overlaid with a silicide layer to reduce resistance. Current flows laterally through the poly-silicide film (see Figure 14(b)), and resistance is shaped by the aspect ratio of the layout as well as process-dependent corrections. Layer Physics Explanation: 22 •Rs= 17.6 Ω/□(ohm per square) is the sheet resistance of the silicided poly layer. •W= 5.0µm is the drawn width; ∆W= 0.048µm accounts for process-induced width bias. •Lis the drawn resistor length. •Rend= 1 Ω models terminal resistance due to contact diffusion and current crowding. •δ= 0.917 Ω accounts for residual layout-dependent parasitics. Summary: The empirical layout relation used in parameterized generation is: R≈3.5007·L+ 2.917 [Ω] (6) This model is valid for L∈[0.4,5.0]µm with fixed width W= 5.0µm. The estimated layout area based on bounding box dimensions is: Bounding_Area = 5.2L+ 8.362 [µm2] (7) E.4 Octagon Spiral Inductor Model Physical Concept: Accurate modeling and layout optimization of planar spiral inductors are critical in analog circuit design. Inductor performance is highly sensitive to parasitic elements, achievable quality factor ( Q), and layout constraints imposed by process design rules. To support accurate performance prediction and inform layout choices, we adopt a modified power-law model that expresses inductance as a function of key geometric parameters. The model is validated against empirical measurements and shows strong agreement with classical analytical formulations. Numerous classical formulations relate inductance to geometric factors such as the number of turns, average diameter, trace width, and inter-turn spacing. Among these, the compact closed-form expressions in RF Microelectronics textbook [30] are widely adopted for their balance of simplicity and accuracy. Building on this foundation, we adopt a reparameterized monomial model that better fits our empirical measurement data: L= 2.454×10−4·D−1.21 out·W−0.163·D2.836 avg·S−0.049[nH] Layer Physics Explanation: •Dout= 2(R+W 2)is the outer diameter, •Din= 2(R−W 2)is the inner diameter, •Davg= (Dout+Din)/2 = 2 Ris the aver- age diameter, •Ris the radius in µ m, •Wis the trace width in µ m, •Sis the spacing in µ m.Table 14: Measured inductance for one-turn in- ductors with fixed W= 10 µmandS= 0.0µm R (µm) 30 40 50 60 L (nH)0.123 0.170 0.220 0.276 This expression is calibrated using measured data from a series of one-turn inductors fabricated with varying radius ( R), while keeping the trace width fixed at W= 10 µmand spacing at S= 0.0µm. Table 14 summarizes the measured inductance values used for model fitting. Summary: With WandSfixed, inductance simplifies to: L≈2.337×10−3·R1.164[nH] (8) The bounding area is estimated by: Bounding_Area = 4R2+ 108 R+ 440 [µm2] (9) The performance of on-chip inductors is fundamentally influenced by layout-dependent factors such as trace width, metal thickness, and inter-turn spacing. Increasing the trace width ( Wind) reduces 23 series resistance by enlarging the conductor’s cross-sectional area, thereby improving the quality factor, Q=ωL/R series. However, wider traces also increase parasitic capacitance to adjacent turns and the substrate, which lowers the self-resonance frequency. Metal thickness ( Hind) also plays a crucial role in minimizing ohmic losses. At high frequencies, current is confined near the conductor surface due to the skin effect. For copper at 25
GHz , the skin depth δis approximately 0.41µm; thus, using a metal layer thicker than 4δ(i.e.,1.6µm) ensures efficient current flow. However, increasing thickness beyond this threshold yields diminishing returns inQdue to saturation in current penetration. Turn-to-turn spacing ( S) affects both inductance and quality factor ( Q). Tighter spacing enhances magnetic coupling, thereby increasing inductance density. However, it also intensifies capacitive coupling and dielectric losses—particularly in modern CMOS processes with high- kinter-metal dielectrics—which can degrade Q. Conversely, excessive spacing reduces inductance without providing a proportionate benefit in loss reduction. As a result, one-turn spiral inductors are commonly favored in RF design due to their low series resistance, minimized parasitics, and improved modeling predictability. These insights guided our design choices for layout-aware inductor implementation. To balance the competing demands of Qoptimization, parasitic control, and DRC compliance, we implemented inductors using Metal 3 and set W= 10 µmas the default trace width. This width offers a low- resistance path that enhances Qwhile maintaining manageable parasitic capacitance and sufficient pitch for lithographic reliability. Metal 3 was selected for its favorable trade-off between thickness and routing density—it is thick enough to mitigate skin-effect losses at high frequencies while offering sufficient flexibility for compact layout integration. The implemented spiral inductor geometry is shown in Figure 14(c). Table 15 summarizes the DRC-compliant tuning ranges, estimated layout areas, and decomposition strategies for single-cell passive components in our layout library. Table 15: Single-cell passive component limits based on DRC and associated layout area costs. Component Tunable Variable Value Range Area Range Decomposition Rule Resistor Length L 4.32–20.42 Ω 10.44–34.36 µm2Series if >max, parallel if <min Capacitor Width W 46.32–1042.4 fF 176–3344 µm2Parallel if >max, series if <min Inductor Radius R ≥0.1 nH ≥5640µm2Continuous radius scaling E.5 Layout Examples of Synthesized Circuits To illustrate the correspondence between schematic and layout representations, we present two representative synthesized circuits: an IFVCO and a DLNA, shown in Figure 15 and Figure 16, respectively. In the IFVCO example, the inductor labeled L3functions as an RF choke and is excluded from the on-chip layout due to its large area requirement. Instead, it is intended for off-chip implementation at the PCB level and connected to the die via wire bonding. This external connection is indicated by the yellow pad in Figure 15(b), which serves as the wire-bonding interface. Since the current stage of system lacks automated routing, all interconnects in the layout were manually drawn to ensure accurate correspondence with the schematic connectivity. These examples demonstrate that synthesized circuit parameters can be successfully translated into DRC-compliant, physically realizable layouts, bridging the gap between high-level optimization and tapeout-ready design. F Practical Considerations and Limitations F.1 Training and Inference Efficiency Although our codebase supports GPU acceleration, all experiments in this work—excluding initial dataset generation—were conducted entirely on a MacBook CPU. 
This highlights the efficiency and accessibility of the FALCON pipeline, which can be executed on modest hardware without specialized infrastructure.

Figure 15: Stage 3 results for a synthesized IFVCO: (a) designed IFVCO schematic; (b) layout of designed IFVCO. The schematic (a) reflects optimized parameters to meet the target specification. The layout (b) is DRC-compliant and physically realizable. The final design achieves a mean relative error of 1.3% compared to the target performance.

Figure 16: Stage 3 results for a synthesized DLNA: (a) designed DLNA schematic; (b) layout of designed DLNA. The schematic (a) reflects optimized parameters to meet the target specification. The layout (b) is DRC-compliant and physically realizable. The final design achieves a mean relative error of 5.0% compared to the target performance.

Our MLP and GNN models contain 207k and 1.4M trainable parameters, respectively, with memory footprints of just 831 KB and 5.6 MB. In Stage 1, the MLP classifier trains in approximately 30 minutes with a batch size of 256 and performs inference in the order of milliseconds per batch. Stage 2’s GNN model takes around 3 days to train on the full dataset using the same batch size and hardware. Fine-tuning on an unseen topology (e.g., RVCO) using ∼30,000 samples completes in under 30 minutes. In Stage 3, the pretrained GNN is used without retraining to perform layout-constrained parameter inference via gradient-based optimization. Inference is conducted one instance at a time (batch size 1), with typical runtimes under 1 second per circuit. Runtime varies based on the convergence threshold and circuit complexity but remains below 2–3 seconds in the worst case across the full test set. A solution is considered successful if the predicted performance meets the target within a specified relative error threshold. While tighter thresholds (e.g., 5%) improve accuracy, they require more optimization steps—particularly over large datasets. As a result, both success rate and inference time in Stage 3 are directly influenced by this tolerance, which can be tuned based on design fidelity requirements.

F.2 Limitations

This work focuses on a representative set of 20 curated analog topologies spanning five circuit families. While this enables rigorous evaluation and benchmarking, extending support to arbitrary user-defined netlists is a natural next step. Notably, our GNN demonstrates strong generalization even to topologies not seen during training (e.g., RVCO), suggesting broad applicability within this space. All interconnect routing in the layout phase is currently performed manually, as the flow does not yet integrate a full analog router. This decision allows precise control over parasitic management and DRC compliance but limits scalability for more complex designs. Integration with automated layout tools can further streamline this process. We do not perform repeated training runs for each model configuration, as the pipeline is designed to be computationally efficient and executable on CPU-only hardware. Unlike benchmarking-focused pipelines, our flow incorporates analog simulation and layout-aware inference, both of which are costly to rerun at scale. Moreover, simulation and layout processes are deterministic, eliminating the need for result averaging over multiple random seeds.
arXiv:2505.21926v1 [cs.CL] 28 May 2025Beyond Completion: A Foundation Model for General Knowledge Graph Reasoning Yin Hua♠, Zhiqiang Liu♠, Mingyang Chen♠, Zheng Fang♣, Chi Man Wong♣ ♢, Lingxiao Li♣,Chi Man VONG♢,Huajun Chen♠ ♡,Wen Zhang♠† ♠Zhejiang University ♣Shopee Pte.Ltd., ♢University of Macau ♡Zhejiang Key Laboratory of Big Data Intelligent Computing {22351088,zhang.wen}@zju.edu.cn Abstract In natural language processing (NLP) and com- puter vision (CV), the successful application of foundation models across diverse tasks has demonstrated their remarkable potential. How- ever, despite the rich structural and textual information embedded in knowledge graphs (KGs), existing research of foundation model for KG has primarily focused on their struc- tural aspects, with most efforts restricted to in- KGtasks (e.g., knowledge graph completion, KGC). This limitation has hindered progress in addressing more challenging out-of-KG tasks. In this paper, we introduce MERRY , a founda- tion model for general knowledge graph reason- ing, and investigate its performance across two task categories: in-KG reasoning tasks (e.g., KGC) and out-of-KG tasks (e.g., KG question answering, KGQA). We not only utilize the structural information, but also the textual in- formation in KGs. Specifically, we propose a multi-perspective Conditional Message Passing (CMP) encoding architecture to bridge the gap between textual and structural modalities, en- abling their seamless integration. Additionally, we introduce a dynamic residual fusion module to selectively retain relevant textual informa- tion and a flexible edge scoring mechanism to adapt to diverse downstream tasks. Compre- hensive evaluations on 28 datasets demonstrate that MERRY outperforms existing baselines in most scenarios, showcasing strong reasoning capabilities within KGs and excellent general- ization to out-of-KG tasks such as KGQA. 1 Introduction Knowledge graphs (KGs) are structured knowledge bases that represent entities and their relationships, providing a foundation for reasoning and informa- tion retrieval in various real-world domains. With their rich entity representations and rigorous logi- cal connections, KGs have become integral to ap- plications such as classification (Liu et al., 2023), † Corresponding authorrecommendation (Guo et al., 2020), knowledge re- trieval (Xu et al., 2024), and QA systems (Ji et al., 2022), as well as knowledge-grounded LLM align- ment (Liu et al., 2025). Recently, foundation models in NLP and CV Raffel et al. (2023); ChatGPT and Barnes (2023); Li et al. (2024); Ravi et al. (2024) have demonstrated significant advancements in transfer learning, enabling improved performance across datasets and tasks. Inspired by these successes, researchers have developed foundational models for KGs that aim to generalize across datasets and adapt to diverse reasoning tasks. KGs naturally encompass both structural and textual information, yet existing research has predominantly focused on leveraging their structural aspects, with relatively limited attention to the textual modality (Galkin et al., 2024; Zhu et al., 2021; Teru et al., 2020; Geng et al., 2022; Chen et al., 2022; Liu et al., 2024). However, fully utilizing both modalities is crucial, as textual information provides contextual knowledge that complements structural represen- tations. 
This integration is particularly important for downstream applications such as commonsense reasoning and KGQA, where the combination of relational and contextual knowledge significantly enhances task performance (Yasunaga et al., 2021; Zhang et al., 2021; Markowitz et al.,
2022). In addition, prior work has largely been restricted to in-KG reasoning tasks, such as KG Completion (KGC), and has not adequately addressed the chal- lenges posed by out-of-KG reasoning tasks, such as KGQA. out-of-KG tasks require models to gen- eralize beyond the explicit structure of KGs, incor- porating both modalities to handle more complex reasoning scenarios. Overcoming these limitations involves address- ing three key challenges in model design: (1) miti- gating the semantic disparity between textual and structural information to facilitate effective integra- tion; (2) balancing the contributions of textual and structural modalities to suit diverse task require- ments, particularly for reasoning beyond KGs; and (3) maintaining an unbiased training procedure to enable robust generalization across datasets with- out favoring specific entities or relations (Wang et al., 2022; Markowitz et al., 2022). To address these challenges, we propose the Multi-pErspective Reasoning sYstem, MERRY , a universal knowledge graph reasoning framework. MERRY integrates textual and structural informa- tion through a global structural semantic encoding module (GCMP), designed to reconcile their se- mantic differences. To enhance adaptability, we introduce a dynamic text-adaptive fusion module (DTAF) that selectively preserves essential tex- tual information, facilitating effective application across a range of tasks. Furthermore, we develop a flexible edge scoring mechanism that adjusts adap- tively to meet the specific requirements of down- stream tasks, thereby enhancing the model’s trans- ferability across diverse reasoning scenarios. Both in-KG (zero-shot KGC) and out-of-KG (KGQA) tasks are evaluated in our MERRY . Re- sults across 28 datasets demonstrate that MERRY consistently outperforms multiple benchmark mod- els in both tasks, highlighting its robust generaliza- tion and adaptability. Our codes are released to the GitHub1. The main contributions of this paper are as follows: •We propose a novel framework for addressing in-KG and out-of-KG reasoning tasks, inte- grating textual and structural modalities. •We propose MERRY as a foundation model for general KG reasoning. By harmonizing structural and textual information, the frame- work achieves effective integration and en- sures smooth transferability across reasoning tasks with varying modality demands. •We validate MERRY’s performance on 28 datasets, demonstrating its effectiveness in zero-shot KGC and KGQA, with consistent improvements over multiple benchmarks. 2 Related Work Inductive Knowledge Graph Completion KG Completion (KGC) is a fundamental task for rea- soning over knowledge graphs. Its evolution can be categorized into three stages. Early work focused 1https://github.com/zjukg/MERRYon the transductive setting, where KGs are static, and entity and relation representations are precom- puted and stored (Bordes et al., 2013; Sun et al., 2019; Vashishth et al., 2020). Real-world KGs, however, are dynamic (Cui et al., 2022), requiring inductive methods to han- dle unseen entities and relations (Teru et al., 2020; Geng et al., 2022). These approaches rely on su- pervised training, limiting their generalization to unseen datasets and diverse KGC tasks. Recent efforts leverage pre-training paradigms from NLP and CV . For example, ULTRA (Galkin et al., 2024) identifies meta-topology types in KG structures, enabling zero-shot transfer through dataset-agnostic representations of entities and re- lations. 
Nevertheless, it remains limited to structural information and does not incorporate textual modalities, which
are critical for contextual rea- soning. Moreover, it focuses exclusively on in-KG reasoning tasks, neglecting out-of-KG tasks. Text-aware Knowledge Graph Completion While earlier studies emphasized KG structures, recent work explores textual information for im- proved reasoning. BLP and StAR enhance repre- sentation learning by initializing embedding tables with language models (LMs) (Daza et al., 2021; Wang et al., 2021). StATik (Markowitz et al., 2022) combines LMs and graph neural networks (GNNs) by encoding node text with LMs and capturing structural information via message passing. Although these methods integrate textual and structural modalities effectively, their reliance on fine-tuning limits generalization to unseen datasets or tasks (Galkin et al., 2024). Additionally, they remain limited to in-KG reasoning tasks and lack the flexibility to address out-of-KG tasks, such as Knowledge Graph Question Answering (KGQA), which demands broader integration of textual and structural information. Knowledge Graph Question Answering KGQA represents a key out-of-KG reasoning task. It links topic entities in queries to detailed KG, improving answer accuracy through relational and contextual reasoning (Wang et al., 2019). Early methods used dual-tower architectures combining graph- and textual features with min- imal interaction between modalities (Yang et al., 2019). Later approaches trained LMs on KG data to extract implicit knowledge and generate effec- tive subgraphs for QA (Mihaylov and Frank, 2018; Lin et al., 2019; Feng et al., 2020; Lv et al., 2020). Recent advancements include QA-GNN, which jointly updates LM and GNN layers through message passing (Yasunaga et al., 2021), and GreaseLM, which enhances LM-GNN integration by aligning GNN and Transformer layers for com- prehensive information fusion (Zhang et al., 2021). However, KGQA methods focus solely on out- of-KG reasoning tasks, while most KGC methods are confined to in-KG reasoning. This task-specific specialization highlights a key limitation: the lack of a unified framework capable of addressing both in-KG and out-of-KG reasoning effectively. 3 Task Definition A KG with textual information is defined as G= {E,R,T,D}, where EandRare the set of entities and relations, Dis the set of textual descriptions for entities and relations. The set of factual triples in the KG is denoted as T={(eh, r, et)|eh, et∈ E, r∈ R} , where eh, et∈ Eandr∈ R. Inductive KGC. KG Completion (KGC) task aims to predict the correct entity efrom the given KGGfor query (h, r,?)or(?, r, t). In particular, inductive KGC tasks aim to train a score function based on the train KG Gtr={Etr,Rtr,Ttr,Dtr}. Considering the different inductive settings of the test KG Gte={Ete,Rte,Tte,Dte}, we can cat- egorize the evaluation into: (1) KG containing only unseen entities , which satisfies Etr̸=Ete andRtr=Rte;(2) KG containing both un- seen entities and unseen relations, which satisfies Etr̸=EteandRtr̸=Rte. KGQA. Given a query question and several answer options C, the KGQA task aims to retrieve subgraph from the KG Gand predict the correct answer a∈ C. To maintain consistency with the KGC task format, we define query as q= (question, REL _the_answer _is,?), where REL _the_answer _isis an auxiliary relation specifically introduced to establish a connec- tion between the query and its corresponding correct answer node. Additionally, a subgraph retrieved from the
whole KG is represented as G_sub = {E_sub, R_sub, T_sub, D_sub} with entities E_sub = {E_topic, E_option, E_other}, where E_topic represents the entity mentioned in the question q, E_option represents the entity mentioned in the options, and E_other encompasses entities within the subgraph that do not carry particular contextual significance. The goal is to identify the correct answer option such that the triple (question, REL_the_answer_is, answer) is logically valid.

4 Methodology

A detailed breakdown of MERRY's components is presented in this section, as illustrated in Figure 1. MERRY adopts an encoder-decoder architecture, and its processing can be formalized as follows:

$\mathrm{scores} = \mathrm{MERRY}(q, \mathcal{G}, \mathcal{C})$    (1)

where q is the query, G is the graph containing relevant textual descriptions, and C are the candidates to be predicted. For KGC, C corresponds to candidate entities, while for KGQA, it includes all possible answer options. MERRY produces a probability distribution over the candidates, where higher scores reflect a higher likelihood of correctness.

Figure 1: Overview of the MERRY Framework. (A) All tasks, including KGC and KGQA, are unified under a standardized query representation. (B) The data processing pipeline comprises two main components: (1) relation graph construction to model meta-relations, and (2) edge scoring to assign task-specific weights to edges. (C) The MERRY architecture processes these graphs through QCMP, GCMP, and a multi-perspective dynamic fusion module. In the decoder, the query node is represented as the Query embedding, while candidate nodes serve as Key embeddings, outputting a probability distribution over all candidates.

In the encoding phase, MERRY encodes the graph structure to derive its structural representation (Section 4.2) and explores strategies to effectively integrate textual and structural information (Section 4.3). A multi-perspective fusion module further enhances this process, enabling robust feature integration while preserving key textual semantics (Section 4.4). Additionally, we employ a flexible edge scoring mechanism to adapt to different tasks (Section 4.5). In the decoding phase, a flexible cross-attention decoder facilitates adaptation to diverse downstream tasks, including zero-shot KGC and KGQA.

4.1 Conditional Message Passing

MERRY adopts Conditional Message Passing (CMP) as the basic GNN unit. Compared to traditional message-passing neural networks (MPNNs) like GCN (Kipf and Welling, 2017), GAT (Veličković et al., 2018), and GraphSAGE (Hamilton et al., 2017), CMP explicitly conditions the representation of a target node v on both a source node u and a query relation r_q. For detailed architectural specifications of this conditioning mechanism, see Huang et al. (2023). This process generates pairwise contextualized representations that dynamically adapt to the structural and semantic constraints imposed by (u, r_q), enabling direct modeling of triple-level interactions (Zhu et al., 2021; Zhang and Yao, 2022; Galkin et al., 2024). Formally, the CMP process can be defined as:

$\mathbf{H}_{node} = \mathrm{INIT}(q)$    (2)
$\mathbf{H}_{node} = \mathrm{CMP}(\mathbf{H}_{node}, \mathbf{H}_{edge}, \mathcal{G})$    (3)

where INIT is a conditional initialization
function that initializes node representations conditioned on the query q. It can be flexibly adapted for specific scenarios, as demonstrated in subsequent sections. H_node represents the node representations, H_edge is a learnable matrix for edge representations, and G denotes the graph structure. Detailed descriptions of the CMP calculations are provided in Appendix A. In the following sections, we develop two core modules for structural and textual encoding based on the CMP unit.

4.2 Query Conditional Structural Encoding

To handle the scenario of unseen relationships in arbitrary KGs, we follow previous works (Galkin et al., 2024; Chen et al., 2021), using the raw entity graph G and four fixed meta-relations R_meta = {h2h, h2t, t2h, t2t} to construct the corresponding relation graph. The relation graph is denoted as G_r = {R, R_meta, T_r}, where the nodes are relations derived from the entity graph G, and the edges correspond to the four types of meta-relations R_meta. Details on the construction of the triple set T_r can be found in Appendix B.

The introduction of the relation graph enables us to encode arbitrary structures. To achieve this, we propose the QCMP module, which applies CMP updates sequentially on the relation graph and the entity graph. This process yields query-conditioned representations for both relations and entities. Given a query q = (e_q, r_q, ?) and a KG G = {E, R, T, D}, we first extract its relation graph G_r and then encode it as follows:

$\mathbf{r}_r = \begin{cases} \mathbf{1}^{d}, & \text{if } r = r_q \\ \mathbf{0}^{d}, & \text{otherwise} \end{cases} \quad \text{for } r \in \mathcal{R}$    (4)

$\mathbf{R}_q = \mathrm{CMP}\left( \big\Vert_{r=1}^{|\mathcal{R}|} \mathbf{r}_r,\ \mathbf{R}_{meta},\ \mathcal{G}_r \right)$    (5)

where || is the concatenation operation, R_meta ∈ R^{4×d} is a learnable matrix corresponding to the four types of meta-relations, and G_r is the relation graph constructed from G. The conditional initialization function assigns an all-ones embedding 1^d to the query relation r_q, while all other relations are initialized with an all-zeros embedding 0^d, where d is the dimension of the embeddings. The final output R_q represents the query-conditioned relation embeddings. Subsequently, we update the entity graph with the CMP module:

$\mathbf{h}_e = \begin{cases} \mathbf{R}_q[r_q], & \text{if } e = e_q \\ \mathbf{0}^{d}, & \text{otherwise} \end{cases} \quad \text{for } e \in \mathcal{E}$    (6)

$\mathbf{H}_q = \mathrm{CMP}\left( \big\Vert_{e=1}^{|\mathcal{E}|} \mathbf{h}_e,\ \mathbf{R}_q,\ \mathcal{G} \right)$    (7)

where the embedding of r_q is used as the initialization for e_q, while all other entities are initialized to all-zero embeddings. The final output H_q represents the query-conditioned entity embeddings.

4.3 Global Structural Semantic Encoding

Textual information, as intrinsic node information, can be considered global information for the nodes. However, directly merging it with the structural modality information output by QCMP can lead to ineffective fusion due to the significant difference in their semantic spaces. Therefore, we propose the GCMP module to eliminate the semantic gap and achieve a more comprehensive modality fusion.

Specifically, we employ a Large Language Model (LLM) to encode textual information. However, since CMP requires features for all nodes in the graph as input, the substantial size of LLM weights can lead to an out-of-memory (OOM) risk. Therefore, we adopt a parameter-free strategy that extracts the representation of the last token from the LLM output to derive textual features for all nodes. The process of GCMP can be formalized as follows:

$\mathbf{R}_g = \mathrm{CMP}(\mathbf{1}^{|\mathcal{R}| \times d},\ \hat{\mathbf{R}}_{meta},\ \mathcal{G}_r)$    (8)

$\mathbf{H}_g = \mathrm{CMP}(\mathbf{X}_e,\ \mathbf{R}_g,\ \mathcal{G})$    (9)

where \hat{R}_meta ∈ R^{4×d} represents a learnable matrix for meta-relations from the textual perspective, and X_e represents the textual embeddings of all entities
ob- tained via the parameter-free strategy. Specifically, each relation is initialized as an all-ones embed- ding, while the entity graph uses the textual embed- dingsXeas the initial representations. By applying this sequential CMP update process, we generate the global semantic embeddings for relations Rg and entities Hg. 4.4 Multi-Perspective Dynamic Fusion Multi-Channel CMP Fusion As discussed ear- lier, MERRY encodes entities and relations from both query-specific and global perspectives through QCMP and GCMP, respectively. To integrate the outputs of these two CMP channels, we employ a multi-layer perceptron (MLP) for fusion: RCMP =MLP [Rq||Rg] (10) HCMP =MLP [Hq||Hg] (11) Dynamic Text-Adaptive Fusion Although multi-channel CMP fusion bridges structural and textual information, empirical observations indicate that tasks such as KGC and KGQA place differing levels of emphasis on textual features. To accommodate this variability and dynamically pre- serve task-specific textual information, we furtherpropose a Dynamic Text-Adaptive Fusion ( DTAF ) module. Specifically, we adopt a parameterized cross-attention mechanism to encode input textual descriptions d∈ D into fixed-length embeddings: X=Attn Qtoken,LM(d),LM(d) (12) where Qtoken∈Rk×drepresents trainable query parameters, kis a tunable hyperparameter, and LM(d)serves as both the Key and Value in the cross-attention mechanism. DTAF aggregates token-level information into meaningful represen- tationsXwhile avoiding information loss. Building on the textual embeddings, DTAF adap- tively fuses textual and structural features using learnable weights αandβ, balancing their contri- butions based on task requirements: Xr=Attn Qtoken,LM(Dr),LM(Dr) (13) Xe=Attn Qtoken,LM(De),LM(De) (14) Rf=α∗Xr+ (1−α)∗RCMP, (15) Hf=β∗Xe+ (1−β)∗HCMP, (16) where DrandDeare the textual descriptions of relations and entities, respectively. The outputs Xr andXerepresent the textual features of relations and entities, respectively. The fused embeddings RfandHfare unified representations that inte- grate three different perspectives. 4.5 Query Conditional Edge Scoring Edge scores in MPNNs are crucial for model perfor- mance and vary significantly across tasks. To adapt to these differences, we design a flexible module tailored to task-specific requirements. In KGC tasks, most methods focus on message passing and aggregation, often setting all edge scores to 1 (Veli ˇckovi ´c et al., 2018). But in KGQA tasks, noisy paths in the retrieved subgraph necessi- tate more refined edge scoring. Compared to node relevance scores, edge scores capture richer inter- actions among the head entity, relation, and tail entity, offering a more accurate relevance measure for the query (Yasunaga et al., 2021). For each edge (h, r, t )in the subgraph, its query relevance is calculated using a bilinear layer: η=Norm [xh||xr||xt]⊤Wx q , (17) where W∈R3d×dis the bilinear coefficient, xh, xr,xt,xq∈Xrepresent the textual features of(h, r, t )and the query q, obtained using the parameter-free method introduced in Section 4.3. The output η∈R2×1includes relevance and irrele- vance scores, normalized with a Softmax function. The relevance score is then used in the update func- tion of CMP. 4.6 Training Mechanism Self-Supervised Pre-Training The encoding process of MERRY is both entity-agnostic and relation-agnostic, making it suitable for inductive scenarios and allowing pre-training on arbitrary or hybrid KGs. 
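To ground Sections 4.4 and 4.5 before describing the pre-training loss, the following PyTorch-style sketch shows one possible reading of the DTAF module (Eqs. 12-16) and the query-conditional edge scorer (Eq. 17). It is illustrative rather than the released implementation: the pooling of the k query slots, the scalar gating parameterization, and the use of a two-output bilinear layer for the relevance scores are assumptions not fixed by the paper.

```python
import torch
import torch.nn as nn


class DTAF(nn.Module):
    """Dynamic Text-Adaptive Fusion (Section 4.4), minimal sketch.

    Cross-attention with k trainable query slots compresses token-level LM
    features into a fixed-length text representation (Eqs. 12-14), which is
    then mixed with the multi-channel CMP features via a learnable gate
    (Eqs. 15-16).
    """

    def __init__(self, dim: int, k: int = 4, num_heads: int = 4):
        super().__init__()
        self.q_tokens = nn.Parameter(torch.randn(k, dim))          # Q_token in Eq. 12
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))                    # alpha (or beta) before sigmoid

    def forward(self, lm_tokens: torch.Tensor, cmp_feat: torch.Tensor) -> torch.Tensor:
        # lm_tokens: [N, T, dim] token features of entity/relation descriptions
        # cmp_feat : [N, dim]    fused QCMP + GCMP features (Eqs. 10-11)
        n = lm_tokens.size(0)
        q = self.q_tokens.unsqueeze(0).expand(n, -1, -1)            # [N, k, dim]
        x, _ = self.attn(q, lm_tokens, lm_tokens)                   # Eq. 12: cross-attention
        x = x.mean(dim=1)                                           # pool the k slots (assumption)
        a = torch.sigmoid(self.gate)                                # keep the mixing weight in (0, 1)
        return a * x + (1.0 - a) * cmp_feat                         # Eq. 15 / Eq. 16


class EdgeScorer(nn.Module):
    """Query-conditional edge scoring (Section 4.5), minimal sketch of Eq. 17.

    A bilinear layer maps the concatenated textual features of (h, r, t) and
    the query feature to relevance/irrelevance logits, which are then
    softmax-normalized.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.bilinear = nn.Bilinear(3 * dim, dim, 2)                # plays the role of W in Eq. 17

    def forward(self, x_h, x_r, x_t, x_q):
        # x_h, x_r, x_t, x_q: [E, dim] textual features of the edge parts and the query
        edge = torch.cat([x_h, x_r, x_t], dim=-1)                   # [E, 3*dim]
        eta = torch.softmax(self.bilinear(edge, x_q), dim=-1)       # [E, 2] relevance vs. irrelevance
        return eta[:, 0]                                            # relevance score s
```

In MERRY, the resulting relevance score weights each neighbor's message inside the CMP update (Eq. 21 in Appendix A), so noisy paths in the retrieved KGQA subgraph contribute less to the aggregated representation.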
The pre-training task employs self-supervised link prediction, with binary cross-entropy loss for positive and negative samples (Sun et al., 2019; Zhu et al., 2021):

$\mathcal{L} = -\log p(q, ans) - \sum_{i=1}^{n} \frac{1}{n} \log\big(1 - p(q, neg\_ans_i)\big)$    (18)

where q is the query prefix of the triple (h, r, ?), and ans is the tail entity t that makes (h, r, t) valid in the knowledge graph. Negative samples are generated by randomly selecting tail entities. MERRY is pre-trained on multiple hybrid KG datasets, which equips it with generalizable transferability across diverse knowledge graphs.

Task Adaptation. For the KGC task, the model is evaluated in a zero-shot setting without fine-tuning, using the same process as pre-training. For the KGQA task, input questions are summarized as a combination of the query and the retrieved subgraph. The query is formalized as q = (question, REL_the_answer_is, ?), where the candidates are the possible options. The goal is to select the correct answer such that (question, REL_the_answer_is, answer) forms a valid triple, where REL_the_answer_is is a newly introduced relation.

We adapt the data in three steps. First, a question-node is introduced to represent the input question, connected to all topic entities via a new relation; its text description is the question itself. Additionally, each candidate option is represented by an answer-node, connected to the entities in the option via a special relation; its text description is the original text of the option. Finally, we introduce a new relation, REL_the_answer_is, which connects the question-node to the correct answer-node. Since REL_the_answer_is lacks neighboring nodes in the relation graph, we adopt a few-shot approach. Using Sentence-BERT (Reimers and Gurevych, 2019), we compute sentence embeddings for each question and retrieve the top-K most similar questions based on cosine similarity. These few-shot examples are used to enrich the instances of REL_the_answer_is. With these modifications, MERRY can seamlessly transfer to perform the KGQA task.

5 Experiments

We evaluate MERRY on 28 datasets across two tasks: Inductive Knowledge Graph Completion (KGC) and Knowledge Graph Question Answering (KGQA). Our evaluation focuses on the following research questions: RQ1: How effective is MERRY in reasoning for in-KG tasks under a zero-shot setting? RQ2: Can MERRY effectively transfer and generalize to out-of-KG tasks? RQ3: What is the impact of key components on the performance of MERRY? RQ4: How do key hyperparameters affect the performance of MERRY?

5.1 Datasets and Metrics

Inductive KGC. We perform zero-shot inductive KGC experiments on 27 datasets, categorized by entity and relation visibility: (1) Inductive Entity (e) Datasets (IndE): These datasets feature unseen entities in the test set, with fixed relations. This category includes 12 datasets from (Teru et al., 2020): WN18RR (WN), FB15k-237 (FB), and NELL-995 (NL), each with four different versions. (2) Inductive Entity and Relation (e, r) Datasets (IndER): These datasets include unseen entities and relations in the test set. This category comprises 13 graphs from (Lee et al., 2023): FB15k-237 (FB) and Wikidata68K (WK), each with four versions, and NELL-995 (NL), which has five versions. We report Mean Reciprocal Rank (MRR) and Hits@10 results.

KGQA. We use the CommonsenseQA (CSQA) dataset (Talmor et al., 2019), which focuses on commonsense reasoning. It consists
of 12,102 multiple- choice questions. We follow the in-house split method from (Lin et al., 2019) for experiments and compare our results with several baseline models. We report Accuracy (Acc) on the CSQA dataset. For detailed information on datasets and metric computation formulas, refer to Appendix C and Appendix D, respectively. 5.2 Baselines Inductive KGC We compare MERRY against state-of-the-art supervised methods and recent KG MethodsIndE(WN) IndE(FB) IndE(NL) IndER(FB) IndER(WK) IndER(NL) Total A VGSOTA Num MRR Hits@10 MRR Hits@10 MRR Hits@10 MRR Hits@10 MRR Hits@10 MRR Hits@10 MRR Hits@10 Supervised SOTA 0.640 0.734 0.477 0.636 0.464 0.654 0.166 0.296 0.152 0.244 0.296 0.481 0.366 0.507 - ULTRA(3g) 0.517 0.678 0.486 0.667 0.561 0.742 0.386 0.599 0.254 0.403 0.393 0.561 0.433 0.608 4 / 24 ProLINK 0.553 0.690 0.494 0.684 0.546 0.759 0.372 0.591 0.234 0.393 0.400 0.590 0.433 0.618 8 / 24 MERRY 0.563 0.709 0.486 0.662 0.567 0.767 0.378 0.592 0.282 0.443 0.397 0.586 0.445 0.626 12 / 24 MERRY PNA 0.559 0.694 0.484 0.660 0.560 0.754 0.359 0.584 0.261 0.426 0.384 0.569 0.435 0.615 - Table 1: Zero-shot and supervised SOTA performance on 24 KG inductive reasoning datasets. The best results across baselines, supervised methods, and MERRY are bolded . The second-best results are underlined . The SOTA Num column indicates the number of datasets where each method achieves SOTA performance. foundation models, including ULTRA and Pro- LINK (Galkin et al., 2024; Wang et al., 2024), for zero-shot learning. Here, ULTRA(3g) refers to pre-training on three graphs. KGQA For KGQA, we use a fine-tuned stan- dard LM as the baseline for models without exter- nal knowledge. Additionally, we evaluate several LM+KG-based methods, including RN (Santoro et al., 2017), RGCN (Schlichtkrull et al., 2017), GconAttn (Wang et al., 2018), KagNet (Lin et al., 2019), MHGRN (Feng et al., 2020), QA-GNN (Ya- sunaga et al., 2021), and GreaseLM (Zhang et al., 2021). Among these, the best-performing models synchronize updates between the LM and GNN, enabling mutual interaction between textual and structural modalities. 5.3 Implementation & Training details We pre-train MERRY on three hybrid knowledge graph datasets: WN18RR, CoDEx-Medium, and FB15k237, to capture diverse relational struc- tures and sparsity patterns (Dettmers et al., 2018; Toutanova and Chen, 2015; Safavi and Koutra, 2020). Based on ULTRA, we set QCMP to a 6- layer CMP and GCMP to a 3-layer CMP, with each hidden layer having a dimension 64. To enhance convergence, we employ a two-stage training strat- egy: (1) QCMP weights from ULTRA are frozen, and other modules, particularly GCMP, are trained. (2) All components are unfrozen, allowing QCMP and other modules to converge jointly. During train- ing, the LM backbone remains frozen. For Inductive KGC, we evaluate the zero-shot ca- pability of the pre-trained model directly on down- stream datasets, using the Llama3 8B LM backbone (Grattafiori et al., 2024). For KGQA, due to the substantial gap between pre-training and the downstream task, we fine-tune the model with three few-shot examples before test- ing. Considering commonsense reasoning requires alignment with human cognitive preferences, we use the Llama3 8B Instruct backbone.Methods IHdev-Acc. (%) IHtest-Acc.
(%) RoBERTa-Large 73.1 68.7 LLaMA-3-8b-instruct 72.9 71.9 RGCN 72.7 68.4 GconAttn 72.6 68.6 KagNet 73.5 69.0 RN 74.6 69.1 MHGRN 74.5 71.1 QA-GNN 76.5 73.4 GreaseLM 78.5 74.2 MERRY 78.6 74.9 Table 2: Performance comparison on CommonsenseQA in-house split (controlled experiments). 5.4 Main Results (RQ1) We compare MERRY with baselines on 27 induc- tive link prediction KG datasets, categorized into 7 benchmarks based on data sources. For a fair com- parison, datasets IndE (ILPC-small), IndE (ILPC- large), and IndER (NL-0) are excluded. Table 1 presents the average results across 6 benchmarks, 24 datasets. A full comparison of results across 27 datasets is provided in Appendix E. Four benchmarks, IndE(X) from (Teru et al., 2020), contain unseen entities in the test graph. In contrast, the IndER (X) benchmark from (Lee et al., 2023) includes unseen entities and relations, mak- ing it significantly more challenging. Among all dataset benchmarks, IndER (WK), IndE (NL), and IndER (NL) contain entities and relations unseen during pre-training, providing a strong evaluation of the model’s zero-shot generalization capability. Table 1 shows that MERRY outperforms baselines. Additionally, we compare MERRY with a parameter-free PNA method (Corso et al., 2020), used for encoding textual descriptions of entities and relations (4.3). From the average results, while theMERRY PNA variant shows a slight decline in performance, it demonstrates that our design re- tains a certain level of robustness. Overall, MERRY surpasses state-of-the-art su- pervised models and existing zero-shot transfer methods in total average metrics. While ULTRA and ProLINK excel on specific datasets, their per- formance is largely limited to datasets they were trained on. 5.5 Generalization to KGQA (RQ2) Table 2 compares MERRY with previous state-of- the-art methods on the CSQA dataset. MERRY achieves superior performance, surpassing all base- lines and delivering the best overall results. No- tably, compared to GreaseLM, which integrates GNN and LM layers through bidirectional interac- tions, MERRY performs comparably on the val- idation set but exceeds it on the test set. This demonstrates the effectiveness of our approach in integrating textual and structural modalities. These results highlight the robustness of our mul- timodal fusion strategy and strong generalization capabilities. Additionally, in zero-shot inference using Llama3 8b Instruct, MERRY shows signifi- cant improvement, further validating its ability to incorporate structural information without compro- mising textual understanding. 5.6 Ablation Studies (RQ3) We conducted ablation experiments on multiple datasets, including IndE(X) and IndER(X), to eval- uate the impact of two key components in our method for KGC. As shown in Figure 2, "w/o GCMP" indicates the removal of the GCMP mod- ule, where node text and structural features are in- stead concatenated and fused via an MLP. "w/o DTAF" refers to the model where DTAF is ig- nored, relying solely on CMP-based fusion for downstream predictions. Figure 2: Ablation study results. The results demonstrate a significant perfor- mance drop in the "w/o GCMP" variant, highlight- ing its critical role in bridging the gap between tex- tual and structural modalities for better integration. In contrast, the "w/o DTAF" variant shows a slight performance decline, indicating that while original text features aid KGC,
DTAF primarily enhances the understanding of structural information. Similarly, we conducted ablation experiments on the CSQA dataset, as shown in Table 3. An addi- tional variant, "w/o Edge Scoring", sets all edgeEdge Scoring DTAF IHdev-Acc. (%) IHtest-Acc. (%) ✓ ✓ 78.6 74.9 ✓ 77.7 75.0 71.4 70.7 Table 3: Ablation results of the edge scoring mechanism and DTAF module on the CSQA dataset. scores to 1, similar to the KGC tasks. The results indicate that DTAF significantly impacts KGQA performance, highlighting the importance of text feature understanding in these tasks and its role in preserving the LM’s text processing capability. Moreover, ignoring edge scores results in a per- formance decline, underscoring the importance of edge weights in KGQA. 5.7 Hyperparameter Sensitivity (RQ4) Figure 3: Performance of different GCMP layers in KGC and different numbers of shots in KGQA. We investigated the impact of GCMP layers on zero-shot KGC tasks and assessed the role of few- shot learning in KGQA. As illustrated in Figure 3, using too few GCMP layers results in poor con- vergence, while excessive layers lead to feature smoothing. Aggregating information from up to three hops strikes an optimal balance, enabling ef- fective performance. For KGQA, the introduction of few-shot learn- ing proves essential. As expected, zero-shot per- formance is initially poor. However, as the number of shots increases, performance stabilizes, demon- strating the model’s capacity to rapidly adapt and learn new relationships with minimal data. 5.8 Computational Complexity and Scalability Analysis To ensure practical applicability, we theoretically analyze MERRY’s computational efficiency under two decoupled phases: •Phase 1: LLM Text Encoding Complexity scales as O(|V| ·TLLM), where |V|is the node count and TLLM is the per-node encod- ing time. Our parameter-free feature extrac- tion (Section 4.3) enables one-time offline pre- processing , converting TLLM into a fixed cost during model deployment. •Phase 2: CMP Graph Updates Each iter- ation requires O(|E|d+|V|d2)operations, where |E|denotes the number of edges and dis the feature dimension. This complex- ity aligns with state-of-the-art GNNs like UL- TRA (Galkin et al., 2024) and NBFNet (Zhu et al., 2021), while demonstrating significant advantages over classic inductive KGC ap- proaches. Specifically, compared to GraIL’s O(|E|d2+|V|d2)complexity for closed sub- graph encoding (Teru et al., 2020), MERRY achieves a d-fold reduction in edge-related computation, making it particularly advanta- geous for graphs with large edge sets or high- dimensional features. Scalability Advantages: Based on the above time-complexity analysis, MERRY demonstrates strong scalability on large-scale graphs. By de- coupling the LLM encoding phase, all node tex- tual features can be precomputed offline at a cost ofO |V| ·TLLM and then stored and retrieved via a distributed system. Furthermore, the CMP graph-update complexity shows that, for a fixed hidden-layer dimension d, MERRY’s online com- putation O |E|d+|V|d2 is substantially lower than the O |E|d2+|V|d2 required by classical approaches. Together, these results demonstrate that our framework achieves a favorable trade-off between performance and efficiency. 6 Conclusion In this paper, we introduced MERRY , a general knowledge graph reasoning framework that bridges textual and structural modalities through multi- channel CMP encoding and multi-perspective dy- namic
fusion mechanisms. Additionally, we pro- posed a flexible edge scoring mechanism to adapt to diverse downstream tasks. Experiments across 28 datasets demonstrate MERRY’s strong gener- alization capabilities in in-KG tasks, such as zero- shot KGC, and its adaptability to out-of-KG tasks, such as KGQA, highlighting its potential as a uni- fied framework for reasoning across in-KG and out-of-KG tasks.Acknowledgment This work is founded by National Natural Sci- ence Foundation of China (NSFC62306276/NS- FCU23B2055/NSFCU19B2027), Zhejiang Provin- cial Natural Science Foundation of China (No. LQ23F020017), Yongjiang Talent Introduction Pro- gramme (2022A-238-G), and Fundamental Re- search Funds for the Central Universities (226- 2023-00138). Limitations Here, we discuss three limitations of this work. First, through hyperparameter tuning experiments, it is evident that the CMP module’s depth has limi- tations. A higher number of layers leads to feature smoothing, which is a challenge commonly faced by models incorporating GNN architectures. Sec- ond, we assumed that each entity and relation in the KG dataset has a corresponding textual description. However, our investigation discovered that some datasets need better maintenance, resulting in miss- ing textual fields for certain entities. This issue of data completeness poses challenges for approaches that rely on language models. Finally, while LLM have demonstrated significant potential across vari- ous tasks, they face unique challenges in the in-KG task. Due to the size of the graph, encoding all nodes becomes particularly difficult, not only in- troducing substantial time and memory overhead during training but also consuming considerable storage space for offline feature storage. Efficiently leveraging LLMs in the in-KG tasks thus remains a crucial area for future exploration. References Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. Advances in neural information pro- cessing systems , 26. ChatGPT and Richard L. Barnes. 2023. AI sarcasm detection: Insult your AI without offending it. RFC , 9405:1–5. Jiajun Chen, Huarui He, Feng Wu, and Jie Wang. 2021. Topology-aware correlations between relations for inductive link prediction in knowledge graphs. Preprint , arXiv:2103.03642. Mingyang Chen, Wen Zhang, Zhen Yao, Xiangnan Chen, Mengxiao Ding, Fei Huang, and Huajun Chen. 2022. Meta-learning based knowledge extrapolation for knowledge graphs in the federated setting. In Pro- ceedings of the Thirty-First International Joint Con- ference on Artificial Intelligence, IJCAI-22 , pages 1966–1972. International Joint Conferences on Arti- ficial Intelligence Organization. Main Track. Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veli ˇckovi ´c. 2020. Principal neighbourhood aggregation for graph nets. Preprint , arXiv:2004.05718. Yuanning Cui, Yuxin Wang, Zequn Sun, Wenqiang Liu, Yiqiao Jiang, Kexin Han, and Wei Hu. 2022. In- ductive knowledge graph reasoning for multi-batch emerging entities. Preprint , arXiv:2208.10378. Daniel Daza, Michael Cochez, and Paul Groth. 2021. Inductive entity representations from text via link prediction. In Proceedings of the Web Conference 2021 , WWW ’21, page 798–808. ACM. Tim Dettmers, Pasquale Minervini, Pontus Stene- torp, and Sebastian Riedel. 2018. Convolu- tional 2d knowledge graph embeddings. Preprint , arXiv:1707.01476. 
Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. 2020. Scalable multi- hop relational reasoning for knowledge-aware
ques- tion answering. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP) , pages 1295–1309, Online. As- sociation for Computational Linguistics. Mikhail Galkin, Max Berrendorf, and Charles Tap- ley Hoyt. 2022a. An open challenge for induc- tive link prediction on knowledge graphs. CoRR , abs/2203.01520. Mikhail Galkin, Etienne Denis, Jiapeng Wu, and William L. Hamilton. 2022b. Nodepiece: Composi- tional and parameter-efficient representations of large knowledge graphs. Preprint , arXiv:2106.12144. Mikhail Galkin, Xinyu Yuan, Hesham Mostafa, Jian Tang, and Zhaocheng Zhu. 2024. Towards founda- tion models for knowledge graph reasoning. In The Twelfth International Conference on Learning Repre- sentations . Yuxia Geng, Jiaoyan Chen, Jeff Z. Pan, Mingyang Chen, Song Jiang, Wen Zhang, and Huajun Chen. 2022. Relational message passing for fully inductive knowl- edge graph completion. Preprint , arXiv:2210.03994. Genet Asefa Gesese, Harald Sack, and Mehwish Alam. 2023. Raild: Towards leveraging relation features for inductive link prediction in knowledge graphs. In Proceedings of the 11th International Joint Confer- ence on Knowledge Graphs , IJCKG ’22, page 82–90, New York, NY , USA. Association for Computing Machinery. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al- Dahle, Aiesha Letman, Akhil Mathur, Alan Schel- ten, Alex Vaughan, Amy Yang, Angela Fan, AnirudhGoyal, Anthony Hartshorn, Aobo Yang, Archi Mi- tra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Ro- driguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Al- lonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis An- derson, Govind Thattai, Graeme Nail, Gregoire Mi- alon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Is- han Misra, Ivan Evtimov, Jack Zhang, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Jun- teng Jia, Kalyan Vasuden Alwala, Karthik Prasad, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Kushal Lakhotia, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, Maria Tsimpoukelli, Mathew Oldham, Mathieu Rita, Maya Pavlova, Melanie Kam- badur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Niko- lay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Ning Zhang, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan 
Zhang, Pengwei Li, Petar Va- sic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan,
Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohan Maheswari, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ron- nie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sa- hana Chennabasappa, Sanjay Singh, Sean Bell, Seo- hyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sha- ran Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Van- denhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Syd- ney Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Vir- ginie Do, Vish V ogeti, Vítor Albiero, Vladan Petro- vic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whit- ney Meers, Xavier Martinet, Xiaodong Wang, Xi- aofang Wang, Xiaoqing Ellen Tan, Xide Xia, Xin- feng Xie, Xuchao Jia, Xuewei Wang, Yaelle Gold- schlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aayushi Sri- vastava, Abha Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay Menon, Ajay Sharma, Alex Boesenberg, Alexei Baevski, Allie Feinstein, Amanda Kallet, Amit San- gani, Amos Teo, Anam Yunus, Andrei Lupu, An- dres Alvarado, Andrew Caples, Andrew Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchan- dani, Annie Dong, Annie Franco, Anuj Goyal, Apara- jita Saraf, Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yaz- dan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Han- cock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Ce Liu, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching- Hsiang Chu, Chris Cai, Chris Tindal, Christoph Fe- ichtenhofer, Cynthia Gao, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Eric-Tuan Le, Erik Brinkman, Este- ban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Filippos Kokkinos, Firat Ozgenel, Francesco Caggioni, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hakan Inan, Hamid Shojanaz- eri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry As- pegren, Hunter Goldman, Hongyuan Zhan, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Ilias Leontiadis, Irina-Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Janice Lam, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jen- nifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan Mc- Phie, Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Kartikay Khan- delwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Ki- ran Jagadeesh, Kun Huang, 
Kunal
Chawla, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu, Liron Moshkovich, Luca Wehrst- edt, Madian Khabsa, Manav Avalani, Manish Bhatt, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Keneally, Miao Liu, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Pa- tel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso,Mo Metanat, Mohammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikhil Mehta, Nikolay Pavlovich Laptev, Ning Dong, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pe- dro Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Rangaprabhu Parthasarathy, Raymond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Russ Howes, Ruty Rinott, Sachin Mehta, Sachin Siby, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Mahajan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lind- say, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shishir Patil, Shiva Shankar, Shuqiang Zhang, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Summer Deng, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Koehler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun V ontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wen- wen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaojian Wu, Xiaolan Wang, Xilun Wu, Xinbo Gao, Yaniv Kleinman, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yu Zhao, Yuchen Hao, Yundi Qian, Yunlu Li, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, Zhiwei Zhao, and Zhiyu Ma. 2024. The llama 3 herd of models. Preprint , arXiv:2407.21783. Qingyu Guo, Fuzhen Zhuang, Chuan Qin, Hengshu Zhu, Xing Xie, Hui Xiong, and Qing He. 2020. A survey on knowledge graph-based recommender systems. Preprint , arXiv:2003.00911. William L. Hamilton, Rex Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Proceedings of the 31st International Con- ference on Neural Information Processing Systems , NIPS’17, page 1025–1035, Red Hook, NY , USA. Curran Associates Inc. Xingyue Huang, Miguel Romero Orth, ˙Ismail ˙Ilkan Ceylan, and Pablo Barceló. 2023. A theory of link prediction via relational weisfeiler-leman on knowl- edge graphs. In Thirty-seventh Conference on Neural Information Processing Systems . Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Mart- tinen, and Philip S. Yu. 2022. A survey on knowl- edge graphs: Representation, acquisition, and appli- cations. IEEE Trans. Neural Networks Learn. Syst. , 33(2):494–514. Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. 
In International Conference on Learning Representations . Jaejun
Lee, Chanyoung Chung, and Joyce Jiyoung Whang. 2023. Ingram: Inductive knowledge graph embedding via relation graphs. Preprint , arXiv:2305.19987. Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. 2024. Llava-next-interleave: Tackling multi-image, video, and 3d in large multimodal models. arXiv preprint arXiv:2407.07895 . Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. KagNet: Knowledge-aware graph net- works for commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 2829–2839, Hong Kong, China. Association for Computational Linguistics. Ye Liu, Kai Zhang, Zhenya Huang, Kehang Wang, Yang- hai Zhang, Qi Liu, and Enhong Chen. 2023. Enhanc- ing hierarchical text classification through knowledge graph integration. In Findings of the Association for Computational Linguistics: ACL 2023 , pages 5797– 5810, Toronto, Canada. Association for Computa- tional Linguistics. Zhiqiang Liu, Mingyang Chen, Yin Hua, Zhuo Chen, Ziqi Liu, Lei Liang, Huajun Chen, and Wen Zhang. 2024. Unihr: Hierarchical representation learning for unified knowledge graph link prediction. arXiv preprint arXiv:2411.07019 . Zhiqiang Liu, Chengtao Gan, Junjie Wang, Yichi Zhang, Zhongpu Bo, Mengshu Sun, Huajun Chen, and Wen Zhang. 2025. Ontotune: Ontology-driven self- training for aligning large language models. In Pro- ceedings of the ACM on Web Conference 2025, WWW 2025, Sydney, NSW, Australia, 28 April 2025- 2 May 2025 , pages 119–133. ACM. Shangwen Lv, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, and Songlin Hu. 2020. Graph-based reasoning over heterogeneous external knowledge for commonsense question answering. Proceedings of the AAAI Conference on Artificial Intelligence , 34(05):8449–8456. Elan Markowitz, Keshav Balasubramanian, Mehrnoosh Mirtaheri, Murali Annavaram, Aram Galstyan, and Greg Ver Steeg. 2022. StATIK: Structure and text for inductive knowledge graph completion. In Find- ings of the Association for Computational Linguistics: NAACL 2022 , pages 604–615, Seattle, United States. Association for Computational Linguistics.Todor Mihaylov and Anette Frank. 2018. Knowledge- able reader: Enhancing cloze-style reading compre- hension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers) , pages 821–832, Melbourne, Australia. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2023. Exploring the limits of transfer learning with a unified text-to-text trans- former. Preprint , arXiv:1910.10683. Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Rong- hang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Va- sudev Alwala, Nicolas Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, and Christoph Feichtenhofer. 2024. Sam 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714 . Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. Preprint , arXiv:1908.10084. Tara Safavi and Danai Koutra. 2020. CoDEx: A Com- prehensive Knowledge Graph Completion Bench- mark. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 8328–8350, Online. Association for Computational Linguistics. Adam Santoro, David Raposo,
David G. T. Barrett, Ma- teusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural net- work module for relational reasoning. Preprint , arXiv:1706.01427. Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolu- tional networks. Preprint , arXiv:1703.06103. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. Preprint , arXiv:1902.10197. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowl- edge. Preprint , arXiv:1811.00937. Komal K. Teru, Etienne Denis, and William L. Hamil- ton. 2020. Inductive relation prediction by subgraph reasoning. arXiv: Learning . Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Composi- tionality , pages 57–66, Beijing, China. Association for Computational Linguistics. Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. 2020. Composition-based multi- relational graph convolutional networks. Preprint , arXiv:1911.03082. Petar Veli ˇckovi ´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Ben- gio. 2018. Graph attention networks. Preprint , arXiv:1710.10903. Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Ying Wang, and Yi Chang. 2021. Structure-augmented text representation learning for efficient knowledge graph completion. In Proceedings of the Web Confer- ence 2021 , WWW ’21, page 1737–1748. ACM. Kai Wang, Yuwei Xu, Zhiyong Wu, and Siqiang Luo. 2024. LLM as prompter: Low-resource inductive rea- soning on arbitrary knowledge graphs. In Findings of the Association for Computational Linguistics: ACL 2024 , pages 3742–3759, Bangkok, Thailand. Associ- ation for Computational Linguistics. Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. SimKGC: Simple contrastive knowledge graph completion with pre-trained language models. InProceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 4281–4294, Dublin, Ireland. Association for Computational Linguistics. Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, and Michael Witbrock. 2018. Improving natural language inference using external knowl- edge in the science questions domain. Preprint , arXiv:1809.05724. Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, and Michael Witbrock. 2019. Improving natural language inference using external knowl- edge in the science questions domain. Proceedings of the AAAI Conference on Artificial Intelligence , 33(01):7208–7215. Wenhan Xiong, Thien Hoang, and William Yang Wang. 2018. Deeppath: A reinforcement learning method for knowledge graph reasoning. Preprint , arXiv:1707.06690. Zhentao Xu, Mark Jerome Cruz, Matthew Guevara, Tie Wang, Manasi Deshpande, Xiaofeng Wang, and Zheng Li. 2024. Retrieval-augmented generation with knowledge graphs for customer service question answering. In Proceedings of the 47th International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval , SIGIR 2024, page 2905–2909. ACM. 
An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. 2019. Enhanc- ing pre-trained language representations with rich knowledge for machine reading
comprehension. InProceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics , pages 2346– 2357, Florence, Italy. Association for Computational Linguistics. Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. Qa-gnn: Rea- soning with language models and knowledge graphs for question answering. In North American Chap- ter of the Association for Computational Linguistics (NAACL) . Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D Manning, and Jure Leskovec. 2021. Greaselm: Graph reason- ing enhanced language models. In International Con- ference on Learning Representations . Yongqi Zhang and Quanming Yao. 2022. Knowledge graph reasoning with relational digraph. In Proceed- ings of the ACM Web Conference 2022 , pages 912– 924. Zhaocheng Zhu, Xinyu Yuan, Mikhail Galkin, Sophie Xhonneux, Ming Zhang, Maxime Gazeau, and Jian Tang. 2023. A*net: A scalable path-based rea- soning approach for knowledge graphs. Preprint , arXiv:2206.04798. Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhon- neux, and Jian Tang. 2021. Neural bellman-ford net- works: A general graph neural network framework for link prediction. Advances in Neural Information Processing Systems , 34. A Details of CMP Updates Given a graph G= (E,R,T), where the feature of any entity uhuand the feature of any relation is denoted as r, the update process for the (t+ 1) -th layer of CMP (Conditional Message Passing) is formalized as follows: mt+1 u=MSG (ht w,r), w∈ Nr(u), (19) ht+1 u=UPDATE ht u,AGG (mt+1 u) (20) where, we follow the settings of NBFNet, where the message function uses the parameter-free DistMult, the aggregation function employs summation, and UPDATE is implemented as a linear layer with LayerNorm. When edge scores are introduced, the message function is adjusted to incorporate relevance scores. If the relevance score for any edge is denoted as s, the modified update equations become: mt+1 u=s·MSG (ht w,r), w∈ Nr(u), (21) ht+1 u=UPDATE ht u,AGG (mt+1 u) (22) where the edge score sweights the message contri- bution from each neighbor, enhancing the model’s ability to capture relevance-specific information in graph updates. B Relation Graph Construction Given a graph G= (E,R,T), we apply the lifting function Gr=LIFT(G)to build a graph of relations Gr= (Er,Rmeta,Tr)where each node is a distinct relation type in G. Triples Tr∈(R × R meta× R) in the relation graph Grdenote interactions between relations in the original graph G, and we distinguish four such meta-relation interactions Rmeta:tail-to- head (t2h) edges, head-to-head (h2h) edges, head- to-tail (h2t) edges, and tail-to-tail (t2t) edges. Each of the four adjacency matrices can be efficiently obtained with one sparse matrix multiplication; for details, refer to Galkin et al. (2024). C Datasets Pre-Training Considering MERRY’s effective generalization across datasets, we perform pre- training using a mix of the WN18RR, FB15k237, and CodexMedium datasets. Table 4 presents the statistics of these three datasets, highlighting their data diversity. Inductive KGC Our zero-shot Inductive KG Completion (KGC) experiments are conducted on 27 datasets. Among these, 12 datasets are derived from the GraIL framework (Teru et al., 2020), which utilizes widely recognized KG bench- marks such as WN18RR (Dettmers et al., 2018), FB15k237 (Toutanova and Chen, 2015),
and NELL-995 (Xiong et al., 2018), and 2 datasets are derived from the ILPC (Galkin et al., 2022a). These datasets are designed such that the training and test- ing graphs maintain consistent relation types. Additionally, we incorporate 13 datasets from the InGram framework (Lee et al., 2023) to further assess inductive reasoning performance. These datasets are generated from three real-world knowl- edge graph benchmarks: FB15k237 (Toutanova and Chen, 2015), Wikidata68K (Gesese et al., 2023), and NELL-995 (Xiong et al., 2018). Each dataset is partitioned into subsets with varying proportions of novel relational triples, specifically 100%, 75%, 50%, and 25%, enabling evaluation under diverse inductive settings. Additionally, the NELL-995 also has a variant dataset with 0 While other KG datasets with textual descrip- tions exist, their limited accessibility precludestheir inclusion in this study. Future research may focus on evaluating these datasets. Comprehensive structural statistics for the datasets employed in this work are presented in Table 5. KGQA In our KG question answering (KGQA) experiments, the CommonsenseQA dataset is used as a representative for this type of task (Talmor et al., 2019). CSQA is a multiple-choice question- answering benchmark with five answer options per question, aimed at assessing reasoning based on commonsense knowledge. It includes a total of 12,102 questions. As the test set for CSQA is not openly accessible, evaluation can only be con- ducted biweekly through submissions to the official leaderboard. For our primary experiments, we rely on the in- house (IH) data splits introduced by (Lin et al., 2019) for training and validation purposes. The performance of our final system is also evaluated on the official test set to provide a direct comparison with existing methods. Dataset |Etr| |R tr|#Train #Validation #Test WN18RR 40.9k 11 86.8k 3.0k 3.1k FB15k-237 14.5k 237 272.1k 17.5k 20.4k CodexMedium 17.0k 51 185.5k 10.3k 10.3k Table 4: Statistics of pre-training KG datasets. D Metrics Mean Reciprocal Rank (MRR) TheMean Re- ciprocal Rank (MRR) evaluates the quality of the ranking in Knowledge Graph Completion (KGC) tasks. For a given query q, let the rank of the correct candidate be rq. The reciprocal rank is defined as 1 rq. Averaging over all queries, MRR is calculated as: MRR =1 |Q|X q∈Q1 rq(23) where Qrepresents the set of all queries. A higher MRR indicates better model performance in rank- ing the correct candidate higher in the prediction list. Hits@10 TheHits@10 metric measures the pro- portion of queries for which the correct candidate is ranked within the top 10 predictions. For a given query q, let the rank of the correct candidate be rq. Hits@10 is defined as: Hits@10 =1 |Q|X q∈Q1[rq≤10], (24) where 1[·]is an indicator function that equals 1 if the condition inside is true and 0 otherwise. A higher Hits@10 value reflects the model’s ability to include the correct candidate within the top 10 ranked predictions. Accuracy (Acc) TheAccuracy (Acc) metric is used to evaluate performance on Knowledge Graph Question Answering (KGQA) tasks. For a dataset of queries, let 1[q]indicate whether the predicted answer for question qmatches the ground truth. Accuracy is computed as: Acc=1 |Q|X q∈Q1[q],
E Full Results

The full, per-dataset results (MRR and Hits@10) of the zero-shot inference of the pre-trained MERRY model, the pre-trained ULTRA model, and the best reported supervised SOTA baselines are presented in Table 6. The detailed results from Table 1 are presented in Table 6, which also includes the outcomes for the two ILPC datasets and IndER(NL-0) that are not covered in Wang et al. (2024).

Group | Dataset | Training Graph (Entities / Rels / Triples) | Validation Graph (Entities / Rels / Triples) | Test Graph (Entities / Rels / Triples) | SOTA
IndE(WN) | WN:v1 | 2746 / 9 / 5410 | 2746 / 9 / 5410 | 922 / 9 / 1618 | Zhu et al. (2021)
IndE(WN) | WN:v2 | 6954 / 10 / 15262 | 6954 / 10 / 15262 | 2757 / 10 / 4011 | Zhu et al. (2021)
IndE(WN) | WN:v3 | 12078 / 11 / 25901 | 12078 / 11 / 25901 | 5084 / 11 / 6327 | Zhu et al. (2021)
IndE(WN) | WN:v4 | 3861 / 9 / 7940 | 3861 / 9 / 7940 | 12334 / 9 / 7084 | Zhu et al. (2023)
IndE(FB) | FB:v1 | 1594 / 180 / 4245 | 1594 / 180 / 4245 | 1093 / 180 / 1993 | Zhu et al. (2023)
IndE(FB) | FB:v2 | 2608 / 200 / 9739 | 2608 / 200 / 9739 | 1660 / 200 / 4145 | Zhu et al. (2021)
IndE(FB) | FB:v3 | 3668 / 215 / 17986 | 3668 / 215 / 17986 | 2501 / 215 / 7406 | Zhu et al. (2021)
IndE(FB) | FB:v4 | 4707 / 219 / 27203 | 4707 / 219 / 27203 | 3352 / 219 / 11714 | Zhu et al. (2023)
IndE(NL) | NL:v1 | 3103 / 14 / 4687 | 3103 / 14 / 4687 | 833 / 14 / 833 | Zhang and Yao (2022)
IndE(NL) | NL:v2 | 2564 / 88 / 8219 | 2564 / 88 / 8219 | 2086 / 88 / 4586 | Zhang and Yao (2022)
IndE(NL) | NL:v3 | 4647 / 142 / 16393 | 4647 / 142 / 16393 | 3566 / 142 / 8048 | Zhang and Yao (2022)
IndE(NL) | NL:v4 | 2092 / 76 / 7546 | 2092 / 76 / 7546 | 2795 / 76 / 7073 | Zhang and Yao (2022)
IndE(ILPC) | ILPC:small | 10230 / 48 / 78616 | 6653 / 48 / 2908 | 6653 / 48 / 2902 | Galkin et al. (2022b)
IndE(ILPC) | ILPC:large | 46626 / 65 / 202446 | 29246 / 65 / 10179 | 29246 / 65 / 10184 | Galkin et al. (2022b)
IndER(FB) | FB-25 | 5190 / 163 / 91571 | 4097 / 216 / 17147 | 5716 / 4097 / 17147 | Lee et al. (2023)
IndER(FB) | FB-50 | 5190 / 153 / 85375 | 4445 / 205 / 11636 | 3879 / 4445 / 11636 | Lee et al. (2023)
IndER(FB) | FB-75 | 4659 / 134 / 62809 | 2792 / 186 / 9316 | 3106 / 2792 / 9316 | Lee et al. (2023)
IndER(FB) | FB-100 | 4659 / 134 / 62809 | 2624 / 77 / 6987 | 2329 / 2624 / 6987 | Lee et al. (2023)
IndER(WK) | WK-25 | 12659 / 47 / 41873 | 3228 / 74 / 3391 | 1310 / 3228 / 3391 | Lee et al. (2023)
IndER(WK) | WK-50 | 12022 / 72 / 82481 | 9328 / 93 / 9672 | 3224 / 9328 / 9672 | Lee et al. (2023)
IndER(WK) | WK-75 | 6853 / 52 / 28741 | 2722 / 65 / 3430 | 1143 / 2722 / 3430 | Lee et al. (2023)
IndER(WK) | WK-100 | 9784 / 67 / 49875 | 12136 / 97 / 13487 | 4496 / 12136 / 13487 | Lee et al. (2023)
IndER(NL) | NL-0 | 1814 / 134 / 7796 | 2026 / 112 / 2287 | 2026 / 112 / 2287 | Lee et al. (2023)
IndER(NL) | NL-25 | 4396 / 106 / 17578 | 2230 / 146 / 2230 | 743 / 2230 / 2230 | Lee et al. (2023)
IndER(NL) | NL-50 | 4396 / 106 / 17578 | 2335 / 119 / 2576 | 859 / 2335 / 2576 | Lee et al. (2023)
IndER(NL) | NL-75 | 2607 / 96 / 11058 | 1578 / 116 / 1818 | 607 / 1606 / 1818 | Lee et al. (2023)
IndER(NL) | NL-100 | 1258 / 55 / 7832 | 1709 / 53 / 2378 | 793 / 1709 / 2378 | Lee et al. (2023)
Table 5: Inductive KG datasets used in the experiments. "Triples" refers to the number of edges in the graph used for training, validation, or testing. "Valid" and "Test" refer to the triples that need to be predicted in the validation and test sets, respectively, within the corresponding graphs.

Group | Dataset | Supervised SOTA (MRR / Hits@10) | ULTRA(3g) (MRR / Hits@10) | MERRY (MRR / Hits@10)
IndE(WN) | WN:v1 | 0.741 / 0.826 | 0.593 / 0.779 | 0.635 / 0.795
IndE(WN) | WN:v2 | 0.704 / 0.798 | 0.620 / 0.752 | 0.654 / 0.783
IndE(WN) | WN:v3 | 0.452 / 0.568 | 0.371 / 0.494 | 0.397 / 0.526
IndE(WN) | WN:v4 | 0.661 / 0.743 | 0.484 / 0.687 | 0.562 / 0.710
IndE(FB) | FB:v1 | 0.457 / 0.589 | 0.486 / 0.657 | 0.478 / 0.628
IndE(FB) | FB:v2 | 0.51 / 0.672 | 0.501 / 0.694 | 0.503 / 0.694
IndE(FB) | FB:v3 | 0.476 / 0.637 | 0.482 / 0.644 | 0.478 / 0.636
IndE(FB) | FB:v4 | 0.466 / 0.645 | 0.477 / 0.671 | 0.484 / 0.688
IndE(NL) | NL:v1 | 0.637 / 0.866 | 0.716 / 0.861 | 0.643 / 0.892
IndE(NL) | NL:v2 | 0.419 / 0.601 | 0.525 / 0.719 | 0.558 / 0.753
IndE(NL) | NL:v3 | 0.436 / 0.594 | 0.511 / 0.687 | 0.564 / 0.730
IndE(NL) | NL:v4 | 0.363 / 0.556 | 0.490 / 0.701 | 0.498 / 0.691
IndE(ILPC) | ILPC:small | 0.130 / 0.251 | 0.302 / 0.443 | 0.335 / 0.472
IndE(ILPC) | ILPC:large | 0.070 / 0.146 | 0.290 / 0.424 | 0.302 / 0.437
IndER(FB) | FB-25 | 0.133 / 0.271 | 0.383 / 0.633 | 0.363 / 0.616
IndER(FB) | FB-50 | 0.117 / 0.218 | 0.330 / 0.536 | 0.330 / 0.540
IndER(FB) | FB-75 | 0.189 / 0.325 | 0.391 / 0.594 | 0.377 / 0.574
IndER(FB) | FB-100 | 0.223 / 0.371 | 0.438 / 0.631 | 0.443 / 0.638
IndER(WK) | WK-25 | 0.186 / 0.309 | 0.307 / 0.507 | 0.293 / 0.487
IndER(WK) | WK-50 | 0.068 / 0.135 | 0.158 / 0.296 | 0.216 / 0.402
IndER(WK) | WK-75 | 0.247 / 0.362 | 0.373 / 0.519 | 0.401 / 0.531
IndER(WK) | WK-100 | 0.107 / 0.169 | 0.178 / 0.289 | 0.220 / 0.360
IndER(NL) | NL-0 | 0.269 / 0.431 | 0.342 / 0.523 | 0.351 / 0.536
IndER(NL) | NL-25 | 0.334 / 0.501 | 0.387 / 0.538 | 0.406 / 0.601
IndER(NL) | NL-50 | 0.281 / 0.453 | 0.398 / 0.549 | 0.376 / 0.530
IndER(NL) | NL-75 | 0.261 / 0.464 | 0.348 / 0.527 | 0.344 / 0.550
IndER(NL) | NL-100 | 0.309 / 0.506 | 0.442 / 0.631 | 0.462 / 0.666

Table 6: The full results (MRR and Hits@10) of MERRY, ULTRA, and the best-reported Supervised SOTA are presented across 27 datasets, highlighting their performance under both zero-shot inference and fine-tuning scenarios.
Subspecialty-SpecificFoundationModelforIntelligent GastrointestinalPathology LianghuiZhu1#,XitongLing1#,MinxiOuyang1#,XiaopingLiu2#,MingxiFu1,TianGuan1,Fanglei Fu1,XuanyuWang2,MaomaoZeng3,MingxiZhu1,YiboJin4,LimingLiu5,SongDuan6,Qiming He1,YizhiWang1,LuxiXie7*,HouqiangLi8*,YonghongHe1*,SufangTian2* 1.InstituteofBiopharmaceuticalandHealthEngineering,TsinghuaShenzhenInternational GraduateSchool,Shenzhen,Guangdong,China. 2.DepartmentofPathology,ZhongnanHospitalofWuhanUniversity,Wuhan,HubeiProvince, China. 3.ShenzhenZhengjingdaInstrumentCo.,Ltd.,Shenzhen,Guangdong,China. 4.SchoolofForeignStudies,GuangzhouUniversity,Guangzhou,Guangdong510006,China 5.DepartmentofPathology,theSecondAffiliatedHospitalofSouthernUniversityofScienceand Technology,Shenzhen,Guangdong,China. 6.DepartmentofPathology,ChongqingUniversityAffiliatedThreeGorgesHospital,Chongqing, China. 7.DepartmentofPathology,LiuzhouPeople'sHospitalAffiliatedtoGuangxiMedicalUniversity, Liuzhou,Guangxi,China. 8.DepartmentofPathology,FuzhouUniversityAffiliatedFujianProvincialHospital,Fuzhou, Fujian,China. *:Correspondingauthor. #:Theseauthorscontributeequally. Abstract Gastrointestinal(GI)diseasesrepresentaclinicallysignificantburden,necessitating precisediagnosticapproachestooptimizepatientoutcomes.Conventional histopathologicaldiagnosis,heavilyreliantonthesubjectiveinterpretationof pathologists,suffersfromlimitedreproducibilityanddiagnosticvariability.To overcometheselimitationsandaddressthelackofpathology-specificfoundation modelsforGIdiseases,wedevelopDigepath,aspecializedfoundationmodelforGI pathology.Ourframeworkintroducesadual-phaseiterativeoptimizationstrategy combiningpretrainingwithfine-screening,specificallydesignedtoaddressthe detectionofsparselydistributedlesionareasinwhole-slideimages.Digepathis pretrainedonmorethan353millionimagepatchesfromover200,000hematoxylin andeosin-stainedslidesofGIdiseases.Itattainsstate-of-the-artperformanceon33 outof34tasksrelatedtoGIpathology,includingpathologicaldiagnosis,molecular prediction,genemutationprediction,andprognosisevaluation,particularlyin diagnosticallyambiguouscasesandresolution-agnostictissueclassification.We furthertranslatetheintelligentscreeningmoduleforearlyGIcancerandachieve near-perfect99.6%sensitivityacross9independentmedicalinstitutionsnationwide. TheoutstandingperformanceofDigepathhighlightsitspotentialtobridgecritical gapsinhistopathologicalpractice.ThisworknotonlyadvancesAI-drivenprecision pathologyforGIdiseasesbutalsoestablishesatransferableparadigmforother pathologysubspecialties. Keywords:Gastrointestinaldiseases,Foundationmodel,Computationalpathology, Earlycancerscreening Introduction TheglobalburdenofGIdiseasehasbeensteadilyincreasing,withparticularly notablerisesintheincidenceofmalignanttumorssuchasgastricandcolorectal cancers[1-3].AccordingtoWorldHealthOrganization(WHO)statistics,GIcancers rankamongthemostprevalentmalignanciesworldwide,withmortalityrates remainingalarminglyhighinmanyregions[4,5].EarlydetectionandtreatmentofGI tumorsarecriticalforimprovingpatientsurvivalratesandclinicaloutcomes[6-9]. 
Advancesinmedicalimagingandpathologyhaveenhancedtheimportanceofearly screeningandprecisediagnosisforGIdiseases[10-12].Clinically,biopsiesareroutinely performedtodeterminethenatureoflesions,suchasidentifyinggastriculcerswith malignantpotential.Thesespecimensalsoenabletheassessmentoftissuepathology, includinginflammationgradingandtumorsubtyping,whichguidestherapeutic decision-making[13,14].Endoscopicsubmucosaldissection(ESD)isindicatedfor precancerouslesionsandearly-stagecancers,whilesurgicalresectionremainsthe standardapproachforadvancedorundifferentiatedtumors[15-18].Postoperative pathologicalevaluationprovidescomprehensivetumorcharacterization,including histologicaltype,tumordimensions,invasiondepth,pathologicalstaging,and molecularprofile[19-21].Thesecriticalparametersfacilitateaccurateassessmentof treatmentefficacyandreliablepredictionofpatientprognosis[22,23].Nevertheless, traditionalhistopathologicaldiagnosisreliesheavilyonpathologists’expertise, makingitsusceptibletosubjectivity,particularlyinlarge-scaleearlycancerscreening taskswhichrequirespathologiststometiculouslyevaluatenumerousspecimensto detecttherelativelyrarecasesofearly-stagemalignancies[24-26]. Theemergenceofartificialintelligence(AI)presentstransformativepotentialfor pathologicaldiagnosis.Itholdssignificantpromiseforbothalleviatingpathologists' workloadandenhancingdiagnosticprecision[27-29].Recently,theadventoffoundation modelshassubstantiallyincreasedthepotentialforAIdeploymentinclinical pathologypractice.Throughself-supervisedlearningonmillionsofwhole-slide images(WSIs),researchershavetrainedvisiontransformer(ViT)withhundredsof millionsofparameters.Thesefoundationmodelshavedemonstratedexceptional versatilityacrossmultipledownstreamtasks,includingtissuetypeclassification, tumorsegmentation,genomicmutationprediction,andprognosticanalysis[30-34]. Remarkably,UNI,pretrainedon100millionhematoxylinandeosin(H&E)-stained imagesencompassingover20tissuetypes,achievedsuperiorperformanceacross34 diagnostictasks[35].Gigapathestablishedahierarchicalfeatureencodingarchitecture formulti-scalepathologicalfeaturerepresentation,achievingstate-of-the-art performancein25outof26benchmarkcancerdiagnostictasksafterpretrainingon 1.3billionWSIs[36].Subsequently,TITANpioneeredvision-languagealignment paradigmforslide-levelrepresentationlearning,incorporatingregionofinterest(ROI) duringitsfirststagetraining,therebysubstantiallyenhancingthefeatureembedding powerforWSI[37]. Thesestudiesconfirmthatfoundationmodelspretrainedonlarge-scalepathological datasetssignificantlyoutperformconventionalmodels(e.g.,ImageNet-pretrainedand TCGA-derivedarchitectures)inaccuracy(ACC),sensitivity,andgeneralizability. Despitetheseadvances,theirperformancestillrequiresfurtherenhancementfor reliableclinicaladoptionacrossvariousspecializeddiagnostictasks.Current foundationmodelspredominantlytrainonpathologyimagesatasinglemagnification level,whereasreal-worldclinicalpracticenecessitatesmulti-scaleanalysis[38-40].For instance,gradingofatrophyandintestinalmetaplasiaistypicallyperformedatlow magnification(e.g.,5×),whileassessmentofacuteinflammatoryactivityand helicobacterpyloridetectionrequireshigh-powerexamination(e.g.,20×). 
Moreover,forAI-basedearlycancerscreeningtobeclinicallyavailable,itmust achievenear-perfectsensitivity,minimizingfalsenegativeswithoutcompromisingan acceptablefalse-positiverate[41].Whilefine-tuningfeaturestrategiesforspecific downstreamtaskshavebeenproposedtoenhancemodelperformance[40-45],such approachesoftencompromisesthemodel'sgeneralizability.InpracticalGIpathology workflows,wherediversediagnostictaskscoexist,maintainingmultipletask-specific featureextractorswould:(1)introduceprohibitivecomputationaloverheadduring task-switching,(2)necessitatecostlyhardwareinfrastructuretostoreandrunparallel large-scalefeatureencoders,and(3)offernoguaranteeofmodelgeneralizability. Toaddresstheselimitations,weproposeatwo-stagepretrainingframeworkforaGI pathology-specializedfoundationmodel(Digepath,asshowninFig.1).InPhaseI,a ViT-basedencoderpretrainedon210,043multi-scale(2.5×,5×,10×,and20×)WSIs usingself-supervisedlearningapproachtocapturegastricdomain-specificfeatures. Unlikecurrentpathologyfoundationmodelspretrainedonpan-tissuedatasets[34-36], DigepathwaspretrainedonGIpathologyimageswithvaryingscales,whichenhanced captureoffine-grainedpathologicalfeaturesoftenmissedbygeneral-purpose models.PhaseIIdevelopedaROIminingalgorithmtofullyutilizediagnostically valuableareasinWSI,establishingaclosed-loopenhancementmechanismof"feature optimization-datarefinement"toboosttask-specificdiagnosticACC.Through34 clinicallyrelevantdownstreamtaskscomparingmultiplefoundationmodels,we validatedDigepath'ssuperiorperformanceinhistopathologicaldiagnosis,molecular andgeneticmutationprediction,prognosticassessment,andresolution-agnostictissue classification.Theearlycancerscreeningmodulewassuccessfullytranslatedinto clinicalpracticeandevaluatedina9-centertrial,demonstratinghighsensitivity (99.6%)andspecificity(85%).Wealsoproposedanend-to-endpipelineoftheagent systemforGIpathology,whichleveragestheDigeToolslibrarytosequentially performfeatureextraction,cancerdetection,subtypeidentification,andROI-level reportdeliverythroughmulti-turndialogue.TheGIspecialtyfoundationmodel establishesanewparadigmandexhibitsremarkablepotentialinpathological diagnosis. Results 1.Dual-phasepretraining Wedevelopedatwo-stageself-supervisedframeworkforGIpathologyanalysis.First, aViT-Lmodel(Digepath-V1)waspretrainedusingDINOv2on210,043WSIs.Next, expertpathologists(withmorethan10years’clinicalexperience)annotated471,445 diagnosticregions(2,048×2,048pixelsat0.42μm/pixel)from26,320WSIstotrain atumorclassifier.Thisclassifierprocessedtheoriginaldatasettoidentify5million tumorregions,subdividedinto50millionpatcheswithsizeof256×256.Anequal numberofnon-tumorpatcheswererandomlysampledtocreateabalanced 100-million-patchdatasetforfine-tuning,yieldingtheenhancedDigepath-V2model. Infourdiagnostictasks,itoutperformedDigepath-V1by3.81%(ESO-AS:3.58%, ESO-2cls:3.43%,R-X:1.51,andLHN-3cls:6.73%),asdemonstratedinFig.2a. Digepath-V2demonstratedanaverageimprovementof4.18%overDigepath-V1in TNMstagingtasks(4.78%forgastricTNMand3.58%forintestinalTNM).For PD-L1mutationprediction,Digepath-V2achieveda3.00%higheraccuracythan Digepath-V1,whiletheimprovementwas0.49%forMSIstatusprediction.These resultsvalidatetheeffectivenessoftwo-stagepretraining. 
2.Multiscalepretrainingenablesrobustrepresentation Unlikeconventionalpathologymodelslimitedtosinglemagnifications,Digepath introducesaclinicallyinspiredmultiscalepretrainingframeworkspanningfour diagnosticresolutions(2.5×,5×,10×,and20×).Thisdesignovercomesafundamental limitationindigitalpathology,wherestandard224×224pixelsevaluationprotocols compromisemorphologicalinterpretationofcriticalfeatureslikenuclearatypiaand tissuearchitecture.Systematicevaluationusingself-builtSTLCdataset(11-class classificationacross224×224to1,120×1,120)revealedDigepath’suniquescale adaptability.Performancepeakedat672×672(ACC:95.46%)withonly0.91% variationacross5timesscalechanges(Gigapath:2.02%),ultimatelyachievingan ACCof96.31%throughintegratedmultiscalepredictions,asdemonstratedinFig.2b. InSTLC,downsamplingto224 × 224obscuredcriticalfine-graineddetailsof high-gradeintraepithelialneoplasia,suchastheenlargednuclei,coarsechromatin, andlossofcellularpolarity,thatremaineddetectablebyDigepathatlowresolutions (Fig.2f).TheseobservationsdemonstratethatDigepathencodessemantically meaningfulrepresentationsthatarelargelyinvarianttoimageresolution,acapability ofmatchingpathologists’multi-scalediagnosticworkflow. Figure1|OverviewofDigepath.Digepathisavisualfoundationmodelpretrained intwostagesviatheDINO-V2frameworkon210,043GIpathologyWSIs.a. DistributionofGIdatasetsincludedinDigepathpretraining.b.Two-stagepretraining scheme:InPhaseI,aViT-basedencoderwaspretrainedonmulti-scale(2.5×,5×,10×, and20×)WSIsusingself-supervisedlearningapproachtocapturegastric domain-specificfeatures.PhaseIIdevelopedaregion-of-interestminingalgorithmto fullyutilizediagnosticallyvaluableareasinwhole-slideimages,establishinga closed-loopenhancementmechanismoffeatureoptimization-datarefinement(see Methods).c.Digepathdemonstratesstate-of-the-artperformanceacrossa comprehensivebenchmarkof33downstreamGIpathologytasks.d.Engineering implementationoftheearly-cancerscreeningmodule.e.Spectrumofclinical diagnosesinGIpathologyassociatedwithdownstreamtasks. 3.Routineclinicopathologicaldiagnosis AlignedwiththeWHOClassificationofDigestiveSystemTumours,thisstudy establishedacomprehensivevalidationframeworkspanningthreeanatomicalregions (esophageal,gastric,andintestinal)across24clinicaltasks,incorporatingboth classificationandsegmentationatregion-of-interest(ROI)andwhole-slideimages (WSIs).Digepathdemonstratedsuperiordiagnosticperformanceacrossallevaluation metrics(seeFig.2,Fig.3,ExtendedDataFig.1). 3.1Stomach WeusedMILmethodforautomatedpathologicalgradingofnon-neoplasticGI lesionsinaccordancewithWHOdiagnosticcriteria.Digepathdemonstratedgood performanceacrossthreecriticaldiagnosticcategories:gradingassessmentofchronic gastritis(Digepath:94.67%,Gigapath:92.84%),acuteinflammatoryactivity (Digepath:88.31%,Gigapath:87.86%),andintestinalmetaplasia(Digepath:76.44%, Gigapath:75.03%).Additionally,weevaluatedgastricepithelialatrophybasedon histomorphologyandachievedanACCof86.06%(Gigapath:85.79%).Additional comparativeresultsandvisualizationdetailsareprovidedinFig.2cande. Fortheclassificationofgastricepithelialtumorsandhyperplasticlesions,whichisthe mostanatomicallycomplexdomainwith11distinctsubtypes,Digepathachieved remarkableaccuracyof93.21%,surpassingthesecond-bestmethod(Gigapath: 90.26%)bynearly3%.AttentionheatmapsofROIareshowninExtendedDataFig.1. TheseresultshighlightDigepath'sdualcapabilityinbothnon-neoplasticlesions evaluationandneoplasticlesionsdifferentiation. 
3.2Intestine OntheCRC-100K10-classcolorectaltissueclassificationtask,Digepathachieved state-of-the-artACCof95.24%,outperformingthepreviousbestmethod(Gigapath: 94.97%)by0.27%.Forintestinalpolypclassificationevaluatedonthe UNITOPATHOdataset,Digepathattained85.92%ACC,representingasignificant 4.45%improvementoverUNI(81.47%).Theadvantagepersistedinadenoma identification(CAMELdataset),whereourmodelachieved92.36%accuracyversus UNI's91.48%.Notably,intheclinicallycriticalWSI-level5-classclassificationof colorectalepithelialtumorsandhyperplasticlesions,Digepathmaintainedrobust performance(80.73%),consistentlysurpassingallcompetingmethods(seeFig.3a). TheseresultscollectivelyestablishDigepathasanewbenchmarkincomputational pathologyforcolorectaltissueanalysis. Figure2|Two-stagepretrainingofDigepathanditsperformanceongastric diagnosis. a.ComparisonofDigepathperformanceafterthefirstandsecondpretrainingstages. b.Multi-resolutionevaluationonan11-classgastrictissueclassificationtask(n = 687ROIs)acrossvariousmethods.c.Comparativeperformanceofdifferentencoders on11clinicaldiagnostictasksingastricpathology.d.t-SNEvisualizationof DigepathfeaturesintheSTLC11-classgastricclassificationtask.e.Attention heatmapsforgastritisandacuteactivitygradingtasks.f.Attentionheatmapsof DigepathatdifferentresolutionsintheSTLC11-classtask.g.Visualizationsof Digepathoutputsonintestinalized/non-intestinalizedglandsegmentationandESD tumorregionsegmentationtasks. 3.3Esophagus Digepathalsodemonstratedexcellentdiagnosticcapabilityintheanalysisof esophagealpathologies(Fig.3aandFig.3c).Inthe4-classclassificationofepithelial tumorsandhyperplasticlesionsatWSI-level,Digepathachieved80.41%ACC, representingasubstantial2.88%improvementoverthesecond-bestapproach(UNI: 77.53%). Whenevaluatedonanothercommonclinicaldiagnostictaskofdistinguishing betweenkeratinizingandnon-keratinizingsquamoustumors,Digepathattained 78.75%classificationACC,exceedingthenearestcompetitor(UNI:75.18%)by 3.57%,demonstratingsuperiorhistopathologicaldiscernment. 3.4Earlycancerscreening Inearlycancerscreeningtasksacrossthreemajoranatomicalsites(stomach, colorectum,andesophagus),Digepathdemonstratedsuperiorperformancedespite highbaselineACCamongallevaluatedmodels.Forgastriccancerscreeningusing ROI,DigepathachievedanACCof99.01%,surpassingthesecond-bestmodelby 0.23%(UNI,98.78%).Similarly,inintestinalcancerscreening(ROI-based),it attained99.78%ACC,exceedingthenearestcompetitorby0.17%(UNI,99.61%). Notably,inesophagealcancerscreening,whichisprocessedinWSI,Digepath achieved99.63%ACC,outperformingtherunner-upbyasignificantmarginof1.96% (Gigapath,97.67%).DetailscouldbeavailableinFig.2c,Fig.3aandExtendedData Fig.2. 
3.5Segmentationtask Accuratetumorsegmentationservesasafundamentalpillarofmoderncomputational pathology,enablingquantitativehistopathologicalanalysis.Ourstudyadvancesthis fieldthroughthreeclinicallyrelevantbenchmarks.First,wepresentanenhanced TransUnetframework[46]incorporatingnovelencoderarchitectureswithpathological foundationmodels.WhenevaluatedontheCRAGdatasetforcolorectalmucosal glandsegmentation,theDigepathencoderachievedstate-of-the-artperformance(IoU: 71.69%vs.Gigapath's68.74%,Fig.3aandExtendedDataFig.3c).Wefurther segmentintestinalmetaplasticglandsandnon-metaplasticglandstoestablisha quantitativebasisformetaplasiagrading.DigepathattainedIoUof65.99% (surpassingGigapathby1.10%,Fig.2candExtendedDataFig.3a).Moreover,to addresstheunmetneedforprecisetumormargindelineationinESD,wecurateda clinically-annotateddatasetof200WSIs.Digepathdemonstratedsuperior segmentationaccuracy(IoU:75.08%)overexistingsolutions(Gigapath:74.43%)and detailscouldbeavailableinFig.2candExtendedDataFig.3b. 3.6TNMstaging Weevaluatedpathologicalstagingperformanceusing1–3representativetumor sectionsfromsurgicallyresectedgastricandintestinalspecimens.TheAmecican JointCommitteeonCancer(AJCC)stagingsystem,whichincorporatestumor histotype,invasiondepth,lymphnodeinvolvement,anddistantmetastasis,servedas ourreferencestandard[47,48].Notably,ourapproachreliedsolelyonhematoxylinand eosin(H&E)-stainedtumorsectionsfordirectstagingprediction,withoutancillary clinicalorimagingdata.DigepathdemonstratedsuperiorstagingACCcomparedto competingmodels:68.46%versus64.97%(Gigapath)forgastriccancer(Δ=3.49%, Fig.2c)and64.24%versus63.68%forintestinalcancer(Δ=0.56%,Fig.3a).While conventionalstagingrequireslymphnodeexaminationandmetastaticworkup,our resultsindicatethatH&E-basedassessmentofprimarytumorsalonecanprovide pathologicalstaginginformation. Figure3|PerformanceofDigepathonintestinalandesophagealdiagnostictasks. a.Comparativeperformanceofvariousmethodsacross12clinicaldiagnostictasksin intestinalandesophagealpathology.b.ROI-basedearly-cancerscreeninginthe intestine:pathologist-annotatedtumorregions(topleft)vsmodelpredictions(bottom left);orangeboxesdenotepredictedtumorregionsandwhiteboxesdenotepredicted non-tumorregions.Insets(right)correspondtotheredandblueboxesinthetop-left panel,showingpathologist-annotatedtumorandnon-tumorregions.c. Attention-basedvisualizationsforesophagealcarcinomaprediction:toprowshows keratinizingsquamouscellcarcinomaprediction;bottomrowshowsvisualizationsfor esophagealcarcinomaprediction.d.Few-shotperformancecomparison (K = 1,2,4,8,16,32,64,128,256)ofmultiplemodelsonUNITOPATHO.e. AttentionheatmapsofDigepathatvariousresolutionsintheintestinalclassification task. 3.7Challengingpathologicaldiagnoses Thehistopathologicaldistinctionbetweenpoorlydifferentiatedsquamouscell carcinomaandpoorlydifferentiatedadenocarcinomainupperGIspecimensremainsa formidablediagnosticdilemma.Thesehigh-grademalignanciesposeparticular diagnosticdifficultiesastheyprogressivelylosetheirdefiningmorphological characteristics.Thismorphologicalambiguityroutinelynecessitatesancillary immunohistochemicalstudiesfordefinitiveclassificationinclinicalpractice.Notably, DigepathdemonstratedexceptionaldiagnosticcapabilitysolelybasedonH&Estained sections.Thealgorithmachieved99.16%classificationACCindiscriminatingthese challengingsubtypes,with0.64%improvementoverexistingmethods(Fig.3a, ExtendedDataFig.4a,andExtendedDataFig.4b).Thisresultindicatesithas potentialtoreducetheneedforsupplementarystainsinroutinediagnosis. 
Histopathologicaldifferentiationamongreactivehyperplasia,intestinalmetaplasia, andintraepithelialneoplasiaingastricbiopsiesremainsasignificantdiagnostic challenge.Inthisstudy,weanalyzed384gastricbiopsycasesfromfivemedical centers.Ourdiagnosticmodeldemonstrated88.52%ACCinthethreeclassification task(lowgradeneoplastic,highgradeneoplasticandnon-tumor),outperformingthe next-bestmethodby0.88%(Fig.2c). Distinguishingxanthomafromsignet-ringcellcarcinomainGIbiopsiesposes recurrentdiagnosticdifficulties,especiallyamongjuniorpathologists.Wecurateda datasetof400casesincludingxanthomaandsignet-ringcellcarcinoma.Themodel achievedanear-perfectdiscrimination(ACC:99.22%),representing1.33% improvementoverthesecondbestmethod(Conch-V1.5:97.89%,Fig.2c). 4.Genomicalterations,molecularprofiling,andprognostic predictionoutcomes Tumorgenomicprofilingguidestherapeuticdecision-makingandisindispensablefor precisiononcology.Thediscoveryofhistomorphologicalsignaturesinroutine H&E-stainedslidesthatmirrormolecularalterationsoffersascalable,cost-efficient paradigmtocomplementgenomictesting.Wepresentacomprehensiveevaluationof Digepath’sabilitytoinfermolecularphenotypesdirectlyfromhistopathologyimages, addressingthreecriticalclinicalneeds,whicharepredictionofrecurrentoncogenic mutationsinGImalignancies,identificationoftherapeuticallyactionablegenomic variants,anddeterminationofmicrosatelliteinstability(MSI)status. 4.1PredictionofrecurrentgenomicalterationsinGIcancers Weconductedananalysiscorrelatingprevalentsomaticmutationswith histopathologicalpatterns.Acrossthefourgenomicalterationpredictiontasks,three achievedAUROCvaluesexceeding60%(Fig.4).Histopathologicalimagesexhibited somemorphologicalsignalsassociatedwithMUC16andTTNmutationson TCGA-COAD,andNRASmutationonTCGA-READ,achievingthebestAUROCs of65.61%,62.37%,and60.55%,respectively(Fig.4d). 4.2Predictionofclinicallyactionablegenomicalterations Digepathdemonstratedrobustpredictivecapabilityforthreetherapeuticallytargetable genomicalterationsapprovedbytheUSFoodandDrugAdministration(FDA),with allperformancemetricsexceedinganAUROCof0.7(Fig.4a).ForPD-L1status prediction,themodelachievedanAUROCof74.83%,representinga1.69% improvementoverGigapath(76.19%AUROC).InHER2amplificationdetection, Digepathshowedstrongpredictivevalue(AUROC=83.27%),outperformingthe Gigapathbenchmarkby8.2%.ThesystemalsoexhibitedhighdiagnosticAUROCfor TP53mutationidentification(AUROC=72.23%),surpassingGigapath's performanceby2.4%. MSIrepresentsawell-establishedbiomarkerforpredictingresponsetoimmune checkpointblockadetherapyincolorectalcancer.Inthisstudy,weevaluated Digepath'sperformanceinpredictingMSIstatusdirectlyfromhistopathological images.ThemodelachievedanAUROCof88.41%ontheself-builtdataset, exceedingGigapath(87.58%)by0.83%(Fig.4a).Attentionanalysisrevealedthat regionscontainingsolidtumorcomponents,luminalnecrosis,andtumor-infiltrating lymphocytesreceivedhighmodelattention(Fig.4e). Figure4|MolecularandprognosticpredictionusingDigepath.a.Molecular markerpredictionperformance(HER2,PD-L1,P53,MSI)acrossfourin-house datasets(n =80WSIs,n=151WSIs,n=142WSIs,n=194WSIs ).b. Kaplan–MeiersurvivalanalysispredictionsonTCGAcohorts:TCGA-COAD(top, n = 82WSI)andTCGA-READ(bottom,n =31WSIs).c.Comparisonof concordance-index(C-index)forsurvivalpredictionacrossmethodson TCGA-COADandTCGA-READ.d.ComparativemutationspredictionofTNN, NARAS,andMUC16onTCGA-COADandTCGA-READ(n = 81WSIs,n= 26WSIs).e.AttentionheatmapsofDigepathforMSIstatusprediction. 
4.3PrognosticstratificationinGIoncology Currentprognosticparadigms,incorporatingbothclinicopathologicalandmolecular biomarkers,demonstratelimitedcapacityforpreciseindividualizedoutcome prediction.Inthisstudy,weimplementedanaugmentedDigepatharchitectureto generatehistomorphology-basedsurvivalmodelswithpubliclyaccessiblecohorts. Digepathachievedstatisticallyrobustdiscriminationbetweenfavorable(long-term) andpoor(short-term)survivalsubgroupsforTCGA-COAD(log-ranktestP<0.001), significantlyoutperformingexistingapproacheswithaconcordanceindexof71.82% (Fig.4bandFig.4c),representingrelativeimprovementsof3.80%overConch-V1.5 (68.02%).Themodel'ssuperiorprecisionwasfurtherevidencedbysignificantly tighterconfidenceintervalsinKaplan-Meieranalyses(P<0.01forCIwidth comparisons;ExtendedDataFig.5),indicatingenhancedprognosticreliability. Systematicanalysisidentifieddistincthistopathologicalsignaturesassociatedwith clinicaloutcomesacrossmalignancies.Patientswithfavorablelong-termsurvival demonstratedhigh-attentionregionsenrichedfordenselymphocyticinfiltrates (ExtendedDataFig.6)andpreservedtissuemicroarchitecture,accompaniedbyonly moderatenuclearatypia. 5.Clinicaltranslationandimplementation Ingastrointestinalendoscopicbiopsyscreening,pathologistsarerequiredtoidentifya smallnumberofearly-stagecancercasesfromalargevolumeofdigestivetract biopsyspecimensinordertoguidetimelytreatment.Thisprocessislabor-intensive andtime-consuming.Toaddressthis,wedevelopedanearlygastriccancerscreening modulebasedontheDigepathmodelandconductedamulti-centervalidationstudy across9hospitalsinChina.Theparticipatinginstitutionsspannedgeographically diverseregions(eastern,southern,western,andnorthernChina)andrepresented varyingtiersofhealthcarefacilities,includingprovincialcenters,andcounty-level hospitals.FollowingthefiftheditionoftheWHOclassificationofdigestivesystem, tumors,low-gradeintraepithelialneoplasia,high-gradeintraepithelialneoplasia,and malignanttumorswerelabeledaspositivecases,whereasbenignpolyps,chronic gastritis,andintestinalmetaplasiawereconsiderednegative.Approximately1,000 biopsyslideswereevaluatedateachsite,totaling11,224slides,amongwhich657 wereclassifiedaspositive(Fig.5h). Thislarge-scaleprospectiveevaluationvalidatedthefeasibilityandbroad applicabilityoftheDigepathmodelforearlygastriccancerscreening,providing strongsupportforimprovingdiagnosticefficiency.Thescreeningmoduleachievedan overallaccuracyof89.99%,withasensitivityof99.70%andspecificityof89.30%. 
Sensitivityreached100%in7ofthe9hospitals,andspecificityexceeded90%in5 hospitals.Detailedsite-specificmetricsareshownintheFig.5g.Acrossthenine hospitals,themodulesuccessfullyidentifiedoneneuroendocrinetumor(NET),one signetringcellcarcinoma(SRCC),onehighlydifferentiatedadenocarcinoma,one poorlydifferentiatedcarcinoma,andsixcasesoflow-gradeintraepithelialneoplasia (LGIN).TheNETcase,classifiedasG1,exhibitedmildcytologicalfeaturesandwas easilyoverlooked.TheSRCCcaseinvolvedasmallfocuslocatedattheedgeofthe biopsyspecimenwithinaninflammatorybackground,whichwassimilarlyproneto misdiagnosis.Nevertheless,themodelhascorrectlyflaggedthelesion,whichwas subsequentlyconfirmedviaimmunohistochemicalstainingforCEAandCK.In addition,themodelaccuratelyidentifiedfourmoreSRCCcasesthatwerenotmissed bypathologistsbutposeddiagnosticchallengesduetotheirmorphological resemblancetohistiocytesorbecausetheyconsistedofonlyafewscatteredsignet ringcells.Thehighlydifferentiatedadenocarcinomacasedisplayedfeaturesclosely resemblingnormalgastricepithelium,anditssmallbiopsyvolumefurther complicateddiagnosis.However,themodelcorrectlylocalizedthesubtleserrated structuresindicativeofmalignancy.OneLGINcasethatwasinitiallymissedbythe modelwaslaterconfirmedaspositiveviaESDresection.Furthermore,threecases initiallydiagnosedasLGINwerereclassifiedasintestinalmetaplasiaafterexpert consensusreview,aligningwiththemodel’soriginalpredictionofnon-neoplastic.The model’stwomissedcasesincludedoneacid-secretingadenomaandoneadditional instanceofLGIN.DetailscouldbeavailableinFig.5a-f. Figure5|Early-cancerscreeningresults.a–f.Caseswherethemodelassisted pathologistsinidentifyingpreviouslymisseddiagnoses.a.signet-ringcellcarcinoma. b.low-gradeepithelialneoplasia.c.highlydifferentiatedadenocarcinoma.d.fundic glandtumor.e.neuroendocrinetumor.f.poorlydifferentiatedcarcinoma.g. Performanceoftheearlycancerscreeningmoduleacrossnineindependentcenters.h. Datadistributionfromtheninecenters. 6.Few-shotlearning WeassessedDigepath’slabelefficiencyonthedatasetsofSTLCandUNITOPATHO usingthenon-parametricSimpleShotframework—arobustbaselinewidelyadopted inthefew-shotclassificationliterature[49].Infew-shotlearning,thechoiceof'way' hasasignificantimpactontaskdifficultyandmodelperformance.Typically, increasingthenumberofwaysindicatesagreaternumberofclassestoclassify, therebyraisingthetaskdifficulty.Detailedexperimentalprotocolsandperformance resultsareprovidedintheMethodsandExtendedDataFig.7. Acrossdifferenttasksandwaysevaluation,wefoundthatDigepathisapowerful few-shotlearnerwithmarkedlyhigherlabelefficiencythanotherpretrainedencoders. Whencomparingmedianperformanceatvaryingsamplesizes,Digepathconsistently surpassesthenext-bestencoderateveryshotacrosstwotasks(Fig. 3dandExtended DataFig. 7).Weobservedthatvarianceinone-,two-,andfour-shotperformanceis relativelyhighforallencoders.However,asthenumberofsupportsamplesperclass increases,thevarianceoffew-shotrunsdeclinesmonotonically,demonstratingthe stabilityofprototypeestimationviaSimpleShot’ssampleaggregation.Remarkably, Digepath’s8-shotperformanceinbothtaskscanexceedthemaximumperformance achievedbyotherencodersover1,000trials.Meanwhile,asthenumberoftheway increases,thegapinfew-shotperformancebetweentheDigepathmodelandother modelsbecomesincreasinglyevident.ThisdemonstratesthatDigepathcanbetter leverageitssuperiorcapabilitiesoflabelefficiencyandrepresentationqualityinmore challenginggastrointestinaltractclassificationtasks. 
7.ROIretrieval ROIretrievalisimplementedthroughPrototypicalNetwork(ProtoNet).TheProtoNet firstutilizesapretrainedpathologyfoundationmodeltoconvertalltrainingimages intoembeddingvectors,thenperformsmean-poolingonembeddingsofthesame categorytoobtainprototyperepresentations[50].Webenchmarkedhistopathological imageretrievalacrosstwoROI-leveltasks,witheachtestsampleclassifiedbasedon itshighestsimilaritytoclassprototypes.Detailedexperimentalproceduresand performancemetricsareprovidedintheMethods,ExtendedDataFig. 8. Ontworetrievaltasks,Digepathconsistentlyoutperformedcompetingencoders, demonstratingsuperiorretrievalACCacrossdiversesettings.OnSTLCdataset, Digepathachieveda11.11%gainoverthenextbestencoder(Ctranspath[34],74.07% vs62.96%).OnUNITOPATHOdataset,theperformancegapnarrowed—Digepath exceededUNIby8.26%(Digepath:67.21%vsUNI:58.95%)inACC—likely reflectingthepronouncedmorphologicaldistinctions. Forbothpapillarycarcinomaandhigh-gradeintraepithelialneoplasiacategoriesinthe STLCdataset,weselectedthetop5imagesmostsimilartoeachmodel'sprototype. ResultsdemonstratedthatDigepathaccuratelyidentifiedrepresentativeimagesfor bothcategories(ExtendedDataFig.8),confirmingitssuperiorfeaturerepresentation capability.ThisfindingwasconsistentlyreplicatedintheUNITOPATHOdataset. Discussion Thisstudyrepresentsatransformativeadvancementincomputationalpathology forGIdisease,withthreefundamentalinnovationsthataddresscriticalchallengesin thefield.Firstly,weintroducetheconceptofspecialty-specificfoundationmodelsfor pathology.Currentpathologicalfoundationmodelsfacelimitationsin domain-specificdiagnosticperformance.Our"specialty-specificfoundationmodel" paradigmresolvesthelongstandingtrade-offbetweengeneralizabilityand specialization:whileretainingthetransferlearningadvantagesoffoundationmodels, ourtwo-stagepretrainingapproachondedicatedGIdatasets(>200,000WSIs) achievesclinicaldiagnosticrequirements,whichisacriticaladvancetowardclinically deployableAI.Secondly,wedevelopanoveltwo-stageprogressivetraining architecturewithmethodologicalbreakthroughs.Thefirststageemploys multi-resolutionimageanalysiscombinedwithDINOv2self-supervisedlearningon 350millionpatchesofthedigestivetracttoextractuniversalfeatures.Thesubsequent stageimplementsdynamicROIselectionwithcontrastivelearningoptimization, drivingsignificantperformanceimprovementsacrossmorethan30downstream diagnostictasksingastrointestinalpathology.Thirdly,weestablishedaclosed-loop frameworkforclinicaltranslationthatseamlesslyintegratesclinicalneeds, technologicaldevelopment,andreal-worldapplication. Onthetaskdesignside,acomprehensivevalidationframeworkwasbuilt,covering21 morphologicaldiagnostictasks,threesegmentationtasks,eightgenemutation predictiontasks,andtwosurvivalpredictiontasks.Fromanengineeringperspective,a gastricbiopsyearlycancerscreeningmodulewasdevelopedanddeployedinroutine testingacrossmultiplemedicalcenters.Thesemethodologicalandtranslational innovationseffectivelybridgethecriticalgapbetweencomputationalpathology researchandclinicalpractice.Oursystemshowsparticularpromiseforenhancing earlycancerdetectioninresource-limitedsettings. Wealsodevelopedanend-to-endpipelineofagentsystemfordigestivepathology (ExtendedDataFig.9e).TakingaWSItogetherwithausertextpromptasinputs,the systemleveragestheDigeToolslibrarytosequentiallyperformcancerdetection, subtypeidentification,andROI-levelreportdeliverythroughmulti-turndialogue. 
First,theagentactivatestheFeatureExtractionmodule,partitioningtheWSIinto patcheswiththesizeof256×256at20×magnification.ApretrainedDigepath encodergeneratesembeddingsofthesetiles,whichareimmediatelyanalyzedbythe tumordetectionmoduleforearlycancerscreening.Thesystemthenproceedsthrough sequentialdiagnosticmodules,ultimatelyidentifyingthecaseasnon-keratinizing squamouscellcarcinomawhileautomaticallygeneratingdetailedcytologicand histologicdescriptionsthroughalargelanguagemodel(LLM).Forlocalanalysis,the systemperformsROISelectionacrosstheentireslide,usingtheROIFinder pinpointingimageswithhighattentionscores.Finally,theDigeCaptionmodule producesacomprehensiveChinesediagnosticreportcombiningquantitativedatawith qualitativeinterpretation,completinganintegratedworkflowfrommacroscopic detectiontomicroscopicanalysis. Whileourattention-basedmultipleinstancelearning(ABMIL)framework demonstratesrobustperformanceinwhole-slide-levelpredictiontaskssuchas non-neoplasticlesiongradingandtumorhistologicalsubtyping,ithasinherent limitationsinmodelingthecomplexspatialrelationshipsbetweentissuepatches acrossentireslides,whichisacriticalcomponentofcomprehensivepathological
assessmentthatrequiresintegrationofbothlocalmorphologicalfeaturesandglobal architecturalpatterns.Currentcomputationalapproachesincludinggraphneural networksandvisiontransformerspresentpromisingsolutionstotheselimitations throughtheirabilitytoexplicitlyencodespatialdependenciesbetweendistanttissue regions. Clinically,ourdeploymentplatformisdesignedforcontinuousevolutionthrough adaptivefeaturesthatincorporatenewdiagnosticmoduleslikeinflammatorybowel diseaseactivityscoringwhilemaintainingrigorousvalidationstandards.Thesystem's real-timeoptimizationusinghospital-deriveddataanddynamicupdatingprotocols willfacilitateitsdevelopmentintoamorereliablediagnosticassistantthatremains alignedwithevolvingclinicalrequirements. ExtendedDataFig.1|ROIvisualizationsacrossmodels.a–c.Visualizationsof fivemodelsonSTLC.d–g.VisualizationsoffivemodelsonUNITOPATHO. ExtendedDataFig.2|ROI-basedtumorregiondetectionvisualizations.For panelsa–d,thetop-leftinsetshowspathologist-annotatedtumorregions;the bottom-leftshowsmodelpredictions(orangeboxes:tumor;whiteboxes:non-tumor); thetop-rightandbottom-rightinsetsarezoomedviewsoftheredandblueboxes, respectively.a.ESDspecimen(gastric).b.Biopsyspecimen(gastric).c.Surgical specimen(gastric).d.Surgicalspecimen(intestinal). ExtendedDataFig.3|Visualizationsofsegmentationtask.a.Visualizationof Digepathonintestinalized/non-intestinalizedglandsegmentation.b.Visualizationof DigepathonESDtumorregionsegmentation.c.VisualizationofDigepathongland andtumorsegmentationusingthepublicdata(CRAG). ExtendedDataFig.4|Attention-basedvisualizationsonchallengingcases.For panelsa–d,theleftimageshowstheoriginalslide(redbox:pathologist-annotated tumor;bluebox:pathologist-annotatednon-tumor),thecenterimageshows model-predictedtumorregions,andtherightinsetsshowzoomedviewsofthered andblueboxes.a.Poorlydifferentiatedadenocarcinoma.b.Poorlydifferentiated squamouscellcarcinoma.c.StomachTNMstagingtask.d.IntestinalTNMstaging task. ExtendedDataFig.5|Kaplan–Meiercurvesforsurvivalprediction.a-e.K-M curvesacrossmodelsonTCGA-COAD(n =82WSIs).f-j.K-Mcurvesacross modelsonTCGA-READ(n =31WSIs). ExtendedDataFig.6|Survivalvisualizationsacrossvariousmodels.a–b.thefirst imageinthetoprowistheoriginalslide(redbox:pathologist-annotatedtumor;blue box:pathologist-annotatednon-tumor),followedbyheatmapvisualizationsfrom eachmodel;thebottomrowshowszoomedviewsofregions1–4fromthefirstimage. ExtendedDataFig.7|Full-way-few-shotperformancecomparison.a-e. Few-shotlearningperformanceacrossmodelsasthewaynumberincrementsfrom2 to6onUNITOPATHO(n=2,399ROIs).f-o.Few-shotlearningperformanceacross modelsasthewaynumberincrementsfrom2to11onSTLC(n=3,435ROIs). ExtendedDataFig.8|Imageretrievalperformancecomparisonacrossmodels.a. Acomparativevisualizationofthe5highestsimilarityimagestothepapillaryclass prototype,asretrievedbydistinctmodelsonSTLC.b.Acomparativevisualizationof the5highestsimilarityimagestotheHGINclassprototype,asretrievedbydistinct modelsonSTLC.c.Acomparativevisualizationofthe5highestsimilarityimagesto thelow-gradetubularadenomaclassprototype,asretrievedbydistinctmodelson UNITOPATHO.d.Acomparativevisualizationofthe5highestsimilarityimagesto thehigh-gradetubularadenomaclassprototype,asretrievedbydistinctmodelson UNITOPATHO. ExtendedDataFig.9|Early-cancerscreeningmoduleinterfaceandtestresults. a.Screenshotoftheearly-cancerscreeningsoftwareinterface.b–d.Testexamples,b. low-gradeepithelialneoplasia.c.poorlydifferentiatedadenocarcinoma.d. 
non-tumor case. e. The proposed end-to-end pipeline of the agent system for GI pathology. Taking a WSI of digestive tissue together with a user text prompt as inputs, the system leverages the DigeTools library to sequentially perform pathology feature extraction, cancer detection, subtype identification, and ROI-level report delivery through multi-turn dialogue.

Method

In recent years, foundation models have demonstrated remarkable transfer capabilities in natural image analysis. Architectures such as the Vision Transformer (ViT), known for their robust global feature embedding capability, have been widely applied across diverse tasks[51]. The rapid advancement of self-supervised learning methods (e.g., DINOv2[52] and MoCo v3[53]) has further optimized feature representation, significantly enhancing model performance in varied scenarios. In the field of computational pathology, researchers have developed domain-specific foundation models, such as UNI, Gigapath, and Titan[35-37]. These models leverage the advantages of self-supervised learning to construct generalized feature representations by pretraining on large-scale pathological datasets.

Based on the DINOv2 framework, we propose a two-stage pretrained model tailored for gastrointestinal diseases. It fully utilizes ROI information and disease-specific characteristics to improve diagnostic accuracy for gastrointestinal pathology.

Two-stage pretraining for the gastrointestinal pathological foundation model

Pretraining on multi-scale gastrointestinal pathological images. The overall framework is illustrated in Fig. 1b. Distinct diagnostic tasks necessitate examination at specific magnifications; for instance, 2.5× and 5× for macroscopic tissue assessment, 10× for analyzing cellular morphology, and 20× for assessing nuclear atypia. To address these multi-scale diagnostic requirements, we pretrained the model with multi-magnification (2.5×, 5×, 10×, and 20×) images as inputs. This approach captures comprehensive pathological information spanning macroscopic tissue morphology to microscopic cellular features, thereby establishing a comprehensive multi-scale database for gastrointestinal pathology images.

Visual foundation model pretraining based on DINOv2. This study employs the DINOv2 framework for visual pretraining to enhance feature extraction and domain-specific semantic understanding for gastrointestinal pathology image analysis. DINOv2 advances self-supervised learning through teacher-student distillation tailored for ViTs. By integrating self-distillation with masked image modeling, it learns transferable representations without manual annotations. Key innovations include enhanced augmentations and dual-loss optimization, both of which significantly improve its self-supervised learning capacity.

Two-stage ROI-based optimization framework. Our train-refine-repeat framework tackles the needle-in-a-haystack problem of finding rare diagnostic regions in gigapixel WSIs by: (1) building generalized feature representations through large-scale pretraining, then (2) using these features to identify and prioritize diagnostically critical regions for focused learning. This bidirectional optimization elevates both data quality and model performance through successive iterations. We built a fine dataset containing 471,445 histopathologically confirmed annotations. The manually annotated ROIs are used as training data for a MIL-based classifier[54-58], facilitating automated screening in subsequent data curation. By learning the discriminative contributions of individual instances, the model automatically assigns instance-specific weights, thereby adaptively focusing on the most classification-relevant regions.
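The MIL-based classifier described above scores bags of patch embeddings by learning instance-specific weights. Below is a minimal sketch of a gated attention aggregator in the spirit of ABMIL, assuming PyTorch and a frozen patch encoder; the class name and dimensions are illustrative, not Digepath's released implementation.

```python
import torch
import torch.nn as nn


class GatedAttentionMIL(nn.Module):
    """Gated attention-based MIL pooling (ABMIL-style), sketched.

    A bag of patch embeddings [N, D] is reduced to one ROI/slide-level vector
    by a learned, softmax-normalized attention over instances, followed by a
    linear classifier.
    """

    def __init__(self, in_dim: int = 1024, hidden: int = 256, n_classes: int = 2):
        super().__init__()
        self.v = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.u = nn.Sequential(nn.Linear(in_dim, hidden), nn.Sigmoid())
        self.w = nn.Linear(hidden, 1)             # attention logit per instance
        self.classifier = nn.Linear(in_dim, n_classes)

    def forward(self, bag: torch.Tensor):
        # bag: [N, D] patch embeddings from a frozen pathology encoder
        a = self.w(self.v(bag) * self.u(bag))     # [N, 1] gated attention logits
        a = torch.softmax(a, dim=0)               # instance weights sum to 1
        slide_emb = (a * bag).sum(dim=0)          # attention-weighted pooling -> [D]
        return self.classifier(slide_emb), a.squeeze(-1)


# Usage: logits, weights = GatedAttentionMIL()(torch.randn(500, 1024))
```

The per-instance weights returned here are the kind of attention scores that ROI mining and heatmap visualization can reuse downstream.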
Dynamic ROI selection strategy. Following the training of the binary ROI classifier, we designed a confidence-weighted adaptive sampling algorithm to automatically refine the training dataset. The trained classifier was applied to the full first-stage dataset for inference, and ROI selection was guided by predicted probabilities. For WSIs containing at least one predicted "tumor" ROI, we selected the top N₁ ROIs with the highest classification confidence for the "tumor" class and randomly sampled N₂ ROIs predicted as non-tumor from the same slide. For WSIs in which all ROIs were predicted as non-tumor, we randomly selected N₃ ROIs. The details of the sampling strategy are defined in Equations (7)–(9):

$N_1 = \lceil 12 \times p_{tumor} / 0.7 \rceil$  (7)

$N_2 = \lceil 4 \times (1 - p_{tumor}) \rceil$  (8)

$N_3 = \mathrm{Poisson}(\lambda = 8)$  (9)

In this sampling framework, $p_{tumor}$ denotes the predicted probability that a given ROI contains tumor tissue. Based on this strategy, we aimed to construct a refined, high-quality dataset comprising approximately 100 million ROIs, with a balanced tumor to non-tumor ratio of 1:1 (±2%). These selected ROIs serve as precise, task-relevant inputs for continued model training, enabling improved supervision in the second-stage pretraining process.
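As one concrete reading of Eqs. (7)–(9), the per-slide sampling could look like the sketch below, where the slide's highest tumor probability is used as $p_{tumor}$; this is our own illustrative code with hypothetical function and field names, not the published pipeline.

```python
import math
import random


def sample_poisson(lam: float, rng=random) -> int:
    """Draw one Poisson(lam) sample via Knuth's method (dependency-free)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1


def select_rois(rois, rng=random):
    """Confidence-weighted ROI sampling for one slide (sketch of Eqs. 7-9).

    `rois` is a list of dicts like {"id": ..., "p_tumor": float, "label": str},
    where "label" is the classifier's predicted class ("tumor"/"non-tumor").
    """
    tumor = sorted((r for r in rois if r["label"] == "tumor"),
                   key=lambda r: r["p_tumor"], reverse=True)
    non_tumor = [r for r in rois if r["label"] != "tumor"]

    if tumor:
        p = tumor[0]["p_tumor"]                       # top tumor confidence on the slide
        n1 = math.ceil(12 * p / 0.7)                  # Eq. (7): top-confidence tumor ROIs
        n2 = math.ceil(4 * (1 - p))                   # Eq. (8): random non-tumor ROIs
        return tumor[:n1] + rng.sample(non_tumor, min(n2, len(non_tumor)))

    # Eq. (9): slides with no predicted tumor contribute a Poisson-sized random sample.
    n3 = min(sample_poisson(8, rng), len(rois))
    return rng.sample(rois, n3)
```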
Other methods and relevant parameters

Weakly supervised slide classification. For WSI-level diagnostic tasks, we adopted a standard two-stage MIL framework for downstream evaluation. This pipeline begins by applying the pretrained feature extractor to all patches within a WSI, resulting in a set of encoded feature vectors in a unified embedding space. These patch-level features are then aggregated into a WSI-level representation using a gated attention-based MIL (ABMIL) architecture[54]. Depending on the specific diagnostic task, different magnification levels were employed to extract the input patches. However, for all tasks, the WSI was uniformly divided into non-overlapping patches of size 224×224 pixels, and the extracted patches were normalized using the same normalization parameters as those employed during pretraining of the corresponding feature extractor.

ROI classification. To evaluate the transferability and representational quality of pretrained features on ROI-level classification tasks, we followed the standard evaluation protocol introduced in UNI. Specifically, we employed linear probing using logistic regression with L2 regularization, where the regularization coefficient λ was defined as 100/M×C, with M denoting the feature embedding dimension and C the number of classes. The model was optimized using the L-BFGS algorithm[59] with a maximum of 1,000 iterations. For all ROI datasets, we used an input resolution of 224×224 pixels. For high-resolution ROI datasets, we additionally evaluated model robustness to scale variation using resolutions of 224, 448, 672, 896, and 1120 pixels, to assess the robustness of different pretrained feature extractors to input resolution changes.

ROI attention visualization. To further investigate the spatial attention patterns of pathology foundation models, we visualized the attention scores between the [CLS] token and patch tokens in the last Transformer layer across different input resolutions. Specifically, we conducted experiments using ROI images with resolutions of 224, 448, 672, 896, and 1120 pixels on the STLC and UNITOPATHO datasets to examine how pretrained models attend to different regions within each ROI. As the self-supervised foundation models are trained without label supervision, the interpretability of their attention distributions with respect to class-relevant regions remains uncertain. To address this, we appended an additional Transformer layer, architecturally aligned with the pretrained model, at the end of the ViT backbone and fine-tuned it under supervised conditions using labeled data. We then repeated the same attention visualization procedure on the fine-tuned model, examining the attention scores between the [CLS] token and patch tokens in the new Transformer layer. The resulting maps more accurately reflected attention distributions aligned with class-specific regions, suggesting improved localization and interpretability under label supervision.

ROI segmentation. We conducted semantic segmentation experiments on ROI-level images using the standard TransUnet architecture[46]. TransUnet integrates a Transformer-based feature extractor into the conventional U-Net framework, effectively addressing the limitation of U-Net in modeling long-range dependencies. The Transformer encoder within TransUnet was initialized with pretrained weights from various pathology foundation models. During training, random horizontal and vertical flipping was used for data augmentation. The model was optimized using the Adam optimizer with a base learning rate of 0.00005. The Dice loss function was used as the training objective. Models were trained for a total of 100 epochs. For experiments with a validation set, early stopping was monitored based on the validation Dice score, with a patience setting of 5 epochs.

Survival analysis. The data processing paradigm for WSI-level survival prediction followed the same preprocessing pipeline as standard classification tasks, including patch extraction, feature encoding, and MIL-based aggregation. However, unlike traditional MIL classification models, which output class probabilities and final predicted labels, survival analysis models are designed to produce a risk score, a predicted survival label, and a time-dependent survival probability curve for each sample. During training, we adopted the Cox proportional hazards loss to optimize the model for censored survival data. The Cox loss is defined as follows (Equation 10):

$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N} E_i \left( \theta_i - \log \sum_{j: T_j \ge T_i} e^{\theta_j} \right)$  (10)

In Equation (10), $N$ denotes the total number of samples, $E_i$ is the event indicator for the $i$-th sample, and $\theta_i$ represents the predicted risk score for the $i$-th sample. The risk-set indicator matrix $R_{ij} = \mathbb{1}[T_j \ge T_i]$ defines whether sample $j$ is at risk at the time of event occurrence in sample $i$.

Few-shot learning. For ROI-level classification tasks, we followed the evaluation standards established in the few-shot learning literature by adopting the SimpleShot framework[49]. In this pipeline, feature representations of C-way, K-shot samples from the support set are extracted using a pretrained feature encoder. The choice of 'way' has a significant impact on task difficulty and model performance; typically, increasing the number of ways indicates a greater number of classes to classify, thereby raising the task difficulty. Class prototypes are then computed by averaging the normalized and centered feature vectors within each class. Predictions for the query set are obtained by computing the distance between query features and class prototypes. Each evaluation run, referred to as an episode, follows this procedure. We conducted 1,000 episodes for each dataset, using all available classes (ways). The number of shots K was varied across {1, 2, 4, 8, 16, ..., 256}, depending on the minimum number of available samples in each class.
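The SimpleShot-style protocol above reduces to prototype construction and nearest-prototype assignment on frozen features. The following is a minimal sketch (NumPy, illustrative names, centered and L2-normalized features as described), not the exact evaluation harness used in the paper.

```python
import numpy as np


def simpleshot_predict(support_feats, support_labels, query_feats):
    """Nearest-prototype classification in the SimpleShot style.

    support_feats:  [N_s, D] encoder features of the C-way, K-shot support set
    support_labels: [N_s] integer class ids
    query_feats:    [N_q, D] encoder features of the query set
    """
    support_feats = np.asarray(support_feats, dtype=float)
    support_labels = np.asarray(support_labels)
    query_feats = np.asarray(query_feats, dtype=float)

    # Center with the support mean, then L2-normalize each feature vector.
    mean = support_feats.mean(axis=0, keepdims=True)

    def transform(x):
        x = x - mean
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    s, q = transform(support_feats), transform(query_feats)

    # Class prototypes: mean of transformed support features per class.
    classes = np.unique(support_labels)
    prototypes = np.stack([s[support_labels == c].mean(axis=0) for c in classes])

    # Assign each query to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(q[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]


# One "episode": sample K support shots per class, classify the held-out queries.
```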
Digestive pathology agent architecture. The current system comprises three core modules: Dige Task Suite, WSI Process, and Dige Caption. The GPT-4o engine handles natural language instruction parsing, domain knowledge inference, and dynamic tool orchestration. External functionalities are implemented as standardized function calls with metadata descriptors, including tool summaries, I/O schemas, and exemplar prompts, enabling context-aware retrieval and execution during reasoning processes.

Downstream Tasks unifies diagnostic models for whole-slide analysis, including benign/malignant diagnosis, histopathological subtyping, and survival prognosis, alongside ROI tasks such as classification and segmentation. Each subsystem provides an API accepting slide IDs, coordinates, or feature vectors, returning predictions with 95% confidence intervals to support multimodal decision fusion.

WSI Process is responsible for slide preprocessing, feature extraction, and ROI selection. Specifically, the raw WSI is first colour-normalized and then partitioned at 20× magnification into 256×256-pixel tiles; each tile is mapped to a 1024-dimensional embedding through the pretrained Digepath encoder, after which the tool selected from Downstream Tasks computes attention weights to obtain a slide-level representation and its corresponding classification. Regions with attention scores greater than a threshold τ are output as ROIs.

The Dige Caption module adopts a two-stage training strategy to enhance cross-modal descriptive capability. The backbone consists of the pretrained visual encoder Digepath and the large language model Qwen-2.5-14B-Instruct, bridged by a three-layer MLP-based projector for feature alignment. The training corpus comprises 272k Quilt-GI image-text pairs and 18.4k ROI-VQA samples from Liuzhou Hospital. During Stage 1, only the MLP-projector weights are updated, while the visual encoder and the large language model remain frozen.

Comparisons and baselines. To comprehensively evaluate the performance of our proposed method, we established a comparative benchmarking framework comprising publicly available pathology foundation models: Ctranspath[34], UNI[35], Gigapath[36], and Conch-V1.5[37]. Detailed configurations and model parameters used for comparison are summarized in Supplementary Table X.

Ctranspath was pretrained on 29,753 WSIs spanning 25 anatomical sites from the TCGA public dataset. Utilizing the MoCo-v3 self-supervised learning framework and approximately 15 million pathology tiles, it builds a Swin Transformer-Tiny-based visual encoder. UNI integrated 100,000 H&E-stained slides representing 20 tissue types and adopted the DINOv2 self-supervised paradigm to train a ViT-Large-based model on over 100 million pathology tiles, producing a general-purpose representation model. Gigapath was developed using a multi-center dataset from Providence Health in the United States, comprising 171,189 WSIs from over 30,000 patients across 28 cancer centers. Covering 31 major tissue types, this dataset enabled pretraining on 1.3 billion tiles to construct a ViT-Giant-based feature extraction system. Conch-V1.5 employs UNI as its vision tower and utilizes the native text encoder of Conch. It underwent multimodal training on a dataset of 1.26 million image-caption pairs using COCA[60]. Conch-V1.5, in conjunction with its slide encoder Titan, demonstrated exceptional performance in tasks such as zero-shot and few-shot learning.
In all downstream task evaluations, we used the official pretrained weights provided by each of the aforementioned pathology foundation models. To ensure consistency, image normalization was performed using the mean and standard deviation parameters employed during each model's pretraining phase. For each downstream task, we maintained identical optimization hyperparameters, training steps, and model selection criteria across all models. This uniform evaluation protocol was adopted to ensure a fair and unbiased comparison of performance.

Evaluation metrics

For classification tasks, we use the following metrics: Balanced Accuracy, Weighted F1-score, Area Under the Receiver Operating Characteristic Curve (AUROC), Sensitivity, and Specificity. Balanced Accuracy is equivalent to the macro-averaged recall and reflects the mean per-class accuracy. The Weighted F1-score represents the class-wise F1-score averaged according to class sample proportions. AUROC measures the area under the receiver operating characteristic curve. For semantic segmentation tasks, we use the Mean Dice coefficient (M-Dice) and Mean Intersection over Union (M-IoU). M-Dice calculates the macro-averaged Dice score across all classes and assesses the degree of overlap between predictions and ground truth. M-IoU computes the macro-averaged intersection-over-union score across classes, reflecting segmentation precision and coverage. For survival prediction tasks, we report the concordance index (C-index), which measures the model's ability to correctly rank survival times.

Statistical analysis

For all semi- and fully supervised experiments, we estimate 95% confidence intervals for model performance with non-parametric bootstrapping using 1,000 bootstrap replicates. For ROI-level few-shot classification, for each C-way, K-shot setting, we randomly sample K training examples for each of the C classes, with 1,000 repeated experiments (called 'episodes' or 'runs') evaluated on the entire test set. For WSI-level tasks, we use 5-fold cross-validation to evaluate the performance of each model. For survival analysis tasks, we adopt the t-test to evaluate statistical significance.

Dataset construction

Pretraining dataset

The training dataset for the GI domain-specific foundation model in stage I was constructed in collaboration with pathology departments from five different hospitals. In accordance with the Standards for De-identification of Medical Imaging Data, all WSIs from 101,336 patients were anonymized to ensure compliance with privacy and ethical guidelines. The dataset comprises a total of 210,043 WSIs scanned at a resolution of 0.25 μm/pixel, covering three major anatomical sites: esophagus (10%), stomach (40%), and intestine (50%).

In the pretraining of stage II, pathologists assisted in constructing a refined, high-quality dataset comprising 471,445 ROIs, with a balanced tumor to non-tumor ratio of 1:1 (±2%). Three senior gastrointestinal pathologists (minimum 10 years of clinical practice) formed our validation committee. Adhering strictly to the diagnostic criteria outlined in the WHO Classification of Digestive System Tumors (5th edition), the experts performed comprehensive blinded reviews of 26,320 diagnostic WSIs. Through statistically balanced sampling (1:1 tumor/normal tissue ratio) with expert consensus verification, we developed a clinically robust dataset containing 471,445 histopathologically confirmed annotations. This is the largest clinically adjudicated collection specifically designed for GI pathology AI applications.

Dataset of downstream tasks

Based on the WHO classification of digestive system diseases, we established a comprehensive benchmark comprising 34 clinically relevant tasks across three major anatomical sites. A detailed summary of the evaluation tasks is provided in the Supplementary Table.
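Before the per-task dataset details, one note on the statistical analysis described above: the non-parametric bootstrap over the test set is straightforward to reproduce. The sketch below (illustrative, not the paper's code) returns a point estimate and a percentile 95% interval for an arbitrary metric using 1,000 replicates.

```python
import numpy as np


def bootstrap_ci(y_true, y_pred, metric, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for metric(y_true, y_pred).

    Resamples test cases with replacement n_boot times and returns the point
    estimate plus the (alpha/2, 1 - alpha/2) percentiles of the resampled scores.
    """
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample cases with replacement
        scores.append(metric(y_true[idx], y_pred[idx]))
    lo, hi = np.percentile(scores, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return metric(y_true, y_pred), (lo, hi)


# Example with balanced accuracy as the metric:
# from sklearn.metrics import balanced_accuracy_score
# point, (lo, hi) = bootstrap_ci(labels, preds, balanced_accuracy_score)
```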
Early esophageal cancer screening (2 classes, ESO-2cls). This task utilized a self-built dataset consisting of 860 WSIs collected from three medical centers: Zhongnan Hospital of Wuhan University (ZN), Liuzhou People's Hospital (LZ), and the Third People's Hospital of Shenzhen (SZT), with all cases histologically classified as stage T1a or T1b according to the WHO criteria. The WSIs were grouped into two categories: Class 0 included squamous epithelial papilloma and chronic esophagitis (415 WSIs); Class 1 included low-grade squamous intraepithelial neoplasia, high-grade squamous intraepithelial neoplasia, squamous cell carcinoma, and esophageal adenocarcinoma (445 WSIs). A five-fold cross-validation strategy was adopted, with each fold further split into training, validation, and test sets (550:138:172 WSIs). All classification tasks were performed using 20× magnification during the MIL preprocessing stage.

Esophageal epithelial neoplasia classification (4 classes, ESO-4cls). This task was conducted on a self-built dataset comprising 860 WSIs collected from three medical centers (ZN, LZ, and SZT). The WSIs were categorized into four classes: Class 0 included non-neoplastic cases such as squamous epithelial papilloma and chronic esophagitis (415 WSIs); Class 1 included low-grade squamous intraepithelial neoplasia (29 WSIs); Class 2 included high-grade squamous intraepithelial neoplasia (150 WSIs); and Class 3 consisted of carcinoma (266 WSIs). A five-fold cross-validation scheme was adopted, with each fold further split into training, validation, and test sets (550:138:172 WSIs). All models were trained and evaluated using input patches at 20× magnification during the MIL preprocessing stage.

Differentiation between keratinizing and non-keratinizing subtypes of esophageal squamous cell carcinoma (2 classes, ENSOKRT). A total of 338 WSIs were collected from three medical centers (ZN, LZ, and SZT) as part of a self-built dataset. The dataset was divided into two classes: Class 0 consisted of keratinizing ESCC (167 WSIs), and Class 1 comprised non-keratinizing ESCC (171 WSIs). A five-fold cross-validation protocol was employed, with each fold further split into training, validation, and test sets (216:55:67 WSIs). All input patches were processed at 20× magnification during MIL-based classification.

Differentiation between esophageal squamous cell carcinoma and adenocarcinoma (2 classes, ESO-AS). This dataset comprised 349 WSIs collected from three medical centers (ZN, LZ, and SZT), annotated into two categories: Class 0 consisted of esophageal squamous cell carcinoma (272 WSIs) and Class 1 consisted of esophageal adenocarcinoma (77 WSIs). A five-fold cross-validation strategy was used, with each fold split into training, validation, and test sets (223:56:70 WSIs). All slides were processed at 20× magnification during MIL-based classification.

Early gastric cancer screening (2 classes, STMSCR). The dataset was collected from four medical centers, ZN, LZ, SZT, and Fujian Provincial Hospital (FJ), comprising 238,643 annotated ROIs from 12,435 WSIs, each measuring 2048 × 2048 pixels at the native 20× magnification level. The dataset was divided into two classes: Class 0 included non-neoplastic conditions such as gastritis, intestinal metaplasia, reactive hyperplasia, fundic gland polyps, and hyperplastic polyps (128,575 ROIs); Class 1 included lesions such as low-grade and high-grade glandular intraepithelial neoplasia, adenocarcinoma, neuroendocrine tumors, and lymphomas (110,068 ROIs). A five-fold cross-validation protocol was employed, with each fold further split into training, validation, and test sets (167,050:23,864:47,729 ROIs). All ROI inputs were processed at 20× magnification during MIL-based classification.
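A minimal sketch of the slide-level five-fold protocol used for the tasks above (stratified folds, each fold further split into training, validation, and test sets); the 860-slide 415/445 toy setup mirrors ESO-2cls, while the helper name and the 80/20 train-validation ratio are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def five_fold_splits(slide_ids, labels, seed=0):
    """Yield (train, val, test) slide indices: the held-out fold is the test set,
    and the remaining slides are split ~80/20 into train and validation."""
    slide_ids, labels = np.asarray(slide_ids), np.asarray(labels)
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    for dev_idx, test_idx in skf.split(slide_ids, labels):
        train_idx, val_idx = train_test_split(
            dev_idx, test_size=0.2, stratify=labels[dev_idx], random_state=seed)
        yield train_idx, val_idx, test_idx

# Toy example: 860 slides with binary labels, as in ESO-2cls.
ids = np.arange(860)
y = np.array([0] * 415 + [1] * 445)
for tr, va, te in five_fold_splits(ids, y):
    print(len(tr), len(va), len(te))   # roughly 550:138:172 per fold
```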
Gastric epithelial neoplasia and hyperplasia classification (11 classes, STLC). A multi-class classification task was designed to assess model performance across a diverse set of gastric epithelial lesions. The dataset comprised 11,449 ROIs, each measuring 2,048 × 2,048 pixels at the native 20× magnification level. These ROIs were rigorously curated from four multicenter medical institutions (ZN, LZ, SZT, and FJ) and encompassed 11 distinct histopathological diagnostic categories. Class 0 included non-neoplastic lesions such as gastritis, intestinal metaplasia, reactive hyperplasia, fundic gland polyps, and hyperplastic polyps (432 ROIs). Class 1 and Class 2 corresponded to low-grade (1,139 ROIs) and high-grade (722 ROIs) intraepithelial neoplasia, respectively. Classes 3 to 9 included various gastric carcinomas: well-differentiated adenocarcinoma (1,273 ROIs), moderately differentiated adenocarcinoma (2,042 ROIs), poorly differentiated adenocarcinoma (1,749 ROIs), signet-ring cell carcinoma (1,085 ROIs), mucinous carcinoma (796 ROIs), other poorly cohesive carcinomas (756 ROIs), and papillary adenocarcinoma (837 ROIs); Class 10 comprised atypical hyperplasia (618 ROIs). For training and evaluation, we used a train–test split (8,014:3,435 ROIs).

Chronic gastritis grading (3 classes, CGS-G). This dataset, comprising 499 biopsy WSIs, was collected from ZN and includes only biopsy samples. It is annotated for three levels of chronic gastritis: Class 0 included mild chronic inflammation (171 WSIs), Class 1 included moderate chronic inflammation (144 WSIs), and Class 2 included severe chronic inflammation (184 WSIs). The dataset was split into training, validation, and test sets (319:80:100 WSIs), following a five-fold cross-validation protocol. All WSI inputs were processed at 20× magnification during MIL-based classification.

Acute gastric activity grading (3 classes, AGS-G). This dataset consisted of 348 biopsy-derived WSIs collected from ZN, annotated into three categories: Class 0 comprised mild acute activity (100 WSIs), Class 1 comprised moderate acute activity (121 WSIs), and Class 2 comprised severe acute activity (127 WSIs). The dataset was divided into training, validation, and test sets (223:56:69 WSIs), following a five-fold cross-validation protocol. All WSI inputs were processed at 20× magnification during MIL-based classification.

Atrophic gastritis classification (2 classes, CAG). This dataset comprised 571 biopsy WSIs collected from ZN, annotated for two categories: Class 0 consisted of non-atrophic (225 WSIs) and Class 1 consisted of atrophic (346 WSIs). The data were split into training, validation, and test sets (364:92:115 WSIs), following a five-fold cross-validation scheme. All WSI inputs were processed at 20× magnification during MIL-based classification.

Intestinal metaplasia grading (4 classes, IM-G). This dataset comprised 667 biopsy-derived WSIs collected from ZN, categorized into four classes: Class 0 consisted of no metaplasia (120 WSIs), Class 1 consisted of mild metaplasia (216 WSIs), Class 2 consisted of moderate metaplasia (64 WSIs), and Class 3 consisted of severe metaplasia (267 WSIs). The dataset was split into training, validation, and test sets (426:107:134 WSIs), following a five-fold cross-validation configuration. All WSI inputs were processed at 20× magnification during MIL-based classification.

Gastric cancer staging prediction (4 classes, STM-TNM). This dataset comprised 300 WSIs collected from ZN. Based on the 8th edition of the AJCC TNM staging system, the WSIs were categorized into four stages: Class 0 (stage I, 100 WSIs), Class 1 (stage II, 74 WSIs), Class 2 (stage III, 106 WSIs), and Class 3 (stage IV, 20 WSIs). The dataset was divided into training, validation, and test sets (192:48:60 WSIs), following a five-fold cross-validation protocol. For the slide-level classification task, 20× magnification images were used as model input.
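Where frozen foundation-model features are evaluated with a linear head, as in the ROI-level tasks above and the linear-probing protocol referenced later under "Computing hardware and software", the probe can be sketched as follows. The feature dimension, sample counts, and random data here are stand-ins for illustration, not the actual STLC features or split (8,014:3,435 ROIs).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Stand-in for frozen tile/ROI embeddings and their 11-class STLC-style labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(800, 256)), rng.integers(0, 11, 800)
X_test, y_test = rng.normal(size=(300, 256)), rng.integers(0, 11, 300)

# Linear probe: a single logistic-regression layer on top of frozen features.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("weighted F1:", f1_score(y_test, probe.predict(X_test), average="weighted"))
```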
Intestinal metaplasia gland segmentation (IM-S). This dataset comprised 85 biopsy-derived WSIs of intestinal metaplasia collected from FJ, which were cropped into 1,135 image patches with a size of 512 × 512 pixels. Each patch was meticulously annotated at the pixel level by pathologists, distinguishing intestinal metaplastic glands from non-intestinal metaplastic glands. The dataset was divided into training, validation, and test sets (794:114:227 ROIs), following five replicate experiments. For the gland segmentation task, 10× magnification images were used as model input.

Gastric tumor region segmentation (STESD-S). This dataset comprised 60 endoscopic submucosal dissection (ESD) gastric tumor slides collected from ZN and FJ. The slides were divided into 4,455 image patches of size 512 × 512 pixels. Each patch was meticulously annotated at the pixel level by pathologists to delineate tumor and non-tumor regions. The dataset was split into training, validation, and test sets (3,120:445:890 ROIs), following five replicate experiments. For the tumor region segmentation task, 10× magnification images were used as model input.

CRAG colorectal gland segmentation (CRAG). This public dataset contains 213 images taken from 38 H&E-stained WSIs of colorectal adenocarcinoma. We used the officially provided training–test split (173:40 ROIs), following five replicate experiments. Most images have a size of 1512 × 1516 pixels with pixel-level gland annotations.

Early colorectal cancer screening (2 classes, BOWSCR). This dataset was constructed across three medical centers (ZN, LZ, SZT), comprising 5,837 whole-slide images (WSIs) and approximately 150,318 ROIs. The ROIs were classified into two categories: Class 0 included enteritis, inflammatory polyps, and hyperplastic polyps (46,934 ROIs); Class 1 included low-grade intraepithelial neoplasia, high-grade intraepithelial neoplasia, adenocarcinoma, neuroendocrine tumors, and lymphoma (103,384 ROIs). The dataset was split into training and test sets using five-fold cross-validation. Within each training fold, ROIs were further divided into training, validation, and internal test sets (105,223:15,032:30,063 ROIs). For the slide-level classification task, 20× magnification images were used as model input.

CRC-100K tissue classification (9 classes, CRC-100K). The CRC-100K dataset consists of 107,180 annotated regions of interest (ROIs) extracted from H&E-stained formalin-fixed paraffin-embedded (FFPE) diagnostic WSIs of 136 colorectal adenocarcinoma samples. These samples were obtained from the National Center for Tumor Diseases (NCT) tissue bank and the pathology archives of the University Medical Center Mannheim (UMM). The ROIs are labeled into nine tissue categories: adipose tissue (11,745 ROIs), background (11,413 ROIs), debris (11,851 ROIs), lymphocytes (12,191 ROIs), mucus (9,931 ROIs), smooth muscle (14,128 ROIs), normal colon mucosa (9,504 ROIs), cancer-associated stroma (10,867 ROIs), and colorectal adenocarcinoma epithelium (15,550 ROIs). For training and evaluation, we used the officially provided case-stratified training–test split (100,000:7,180 ROIs).
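A minimal sketch of the macro-averaged Dice and IoU defined under "Evaluation metrics", applied to the kind of 512 × 512 pixel-level masks used in the gland and tumor segmentation tasks above; the random masks are purely illustrative.

```python
import numpy as np

def macro_dice_iou(pred, target, n_classes, eps=1e-7):
    """Macro-averaged Dice and IoU over classes for integer-label masks."""
    dices, ious = [], []
    for c in range(n_classes):
        p, t = (pred == c), (target == c)
        inter = np.logical_and(p, t).sum()
        dices.append((2 * inter + eps) / (p.sum() + t.sum() + eps))
        ious.append((inter + eps) / (np.logical_or(p, t).sum() + eps))
    return float(np.mean(dices)), float(np.mean(ious))

# Toy example with two classes (gland vs. background) on a 512x512 patch.
rng = np.random.default_rng(0)
pred = rng.integers(0, 2, (512, 512))
target = rng.integers(0, 2, (512, 512))
print(macro_dice_iou(pred, target, n_classes=2))   # (M-Dice, M-IoU)
```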
UNITOPATHO colorectal polyp classification (6 classes, UNITOPATHO). This dataset comprises 8,669 ROIs at a resolution of 1,812 × 1,812 pixels and 867 ROIs at 15,855 × 15,855 pixels, all with a spatial resolution of 0.44 μm/pixel. These ROIs were extracted and annotated from H&E-stained FFPE diagnostic WSIs of 292 colorectal polyp samples collected at the University of Turin. The ROIs were classified into six categories: normal tissue (950 ROIs), hyperplastic polyps (545 ROIs), tubular adenoma with high-grade dysplasia (454 ROIs), tubular adenoma with low-grade dysplasia (3,618 ROIs), tubulo-villous adenoma with high-grade dysplasia (916 ROIs), and tubulo-villous adenoma with low-grade dysplasia (2,186 ROIs). The dataset was split into training and test sets (6,068:2,601 ROIs). To evaluate the resolution sensitivity and adaptability of pathology foundation models, we conducted linear-head fine-tuning and SimpleShot learning using five different input resolutions: 224, 448, 672, 896, and 1,120 pixels. Additionally, we visualized the multi-head attention distributions of the pretrained models and the fine-tuned Transformer layers across different resolutions to explore attention dynamics at varying scales.

CAMEL colorectal adenoma screening (2 classes, CAMEL). The screening dataset comprises 15,403 ROI images extracted from 177 colorectal slides from the Department of Pathology, Chinese PLA General Hospital. The original resolution of the images is 1,280 × 1,280 pixels, and they were resized to 224 × 224 pixels during the experiments. The cohort consisted of 8,450 adenoma-containing ROIs and 6,953 normal tissue ROIs. To ensure rigorous evaluation, the dataset is partitioned into training (10,782 ROIs) and test (4,621 ROIs) subsets.

Colorectal epithelial tumors and proliferative lesion classification (3 classes and 5 classes, IMP-CRS2024 and BOW-5cls). This dataset includes both the IMP-CRS2024 public dataset and a custom-built dataset. We randomly selected 1,132 colorectal WSIs from the IMP-CRS2024 training dataset and used the official test dataset (900 WSIs) for evaluation; these were labeled into three categories: non-tumorous lesions (484 WSIs), low-grade lesions (1,004 WSIs), and high-grade lesions (544 WSIs). For the slide-level classification task, 10× magnification images were used as model input.

The self-built dataset includes 1,686 colorectal WSIs collected from LZ, annotated into five categories: Class 0 (normal, 522 WSIs), Class 1 (hyperplastic polyps, 130 WSIs), Class 2 (low-grade intraepithelial neoplasia, 379 WSIs), Class 3 (high-grade intraepithelial neoplasia, 163 WSIs), and Class 4 (adenocarcinoma, 492 WSIs). The dataset was divided into training, validation, and test sets (1,180:169:337 WSIs) and evaluated using five-fold cross-validation. For the slide-level classification task, 10× magnification images were used as model input.

Precancerous lesions and reactive hyperplasia (3 classes, LHN-3cls). The dataset comprised 462 WSIs collected from four medical centers (ZN, LZ, SZT, FJ), annotated into three classes: Class 0 consisted of reactive hyperplasia (160 WSIs), Class 1 consisted of low-grade intraepithelial neoplasia (93 WSIs), and Class 2 consisted of high-grade intraepithelial neoplasia (209 WSIs). A five-fold cross-validation scheme was employed, with each fold further split into training, validation, and test sets (296:74:92 WSIs). All slides were processed at 20× magnification during MIL-based classification.

Differentiation between signet-ring cell carcinoma and histiocytes (2 classes, R-X). This dataset consisted of 384 WSIs collected from four medical centers (ZN, LZ, SZT, FJ), annotated into two categories: Class 0 (histiocytes, 182 WSIs) and Class 1 (signet-ring cell carcinoma, 202 WSIs). A five-fold cross-validation protocol was applied, with each fold further divided into training, validation, and test sets (246:61:77 WSIs). All slides were processed at 20× magnification during MIL-based
classification.

Differentiation between poorly differentiated adenocarcinoma and poorly differentiated squamous cell carcinoma (2 classes, LA-LS). Differentiating between poorly differentiated adenocarcinoma and poorly differentiated squamous cell carcinoma in the gastrointestinal tract presents a major diagnostic challenge. This dataset comprised 384 WSIs collected from four medical centers (ZN, LZ, SZT, FJ), annotated into two categories: Class 0 (poorly differentiated adenocarcinoma, 236 WSIs) and Class 1 (poorly differentiated squamous cell carcinoma, 148 WSIs). A five-fold cross-validation strategy was used, with each fold split into training, validation, and test sets (246:61:77 WSIs). All slides were processed at 20× magnification during MIL-based classification.

Gene mutation prediction (2 classes, TCGA-COAD-MUC16, TCGA-COAD-TTN, TCGA-READ-TTN, TCGA-READ-NRAS). This study performed gene mutation prediction based on gastrointestinal-related datasets from The Cancer Genome Atlas (TCGA), including COAD (colon adenocarcinoma, 403 WSIs) and READ (rectum adenocarcinoma, 128 WSIs). We focused on frequent driver genes in these cancer types (MUC16 and TTN in TCGA-COAD; TTN and NRAS in TCGA-READ). A five-fold cross-validation strategy was used, with each fold split into training, validation, and test sets (258:64:81 WSIs for TCGA-COAD, and 82:20:26 WSIs for TCGA-READ). All slides were processed at 20× magnification during MIL-based classification.

Molecular status prediction (2 classes, PD-L1, P53, HER2). To assess the capability of the model in predicting molecular markers from routine histopathology, we constructed three in-house datasets across five medical centers (H1–H5), targeting PD-L1, P53, and HER2 expression status. The PD-L1 dataset consisted of 751 WSIs (positive:negative = 483:268 WSIs), the P53 dataset included 710 WSIs (positive:negative = 361:349 WSIs), and the HER2 dataset comprised 399 WSIs (positive:negative = 92:307 WSIs). For each task, five-fold cross-validation was performed, with each fold split into training, validation, and test sets: PD-L1 (479:121:151 WSIs), P53 (454:114:142 WSIs), and HER2 (255:64:80 WSIs). All images were processed at 20× magnification during MIL-based classification.

Microsatellite instability prediction (2 classes, MSI). A self-built dataset comprising 970 surgical WSIs from ZN was used to evaluate MSI prediction performance in gastrointestinal cancers, including both gastric and colorectal specimens. This dataset was categorized into two groups: Class 0 (microsatellite instable) and Class 1 (microsatellite stable). A five-fold cross-validation strategy was employed, with each fold divided into training, validation, and test sets (582:194:194 WSIs). All slides were processed at 20× magnification for MIL-based classification.

Survival prediction (TCGA-COAD-SUR, TCGA-READ-SUR). For survival outcome modeling, we curated a dataset of 408 samples from colon adenocarcinoma (COAD) and 153 from rectum adenocarcinoma (READ). A five-fold cross-validation strategy was used, with each fold split into training, validation, and test sets (261:65:82 WSIs for COAD and 98:24:31 WSIs for READ). All slides were processed at 20× magnification for MIL-based classification.

Prospective multi-center study for early cancer screening
According to the 5th edition of the WHO Classification of Tumours of the Digestive System, we defined positive samples as those diagnosed with low-grade intraepithelial neoplasia, high-grade intraepithelial neoplasia, or confirmed malignant tumors. All other samples, including non-neoplastic lesions and benign polyps, were labeled as negative. A prospective validation study was conducted across nine hospitals, representing a wide geographic distribution across eastern, southern, western, and northern China. Each hospital tested approximately 1,000 biopsy slides, yielding a total of 11,224 WSIs, among which 657 were positive cases. Case distribution across the hospitals is summarized in Fig. 5.
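A minimal sketch of the concordance index reported for the survival tasks above, counting concordant pairs among comparable, event-anchored pairs; the toy times, events, and risk scores are illustrative.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """C-index: fraction of comparable pairs where the higher-risk patient fails earlier.
    times: follow-up times; events: 1 if the event was observed, 0 if censored."""
    times, events, risk = map(np.asarray, (times, events, risk_scores))
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        if events[i] != 1:
            continue                      # pairs are anchored on observed events
        for j in range(len(times)):
            if times[j] > times[i]:       # patient j outlived patient i
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5     # ties in predicted risk count half
    return concordant / comparable

# Toy example: a higher score should mean higher risk (shorter survival).
print(concordance_index(times=[5, 10, 12, 20], events=[1, 1, 0, 1],
                        risk_scores=[0.9, 0.6, 0.4, 0.2]))   # 1.0 here
```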
Computing hardware and software
All experiments and analyses were implemented in Python 3.8.13 with PyTorch 2.0.0 (CUDA 11.7). The computational framework is fully reproducible using open-source libraries and codebases as follows. For Digepath pretraining, we modified the Vision Transformer from the timm 0.9.2 library (https://huggingface.com) as the encoder backbone and integrated it with the original DINOv2 self-supervised algorithm, with pretraining executed on 8 × 80 GB NVIDIA A800 GPUs. Downstream tasks were performed on a single 24 GB NVIDIA RTX 4090 GPU. Whole-slide image (WSI) processing relied on OpenSlide 4.3.1, openslide-python 1.2.0, opensdpc (GitHub: WonderLandxD/opensdpc), and the CLAM framework (GitHub: mahmoodlab/CLAM). Benchmark visual encoders included CTransPath (https://github.com/Xiyue-Wang/TransPath), UNI (https://github.com/mahmoodlab/UNI), Gigapath (https://github.com/prov-gigapath/prov-gigapath), and Conch-V1.5 (https://github.com/mahmoodlab/TITAN). Weakly supervised multiple instance learning (MIL) models were adapted from the MIL_BASELINE codebase (https://github.com/lingxitong/MIL_BASELINE), while semantic segmentation utilized the TransUNet implementation (https://github.com/Beckschen/TransUNet). Evaluation protocols for linear probing and prototypical networks were based on the UNI codebase (https://github.com/mahmoodlab/UNI). Visualization workflows employed Pillow 9.3.0, Matplotlib 3.7.1, and Seaborn 0.12.2. All referenced code repositories are publicly accessible through their respective GitHub URLs provided in the manuscript.

Data availability
TCGA data consisting of whole-slide images and labels can be accessed through the NIH Genomic Data Commons (https://portal.gdc.cancer.gov).
CRC-100K data can be accessed through the Zenodo database (https://zenodo.org/record/1214456).
CAMEL data can be accessed through the GitHub link (https://github.com/ThoroughImages/CAMEL).
CRAG data can be accessed through the GitHub link (https://github.com/XiaoyuZHK/CRAG-Dataset_Aug_ToCOCO).
UNITOPATHO data can be accessed through the IEEE DataPort database (https://ieee-dataport.org/open-access/unitopatho).
IMP-CRS data can be accessed through the link (https://rdm.inesctec.pt/dataset/nis-2023-008).
The private pathological images used in this paper are not publicly available. However, they can be obtained by contacting the corresponding author (heyh@sz.tsinghua.edu.cn) for scientific research purposes.

Code availability
Code and model weights for Digepath will be made available for academic research purposes at https://github.com/lingxitong/Digepath. We have documented all technical deep learning methods and software libraries used in the study while ensuring that the paper is accessible to the broader clinical and scientific audience.

Acknowledgements
We thank Shenzhen Shengqiang Technology Co., Ltd. for providing slide scanners; H3C Technologies Co., Ltd. for providing the training servers; and Zhuobin Liang and Maomao Zeng for supplying interfaces to process raw data. This work was supported in part by the National Natural Science Foundation of China (82430062), the Shenzhen Engineering Research Centre (XMHT20230115004), the Jilin Fuyuan Guan Food Group Co., Ltd., the Fujian Provincial Science and Technology Innovation Joint Funds (grant no. 2024Y96010076), and the Fujian Provincial Natural Science Foundation of China (grant no. 2024J011006).

Author contributions
L.H.Z., X.T.L., M.X.O.Y., Y.H.H., and S.F.T. conceived the study and designed the experiments. L.H.Z., X.T.L., M.X.O.Y., X.P.L., S.F.T., L.X.C., S.D., Z.Q.C., H.Q.L.
collected the data for self-supervised learning. L.H.Z., X.T.L., and M.X.O.Y. performed model development for self-supervised learning. L.H.Z., X.T.L., M.X.O.Y., M.X.F., F.L.F., M.X.Z., and Y.B.J. organized the datasets and codebases for all downstream tasks regarding ROI classification, ROI segmentation, and slide classification. L.H.Z., X.T.L., M.X.O.Y., X.P.L., M.S., and H.Q.L. performed quality control of the codebase and the results. L.H.Z., X.T.L., M.X.O.Y., Y.H.H., and S.F.T. performed the experiments analysis. L.H.Z., X.T.L., X.P.L., M.X.O.Y., M.X.F., F.L.F., M.M.Z., M.X.Z., Y.B.J., Y.H.H., and S.F.T. interpreted the results and provided feedback on the study. L.H.Z., X.T.L., M.X.O.Y., and X.P.L. prepared the manuscript. Y.H.H. and S.F.T. supervised the research. L.H.Z., X.T.L., M.X.O.Y., M.X.F., and F.L.F. have accessed and verified the data. All authors have read and agreed to publish the paper.

Competing interests
All authors declare no competing interests.

References
[1] Khalaveh F, Cho A, Shaltout A, et al. Concomitant radiosurgical and targeted oncological treatment improves the outcome of patients with brain metastases from gastrointestinal cancer. Radiat Oncol. 2023 Dec 9;18(1):197.
[2] Singh A, Ferlay J, Siegel RL, et al. Global burden of five major types of gastrointestinal cancer. Prz Gastroenterol. 2024;19(3):236-254.
[3] Sung H, Siegel RL, Torre LA, et al. Global patterns in excess body weight and the associated cancer burden. CA Cancer J Clin. 2019;69(2):88-112.
[4] Bray F, Laversanne M, Sung H, et al. Global cancer statistics 2022: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2024 May-Jun;74(3):229-263.
[5] Ferlay J, Ervik M, Lam F, et al. Global Cancer Observatory: Cancer Today (Version 1.0). International Agency for Research on Cancer; 2024. Accessed February 1, 2024.
[6] Lumish MA, Cercek A. Practical considerations in diagnosing and managing early-onset GI cancers. J Clin Oncol. 2022 Aug 20;40(24):2662-2680.
[7] Pattilachan TM, Christodoulou M, Ross S. Diagnosis to dissection: AI's role in early detection and surgical intervention for gastric cancer. J Robot Surg. 2024 Jun 20;18(1):259.
[8] Levin B, Lieberman DA, McFarland B, et al. Screening and surveillance for the early detection of colorectal cancer and adenomatous polyps, 2008: a joint guideline from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology. Gastroenterology. 2008;134(5):1570-1595.
[9] Wang Z, Liu Y, Niu X. Application of artificial intelligence for improving early detection and prediction of therapeutic outcomes for gastric cancer in the era of precision oncology. Seminars in Cancer Biology. 2023;93:83-96.
[10] Iacucci M, Santacroce G, Majumder S, et al. Opening the doors of precision medicine: novel tools to assess intestinal barrier in inflammatory bowel disease and colitis-associated neoplasia. Gut. 2024;73(10):1749-1762.
[11] McGenity C, Clarke EL, Jennings C, et al. Artificial intelligence in digital pathology: a systematic review and meta-analysis of diagnostic test accuracy. npj Digital Medicine. 2024;7(1):114.
[12] Pasechnikov V, Chukov S, Fedorov E, et al. Gastric cancer: prevention, screening and early diagnosis. World Journal of Gastroenterology. 2014;20(38):13842.
[13] Allen JI, Katzka D, Robert M, et al. American Gastroenterological Association Institute technical review on the role of upper gastrointestinal biopsy to evaluate dyspepsia in the adult patient in the absence of visible mucosal lesions. Gastroenterology. 2015;149(4):1088-1118.
[14] Fang S, Liu Z, Qiu Q, et al. Diagnosing and grading gastric atrophy and intestinal metaplasia using semi-supervised deep learning on pathological images: development and validation study. Gastric Cancer. 2024;27(2):343-354.
[15] Ma ZY, Yang Z, Liu J, et al. Sequential injection-electrocoagulation vs.
traditional electrocoagulation haemostasis during endoscopic submucosal dissection: a randomized controlled trial. Surg Endosc. 2025 May 16.
[16] Liu Q, Ding L, Qiu X, et al. Updated evaluation of endoscopic submucosal dissection versus surgery for early gastric cancer: a systematic review and meta-analysis. International Journal of Surgery. 2020;73:28-41.
[17] Ishioka M, Yoshio T, Miyamoto Y, et al. Incidence of metachronous cancer after endoscopic submucosal dissection: a comparison between undifferentiated-type and differentiated-type early gastric cancer. Gastrointestinal Endoscopy. 2021;93(3):557-564.e1.
[18] Isomoto H, Shikuwa S, Yamaguchi N, et al. Endoscopic submucosal dissection for early gastric cancer: a large-scale feasibility study. Gut. 2009;58(3):331-336.
[19] Li GZ, Doherty GM, Wang J. Surgical management of gastric cancer: a review. JAMA Surgery. 2022;157(5):446-454.
[20] Ushijima T, Sasako M. Focus on gastric cancer. Cancer Cell. 2004;5(2):121-125.
[21] Coburn N, Cosby R, Klein L, et al. Staging and surgical approaches in gastric cancer: a systematic review. Cancer Treatment Reviews. 2018;63:104-115.
[22] Burz C, Pop V, Silaghi C, et al. Prognosis and treatment of gastric cancer: a 2024 update. Cancers. 2024;16(9):1708.
[23] Guan WL, He Y, Xu RH. Gastric cancer treatment: recent progress and future perspectives. Journal of Hematology & Oncology. 2023;16(1):57.
[24] Acs B, Rantalainen M, Hartman J. Artificial intelligence as the next step towards precision pathology. Journal of Internal Medicine. 2020;288(1):62-81.
[25] Chang TG, Park S, Schäffer AA, et al. Hallmarks of artificial intelligence contributions to precision oncology. Nature Cancer. 2025:1-15.
[26] Singh G, Kamalja A, Patil R, et al. A comprehensive assessment of artificial intelligence applications for cancer diagnosis. Artificial Intelligence Review. 2024;57(7):179.
[27] Huang Z, Yang E, Shen J, et al. A pathologist–AI collaboration framework for enhancing diagnostic accuracies and efficiencies. Nature Biomedical Engineering. 2025;9(4):455-470.
[28] Kumar A. AI in digital pathology: automated histopathological analysis for cancer grading and prognostic outcome prediction. Int J Comput Appl Technol Res. 2022;11(11):400-12.
[29] Van der Laak J, Litjens G, Ciompi F. Deep learning in histopathology: the path to the clinic. Nature Medicine. 2021;27(5):775-784.
[30] Awais M, Naseer M, Khan S, et al. Foundation models defining a new era in vision: a survey and outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2025.
[31] Chen X, Xie S, He K. An empirical study of training self-supervised vision transformers. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021:9640-9649.
[32] Wang X, Jiang Y, Yang S, et al. Foundation model for predicting prognosis and adjuvant therapy benefit from digital pathology in GI cancers. Journal of Clinical Oncology. 2025:JCO-24-01501.
[33] Hörst F, Rempe M, Becker H, et al. CellViT++: energy-efficient and adaptive cell segmentation and classification using foundation models. arXiv preprint arXiv:2501.05269, 2025.
[34] Wang X, Zhao J, Marostica E, et al. A pathology foundation model for cancer diagnosis and prognosis prediction. Nature. 2024;634(8035):970-978.
[35] Chen RJ, Ding T, Lu MY, et al. Towards a general-purpose foundation model for computational pathology. Nat Med. 2024;30:850-862.
[36] Xu H, Usuyama N, Bagga J, et al. A whole-slide foundation model for digital pathology from real-world data. Nature. 2024;630:181-188.
[37] Ding T, Wagner SJ, Song AH, et al. Multimodal whole slide foundation model for pathology. arXiv preprint arXiv:2411.19666, 2024.
[38] Sun K, Zheng Y, Yang X, et al. MSCL-Net: unleashing the power of multi-scale and cross-layer learning in pathology image classification. Biomedical Signal Processing and Control. 2025;108:107880.
[39] Wu X, Chen Z, Peng C, et al. MMSRNet: pathological image super-resolution by multi-task and multi-scale learning. Biomedical Signal Processing and Control. 2023;81:104428.
[40] Lei D, Zhang Y, Wang H, et al. Multi-scale dynamic sparse token multi-instance learning for pathology image classification. IEEE Journal of Biomedical and Health Informatics. 2024.
[41] Song Z, Zou S, Zhou W, et al. Clinically applicable histopathological diagnosis system for gastric cancer detection using deep learning. Nat Commun. 2020;11(1):4294.
[42] Mukherjee S, Yuan L, Hakkani-Tur D, et al. Reinforcement learning finetunes small subnetworks in large language models. arXiv preprint arXiv:2505.11711, 2025.
[43] Kong L, Yang C, Neufang S, et al. EMORL: ensemble multi-objective reinforcement learning for efficient and flexible LLM fine-tuning. arXiv preprint arXiv:2505.02579, 2025.
[44] Vrbančič G, Podgorelec V. Transfer learning with adaptive fine-tuning. IEEE Access. 2020;8:196197-196211.
[45] Heikal A, El-Ghamry A, Elmougy S, et al. Fine tuning deep learning models for breast tumor classification. Scientific Reports. 2024;14(1):10753.
[46] Chen J, Mei J, Li X, et al. TransUNet: rethinking the U-Net architecture design for medical image segmentation through the lens of transformers. Medical Image Analysis. 2024;97:103280.
[47] Zhu Z, Gong Y, Xu H. Clinical and pathological staging of gastric cancer: current perspectives and implications. European Journal of Surgical Oncology. 2020;46(10):e14-e19.
[48] Weiser MR. AJCC 8th edition: colorectal cancer. Annals of Surgical Oncology. 2018;25:1454-1455.
[49] Wang Y, Chao WL, Weinberger KQ, et al. SimpleShot: revisiting nearest-neighbor classification for few-shot learning. arXiv preprint arXiv:1911.04623, 2019.
[50] Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems. 2017;30.
[51] Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16x16 words: transformers for image recognition at scale. In: International Conference on Learning Representations. 2020.
[52] Oquab M, Darcet T, Moutakanni T, et al. DINOv2: learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023.
[53] Chen X, Xie S, He K. An empirical study of training self-supervised vision transformers. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021:9640-9649.
[54] Ilse M, Tomczak J, Welling M. Attention-based deep multiple instance learning. In: International Conference on Machine Learning. PMLR, 2018:2127-2136.
[55] Lu MY, Williamson DFK, Chen TY, et al. Data-efficient and weakly supervised computational pathology on whole-slide images. Nature Biomedical Engineering. 2021;5(6):555-570.
[56] Shao Z, Bian H, Chen Y, et al. TransMIL: transformer based correlated multiple instance learning for whole slide image classification. Advances in Neural Information Processing Systems. 2021;34:2136-2147.
[57] Li B, Li Y, Eliceiri KW. Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021:14318-14328.
[58] Li J, Chen Y, Chu H, et al. Dynamic graph representation with knowledge-aware attention for histopathology whole slide image analysis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024:11323-11332.
[59] Zhu C, Byrd RH, Lu P, et al. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software (TOMS). 1997;23(4):550-560.
[60] Yu J, Wang Z, Vasudevan V, et al. CoCa: contrastive captioners are image-text foundation models. arXiv preprint arXiv:2205.01917, 2022.
From Reasoning to Learning: A Survey on Hypothesis Discovery and Rule Learning with Large Language Models

Kaiyu He (kaiyu.he@utdallas.edu), Department of Computer Science, University of Texas at Dallas
Zhiyu Chen (Zhiyu.Chen2@utdallas.edu), Department of Computer Science, University of Texas at Dallas

Abstract
Since the advent of Large Language Models (LLMs), efforts have largely focused on improving their instruction-following and deductive reasoning abilities, leaving open the question of whether these models can truly discover new knowledge. In pursuit of artificial general intelligence (AGI), there is a growing need for models that not only execute commands or retrieve information but also learn, reason, and generate new knowledge by formulating novel hypotheses and theories that deepen our understanding of the world. Guided by Peirce's framework of abduction, deduction, and induction, this survey offers a structured lens to examine LLM-based hypothesis discovery. We synthesize existing work in hypothesis generation, application, and validation, identifying both key achievements and critical gaps. By unifying these threads, we illuminate how LLMs might evolve from mere "information executors" into engines of genuine innovation, potentially transforming research, science, and real-world problem solving.

1 Introduction
One major pillar of human intelligence is the capacity to discover hypotheses and learn rules. We call this capability hypothesis discovery (or rule learning). Earlier AI systems struggled with it because formal symbolic methods lacked the commonsense background needed for inventive rule formation (Yu et al., 2024a). Recent advances in natural language processing (NLP) have produced LLMs pretrained on extensive text corpora that embed substantial commonsense knowledge. These models now enable tasks that demand rich background knowledge, such as formulating new hypotheses and deriving novel conclusions.

Hypothesis discovery inherently relies on a blend of reasoning that includes abduction, induction, and deduction, each defined differently by various scholars. For instance, Gilbert H. Harman considers induction to be a special case of abduction, describing it as "inference to the best explanation" (IBE) (Harman, 1965; Douven, 2021). However, while this definition is easy to understand, it oversimplifies key aspects of hypothesis discovery. In particular, the notion of the "best" explanation is ambiguous and often requires additional assumptions that vary by context. Moreover, this framework does not fully capture real-world scenarios, where a "best" explanation is rarely reached immediately; rather, we continually experiment, gather new observations, and refine our hypotheses. Based on these considerations, we adopt Charles Peirce's definition of hypothesis discovery and reasoning, which posits that hypothesis discovery begins with forming an explanatory hypothesis to explain observations through abduction, proceeds by iteratively applying the hypothesis to solve problems or derive new knowledge with deduction, and validates the hypothesis through induction (Frankfurt, 1958; Peirce, 1974; Burks, 1946; Minnameier, 2004) (see explanation in Figure 2).

The rest of the survey is organized as follows. Section 2 presents background knowledge on hypothesis discovery using LLMs, including different forms of reasoning and representations involved in the process.
Section 3 examines prior surveys on LLM reasoning and hypothesis discovery, highlighting their narrow emphasis on deductive tasks or application-specific methods. Section 4 reviews methods for forming hypotheses (Abduction). Section 5 then covers approaches for applying these hypotheses (Deduction), and Section 6 focuses on techniques for validating given hypotheses with new observations (Induction). Finally, Section 7 explores the entire hypothesis-discovery cycle by examining the interdependencies among these reasoning steps and showing how abduction, deduction, and induction can be iteratively used to refine more robust hypotheses. For each stage, we discuss methods, benchmarks, evaluations, and identify limitations and future directions. A high-level taxonomy guiding this survey is shown in Figure 1.

Figure 1: Taxonomy for Hypothesis Discovery with LLMs. Our survey categorizes work into four topics based on Peirce's definition of hypothesis discovery: Generation (creating hypotheses that explain given observations with abduction), Application (deducing new observations from established hypotheses with deduction), Validation (verifying and refining hypotheses against new evidence with induction), and Integrated Hypothesis Discovery (examining the dynamic interdependencies among these components in a continuous, iterative process). The taxonomy tree organizes representative work as follows:
- Hypothesis Generation w/ Abduction (§4). Method (§4.1): natural language hypotheses via prompt methods (Wiegreffe et al., 2022; Qi et al., 2024; Yang et al., 2024a), RAG methods (Hu et al., 2024; Yang et al., 2025; 2024b; Xiong et al., 2024; Chai et al., 2024), and human-supported generation (Zhao et al., 2024; Si et al., 2024; Pu et al., 2025); formal language hypotheses with formal observations (Young et al., 2022; Nguyen et al., 2023) or NL observations (Cheng et al., 2024; Wang et al., 2024; Qiu et al., 2024; il Lee et al., 2025; Movva et al., 2025). Evaluation (§4.2): natural language hypotheses via prediction-based (Sinha et al., 2019; Weston et al., 2015; Balepur et al., 2024; Shi et al., 2023; Wang et al., 2025), ground-truth-based (Yang et al., 2024a; Qi et al., 2024; Movva et al., 2025; Yang et al., 2024b), and human evaluation (Zhao et al., 2024; Yang et al., 2024b; Hu et al., 2024; Qi et al., 2024; Yang et al., 2025); formal language hypotheses via prediction-based (Rule, 2020; Chollet, 2019; Liu et al., 2024; Li et al., 2025a; Chen et al., 2024) and ground-truth-based (Bowen et al., 2024; Hua et al., 2025; Young et al., 2022) evaluation.
- Hypothesis Application w/ Deduction (§5). Method (§5.1): natural language hypotheses via LLM as FL parser (Pan et al., 2023; Olausson et al., 2023; Kalyanpur et al., 2024), fine-tune-based (Sun et al., 2024), and prompt-based (Sun et al., 2024; Cai et al., 2025; Ling et al., 2023) approaches; formal language hypotheses: N/A. Evaluation (§5.2): natural language hypotheses via prediction-based evaluation (Yu et al., 2024b; Mu et al., 2024; Cai et al., 2025; Sun et al., 2024); formal language hypotheses: N/A.
- Hypothesis Validation w/ Induction (§6). Method (§6.1): natural language hypotheses via prompt-based (Lampinen et al., 2022; Sun et al., 2024) and fine-tune-based (Zhao et al., 2023; Chan et al., 2023; Sun et al., 2024) approaches; formal language hypotheses via prompt-based approaches (He & Lu, 2024). Evaluation (§6.2): natural language hypotheses via true/false-based (Lampinen et al., 2022; Rudinger et al., 2020; Zhang et al., 2025) and selection-based (Bhagavatula et al., 2020; Jiang et al., 2023; Del & Fishel, 2023) evaluation; formal language hypotheses via ground-truth-based evaluation (He & Lu, 2024).
- Entire Hypothesis Discovery (§7): passive hypothesis discovery (§7.1: Zhu et al., 2024; Zhou et al., 2024), proactive hypothesis discovery (§7.2: Li et al., 2024; Jung et al., 2022; Seals & Shalin, 2024; Conti et al., 2024), and hypothesis discovery with real-world simulation (§7.3: Xu et al., 2023; Wang et al., 2022a; Jansen et al., 2024; He et al., 2024).
2 Background
Before LLMs, most AI systems stored knowledge as handcrafted symbols and rules. That format works well for deduction, because most of the problems we need to solve with symbolic AI systems work with limited premises and countable task-specific knowledge; for example, questions in the ProofWriter (Tafjord et al., 2021) and FOLIO (Han et al., 2024) benchmarks are limited to fewer than a hundred premises. However, abduction and induction are different: they call for generating and validating many tentative explanations inspired by vast commonsense or expert domain knowledge (such as weather patterns, social norms, or physics) and for updating beliefs as new observations arrive. Handling these reasoning tasks with symbolic AI meant writing and maintaining vast, interlocking rule bases, an effort so costly that few projects moved beyond toy domains (Yang et al., 2024c). Consequently, the research landscape remained dominated by deductive tasks (Yu et al., 2024a; Liu et al., 2025; Huang & Chang, 2023).

Figure 2: On the left-hand side, a) illustrates Peirce's framework for hypothesis discovery through abduction, deduction, and induction. The process begins with abduction, which generates explanatory hypotheses based on an initial set of observations. Deduction is then used to apply these hypotheses and derive predictions. Induction evaluates how well the predicted observations align with actual outcomes, updating the confidence of the hypotheses or rejecting those that are no longer valid. This process is iterative: validated hypotheses may be refined through further rounds of abduction using updated observations, gradually leading to more robust theories. On the right-hand side, b) provides a simple example that illustrates this process.

LLMs have transformed this landscape. Trained on vast corpora, they implicitly absorb broad commonsense and domain knowledge, exhibiting strong reasoning abilities on complex, natural-language tasks (Yang et al., 2024c). With a simple text prompt, we can now ask them to carry out abduction or induction and even inspect their intermediate reasoning steps (Li et al., 2024; Jung et al., 2022), exposing the latent information they rely on. This advancement has made it practical to study and deploy defeasible reasoning (Yang et al., 2024c). Defeasible reasoning refers to forms of reasoning, such as abduction and induction, that yield probable conclusions that remain open to revision as new evidence emerges. This shift has fueled a wave of NLP research that places such flexible reasoning at the heart of AI progress (Liu et al., 2025; Huang & Chang, 2023).

2.1 Hypothesis Discovery
Hypothesis discovery, or rule learning, the cyclical process of formulating hypotheses, gathering evidence, validating or refuting them, and ultimately establishing robust theories, lies at the heart of scientific progress (Eger et al., 2025). Early humans, for example, hypothesized that the Earth was flat based on everyday observations. Later, Eratosthenes measured shadow angles at different locations, obtaining evidence that suggested the Earth's surface was curved. This evidence challenged the flat-Earth hypothesis, and subsequent findings, notably Magellan's circumnavigation, conclusively confirmed the Earth's roundness.
Even with today's sophisticated
instruments, researchers continue to iterate this loop in new domains, validating and refining theories as new data emerges. Today, there is growing interest in whether LLMs can autonomously generate, apply, and validate hypotheses from observations represented in natural language, mirroring this iterative process to achieve interpretable and adaptive hypothesis discovery. Although many studies have explored individual steps of hypothesis discovery, their efforts tend to be scattered across abduction, deduction, and induction, with insufficient attention to how these forms of reasoning interconnect to drive genuinely iterative, hypothesis-driven discovery.

2.2 Reasoning
Reasoning is central to hypothesis discovery. Researchers have historically debated how best to categorize reasoning into clear, operational types. Different frameworks each have strengths and limitations (Harman, 1965; Douven, 2021; Bacon, 1878; Laudan, 1971; Mill, 2024; Stadler, 2011; Popper, 2005; Okoli, 2023). In this survey, we adopt Charles Peirce's definition of reasoning (Peirce, 1974; Burks, 1946; Minnameier, 2004), emphasizing abduction, deduction, and induction as separate but interrelated processes.

We choose Peirce's framework for three main reasons. First, clarity: unlike many other approaches, Peirce explicitly differentiates among the three reasoning types, preventing confusion, such as the common conflation of abduction and induction. Second, practicality: Peirce's model aligns each form of reasoning directly with a distinct phase in the hypothesis discovery cycle: abduction for generating hypotheses, deduction for applying these hypotheses, and induction for validating them. This clear mapping makes his framework particularly suitable for systematically studying the entire process of hypothesis discovery, rather than isolated reasoning components. Finally, granularity: Peirce's framework breaks down the scientific discovery process into well-defined, finer-grained steps, facilitating detailed analysis and enabling more structured evaluation.

Abductive Reasoning is the process of forming explanatory hypotheses to make sense of observed phenomena. It is the only form of reasoning that generates entirely new ideas or explanations (Peirce, 1974; Frankfurt, 1958). Given a set of observations, one uses creative thinking and recalls necessary knowledge to come up with hypotheses that plausibly explain these observations. Importantly, a single set of observations can lead to multiple possible explanations. For instance, if you come home and find the floor wet, you might form several possible explanations: perhaps a pipe leaked, or someone spilled water accidentally. Without additional evidence or testing, you can't know for sure which explanation is correct. This illustrates how abduction helps generate potential explanations, which then must be tested further.

Inductive Reasoning is the process of testing whether the hypothesis and deduced consequences really obtain and evaluating to what extent they obtain (Minnameier, 2004; Peirce, 1974). In practice, induction updates a hypothesis's confidence based on new observations, including rejecting it outright, or selects the most convincing candidate from a set of competing hypotheses. Consider the claim "Swans are (100%) white," or linguistically, "All swans are white," formed after observing 99 white swans in Texas. Encountering a black swan in New York contradicts that hypothesis.
Through induction, we recognize this contradiction and lower our confidence in the original claim, adjusting it to "Swans are (99%) white," linguistically expressed as "Almost all swans are white." In this example, although the hypothesis appears "revised," the change is limited to its confidence level; no new explanatory perspective is introduced, and we do not actually form a new hypothesis. By contrast,
abduction can lead us to a fresh explanatory hypothesis with new observations, e.g., "Swans' color depends on their habitat," or "All swans in Texas are white," which introduces new ideas and is not a case of induction. Thus, inductive reasoning verifies or refines existing hypotheses (in terms of confidence) based on accumulating evidence.

Deductive Reasoning is the process of logically deriving specific conclusions from general hypotheses or rules. If the initial hypotheses are true, deduction guarantees that the derived conclusions must also be true. For instance, from the general rule "All swans are white" and the observation "This bird is a swan," we logically conclude "This bird must be white". While traditional deductive reasoning tasks, such as instruction-following and standard problem-solving, have been extensively studied with LLMs (Pan et al., 2023; Wei et al., 2022; Liu et al., 2025; Huang & Chang, 2023), deductive reasoning in the context of hypothesis discovery poses unique challenges. Specifically, it emphasizes inferential rule-following, requiring models to consistently apply hypotheses or rules to derive new and potentially unfamiliar conclusions, even when these hypotheses are counterfactual, unfamiliar, or incorrect. For example, when a flawed hypothesis is introduced in an unfamiliar domain, inferential rule-following requires us to strictly derive its predicted consequence, even if that consequence itself is incorrect. By comparing this consequence with experimental data, we can directly assess the hypothesis's validity and guide its revision. Conversely, if the deductive process is unreliable, we may overlook real contradictions and thus retain invalid hypotheses or discard valid ones. Indeed, recent work shows that although LLMs can demonstrate strong deductive performance on in-distribution tasks, they rely heavily on surface-level pattern matching and fail to generalize their inferential rule-following to novel or counterfactual scenarios (Pu et al., 2025; Mirzadeh et al., 2024; Kang et al., 2024; Yan et al., 2025).

There are also other types of reasoning, such as analogical reasoning (Yuan et al., 2023; Jiayang et al., 2023). However, their function in hypothesis discovery is generally covered by abduction and induction. We will include these additional forms when we encounter a relevant case in the following section.

2.3 Rule Representation: Formal Language vs Natural Language
Table 1: Comparison of Natural vs. Formal Language Representations for the Hypothesis "Sam is a dragon". In natural language, commonsense knowledge is implicitly embedded, and derived knowledge relies on extensive commonsense, potentially resulting in different interpretations depending on background knowledge and context. In formal languages (e.g., FOL or code), the knowledge base must be defined explicitly and cannot fully capture all commonsense knowledge; however, the derived conclusions are deterministic and precise.
- Natural language (English): hypothesis "Sam is a dragon"; knowledge base: English commonsense of "dragon"; derived knowledge: Sam is dangerous.
- Natural language (Chinese): hypothesis "Sam is a dragon"; knowledge base: Chinese commonsense of "dragon"; derived knowledge: Sam brings good fortune and a bountiful harvest.
- Formal language (FOL): hypothesis Dragon(Sam); knowledge base: ∀x(Dragon(x) → Fly(x)), ...; derived knowledge: Fly(Sam).
- Formal language (code): hypothesis Sam = Dragon(); knowledge base: class Dragon: def fly(self): ...; derived knowledge: Sam.fly().

There are many ways to represent hypotheses and rules, which we broadly divide into two categories: formal languages (FL) and natural languages (NL).
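As a concrete illustration of the formal-language row of Table 1, the rule ∀x(Dragon(x) → Fly(x)) and the hypothesis Dragon(Sam) can be written as a tiny code knowledge base; the class and method names mirror the table and are purely illustrative.

```python
class Dragon:
    """Explicit knowledge base: every dragon can fly (the rule forall x: Dragon(x) -> Fly(x))."""
    def fly(self) -> str:
        return "flying"

# Hypothesis in formal form: Sam is a dragon.
sam = Dragon()

# Deduction is deterministic: the derived knowledge Fly(Sam) follows mechanically,
# but softer commonsense ("dangerous", "brings good fortune") is absent unless
# it is encoded by hand.
print(sam.fly())
```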
Formal languages, such as first-order logic and programming languages, are systematic and
rule-bound. After real-world entities are encoded as explicit literals, precise inference rules yield provably correct and sound conclusions, making these systems well suited to deductive reasoning. Yet the encoding process strips away many subtle semantic relationships and commonsense knowledge, limiting the system's ability to handle the creative, defeasible reasoning required for abduction and induction (McCarthy & Hayes, 1981; Reiter, 1980; Hanks & McDermott, 1987; Liu et al., 2025; Yu et al., 2024a; Huang & Chang, 2023). Natural language preserves those nuances and aligns more closely with human cognition, so it is better suited to abductive and inductive tasks. However, its meanings are implicit and context-dependent, making it difficult to define a deterministic reasoning pipeline and reducing the reliability of the resulting inferences (see Table 1). Accordingly, the following sections treat formal-language and natural-language approaches separately, emphasizing how their reasoning methods and evaluation protocols differ.

3 Related Surveys
Most existing work assessing LLM reasoning, both survey syntheses and popular benchmarks such as GSM8K (Cobbe et al., 2021), centres almost exclusively on multi-step deductive tasks, leaving abduction and induction, the engines of hypothesis discovery, largely unexplored. Surveys of the field (Yu et al., 2024a; Liu et al., 2025; Huang & Chang, 2023) highlight the absence of systematic study and clear analytical frameworks for these modes, while benchmark analyses likewise show that abductive and inductive inference receive limited attention (Plaat et al., 2024; Li et al., 2025b). This imbalance obscures our understanding of whether, and to what extent, LLMs can perform the creative, evidence-based reasoning required for hypothesis-driven discovery.

On the other hand, research in the AI for Science domain takes a distinctly horizontal, application-driven approach. This body of work emphasizes practical tasks such as generating research ideas, conducting experiments, and synthesizing reports, often employing domain-specific pipelines tailored to individual scientific fields. However, these studies usually lack a generalizable reasoning framework applicable across different scientific contexts. Furthermore, their evaluation metrics, typically novelty, creativity, or consistency, tend to be subjective, human-centric, and thus difficult to generalize, offering limited theoretical insight into the underlying reasoning mechanisms involved in scientific discovery (Movva et al., 2025; Alkan et al., 2025; Reddy & Shojaee, 2025; Gridach et al., 2025; Bazgir et al., 2025).
Compared to existing application-oriented surveys, our framework thus offers a more abstract, systematic, and theoretically informed approach to understanding and enhancing the role of LLMs in automated scientific discovery.

4 Hypothesis Generation
Every scientific discovery begins with a set of observations, denoted as O = {o1, o2, . . . , on}, that we aim to explain.
Let h represent the generated explanation or hypothesis. The hypothesis generation task can be defined as generating an h such that:

h |= (o1 ∧ o2 ∧ · · · ∧ on)

This notation means that h logically entails the observations. In other words, assuming h holds, it guarantees that all observations o1 ∧ o2 ∧ · · · ∧ on follow. In this survey, we follow Peirce's definitions for reasoning. Accordingly, the primary process used in hypothesis generation is abduction, the method of formulating explanatory hypotheses to account for observed phenomena.

4.1 Method
Despite LLMs' demonstrated prowess in tasks like summarization or code generation, devising robust methods to guide them in hypothesis generation remains an active area of research. Recent work has sought to leverage LLMs' in-context learning and natural language understanding to produce novel or domain-specific hypotheses, spurring the development of new techniques aimed at improving both the quality and applicability of generated hypotheses (Yang et al., 2024c). In this section, we review these methods, spanning approaches that rely solely on prompting, those that integrate external knowledge sources, and those that incorporate human expertise in the loop.

4.1.1 Natural Language Hypothesis Generation with LLMs
Prompt-Based Methods: Due to the lack of large-scale, domain-specific data for hypothesis generation, most abduction approaches rely on prompt-based methods that are easy to deploy and don't require extensive additional data. For instance, when provided with observations expressed in natural language and asked to generate a plausible hypothesis that explains them, both Wiegreffe et al. (2022) and Qi et al. (2024) employ few-shot prompting to guide LLMs in generating hypotheses. Specifically, Wiegreffe et al. (2022) constructs few-shot examples using a triplet format (question, answer, explanation). In solving a task of generating biomedical hypotheses with given observations, Qi et al. (2024) embeds a small set of independent observation-to-hypothesis pairs in the prompt. By showing how each block of biomedical background observations maps to its corresponding hypothesis, the model learns to extract relevant domain cues and generate novel biomedical hypotheses. Their findings indicate that including more examples in the prompt tends to reduce the novelty of the generated hypotheses while increasing their correctness. Furthermore, Yang et al. (2024a) propose a pipeline for hypothesis generation that involves five prompt-based modules: one to generate hypotheses, one to test deductive consistency, one to verify that the hypothesis is not merely a copy of the given context, one to assess its generalizability, and one to determine whether the hypothesis is trivial.

RAG-Based Methods: Labeling massive corpora for pre-training is costly, but assembling a small or medium dataset for Retrieval-Augmented Generation (RAG) is practical, and several studies follow a similar iterative three-step pattern: (i) retrieve task-specific documents, (ii) let an LLM generate or refine hypotheses, and (iii) iterate with LLM feedback. For instance, after a user supplies a seed paper and asks the LLM to generate a worthwhile hypothesis to pursue in research, Hu et al. (2024) query the Scholar API for related work, then repeatedly generate and critique hypotheses, gradually expanding a web of novel ideas. Yang et al. (2025) apply the same loop to 51 top-tier
chemistry papers from 2024: experts first segment each paper into background, inspiration, and hypothesis; an LLM-based multi-agent system (MOOSE-Chem) then retrieves relevant snippets, drafts hypotheses, and scores them for originality. A similar pipeline appears in Yang et al. (2024b), where 50 conference papers are annotated in the same three fields, augmented with thematically similar web documents and 14 survey papers so that the LLM can judge both relevance and novelty. Two variants enrich the retrieval step with structured or fine-tuned knowledge. Xiong et al. (2024) ground each hypothesis in a domain knowledge graph: entities mentioned during generation are checked against graph relations, ensuring the final claims remain fact-consistent. In contrast, Chai et al. (2024) fine-tune a T5 model (Raffel et al., 2020) on curated scientific abstracts and, during inference, retrieve citation contexts and related data; a novelty-guided loop then re-generates until the candidate is both coherent and inventive, outperforming standard transformer baselines.

Human-in-the-loop Hypothesis Generation with LLM: Recent studies show that combining humans with LLM support yields higher-quality, more novel hypotheses than either party working alone. Zhao et al. (2024) report that a human+LLM pipeline surpasses both human-only and LLM-only baselines. Human annotators first draft hypotheses for uncommon observations; carefully designed prompts then guide the LLM to refine each draft by adding details and improving logical flow. Low-quality hypotheses generated by the LLM are filtered with human evaluations and automated metrics such as BERTScore, and the resulting human+LLM collaboration produces the strongest hypotheses. Similarly, Si et al. (2024) involve more than 100 NLP researchers in a three-condition study: LLM-only generation, human-only generation, and LLM generation reranked by humans. Human evaluations rate the human-reranked LLM hypotheses best on Novelty, Excitement, Feasibility, and Effectiveness. Pu et al. (2025) move beyond controlled experiments by introducing IdeaSynth, a copilot-like framework that assists users throughout hypothesis formulation. When a user supplies a high-level hypothesis, IdeaSynth retrieves relevant papers through an API and summarizes key information needed for development. Users interactively edit these summaries with LLM help, adding details and improving clarity. The system then aggregates all refined nodes, employs an LLM to craft a final applicable hypothesis, and supplies suggestions for follow-up experiments.

The quality of natural-language hypothesis generation largely depends on the inherent capabilities of LLMs. Because these models excel at in-context learning, prompt strategies such as Chain-of-Thought (CoT) and Reflexion (Wei et al., 2022; Shinn et al., 2023) can be applied directly to this task. However, unlike computer-vision research, which gained rapid momentum from the ImageNet benchmark, hypothesis generation lacks a comparable, widely recognized task set. The main challenge is therefore the absence of a reliable evaluation task and benchmark for natural-language hypotheses, an issue examined further in Section 4.2.

4.1.2 Formal Language Hypothesis Generation with LLM
One major advantage of formal hypotheses is that once a formal language hypothesis is obtained, we can directly perform inference on it with guarantees of soundness and correctness.
Depending on whether observations are represented in formal or natural language, methods for proposing a formal language hypothesis need to be discussed separately. Formal Language Observations:
When observations are encoded in a formal language, dedicated formal language solvers typically yield clear, white-box solutions that outperform language models. Consequently, using an LLM for these tasks is generally not preferred. Nevertheless, a few early studies in the LLM era have explored this approach. For example, Young et al. (2022) trained a transformer model on FOL abduction 7 tasks, demonstrating that the model can generate FOL hypotheses from formal observations. Similarly, Nguyen et al. (2023) fine-tuned state-of-the-art legal transformers on FOL abduction tasks and found that models pre-trained on natural language legal abduction tasks do not show any performance improvements on FOL hypothesis generation problems. Natural Language Observations : When observations are represented in natural language, traditional symbolic solvers struggle to extract the key information needed for hypothesis generation. With LLMs, however, we can directly generate formal hypotheses. A popular formal language for this purpose is code, as it offers greater flexibility than other symbolic representations like FOL, and LLMs excel at coding. The simplest variant prompts an LLM with an observation set and asks it to produce executable functions as hypotheses that match the input-output pairs; Cheng et al. (2024) follow this pattern, treating each observation as an (x, y)example and evaluating the generated function by execution. Extending this idea, Wang et al. (2024); Qiu et al. (2024) have the LLM create multiple executable hypotheses, run them on the observations, feed the results back to the model, and iterate, discarding weak candidates and refining promising ones until one covers all examples. To encourage diversity, il Lee et al. (2025) first ask the model for a single-word “main concept,” then use that concept to steer subsequent code generation, avoiding the similarity of low-temperature outputs and the degeneration of high-temperature sampling while still producing coherent hypotheses. A complementary line of work probes the model’s internal representations. Using sparse autoencoders (SAE) (Bricken et al., 2023), Movva et al. (2025) isolate neurons activated when the LLM predicts the click rate of Twitter posts and discover that neurons associated with “surprise” or “shock” positively influence the score, supporting the hypothesis that surprising or shocking content tends to receive more clicks. 4.2 Evaluation for Hypothesis Generation Due to LLMs’ strong reasoning abilities and natural language interface, many methods have been proposed for hypothesis generation, and numerous ideas based on everyday human reasoning can be adapted for this purpose (Niu et al., 2024). However, a major challenge remains in establishing a grounded and convincing way to evaluate the quality of the generated hypotheses. 4.2.1 Natural Language Hypothesis Evaluation Although prompting LLMs to generate natural language hypotheses is straightforward, evaluating the qual- ity of these hypotheses is challenging due to the ambiguity inherent in natural language representations. Consequently, a common evaluation method involves either human evaluation or using an LLM to assess the generated hypotheses’ validity (Zhao et al., 2024; Yang et al., 2024b; Hu et al., 2024; Qi et al., 2024; Yang et al., 2025). While human evaluation can provide valuable insights without relying on predefined answers, it is inherently subjective, less reproducible, expensive, and sometimes not
entirely convincing. Therefore, alternative evaluation strategies are needed. Implicit Prediction-based Evaluation: Early benchmarks often relied on question-answering (QA) tasks that required the model to implicitly form a hypothesis to answer a question (Sinha et al., 2019; Weston et al., 2015). For example, consider the observation: “Lily is a swan, Lily is white, Bernhard is green, Greg is a swan. What color is Greg?” To answer correctly, one must infer an implicit hypothesis, such as “All swans are white” or “Most swans are white,” based on the fact that Lily is both a swan and white. Thus, the correct answer is “white.” By verifying whether the model’s answer is “white,” one can indirectly assess its ability to form an appropriate hypothesis and perform reasoning. Similarly, recent work shows that prompting LLMs to generate an intermediate hypothesis and then using that hypothesis for inference yields higher performance on complex tasks (Balepur et al., 2024; Shi et al., 2023; Wang et al., 2025). However, this approach is problematic: the hypothesis may be formed incorrectly, the subsequent inference could be flawed, and the model might arrive at the correct answer through memorization or random guessing rather than proper abductive reasoning. Therefore, success in these tasks does not directly imply that the model possesses superior abductive capabilities, making them unsuitable for reliably evaluating hypothesis generation. Ground Truth-based Evaluation: Some studies build benchmarks with labeled hypotheses so that LLM outputs can be matched directly against references. DEER (Yang et al., 2024a) supplies 1,200 fact–rule pairs, all written in natural language by experts across six topics: zoology, botany, geology, astronomy, history, and physics. Generated hypotheses are compared with the gold rules using token-level mapping metrics like METEOR (Banerjee & Lavie, 2005). In biomedicine, Qi et al. (2024) curate a benchmark with both seen and unseen samples: the seen split contains 2,700 background–hypothesis pairs collected before January 2023, whereas the unseen split has 200 pairs collected after that date. Outputs are evaluated against the ground truth with BLEU and ROUGE (Papineni et al., 2002; Lin, 2004). On synthetic corpora such as WIKI and BILLS (Pham et al., 2024; Zhong et al., 2024), Movva et al. (2025) treat hypothesis generation as identifying the key features that drive a prediction. The model proposes feature sets, which are judged by how well they match and cover the ground-truth features, thereby quantifying the LLM’s ability to isolate causal signals. Despite these efforts, Yang et al. (2024b) note that reference-based metrics such as BLEU, ROUGE, and METEOR assume a single correct answer and therefore struggle to capture the open-ended nature of hypothesis generation; developing fair, reliable metrics remains an open challenge.
allows us to verify correctness deterministically. For example, Bowen et al. (2024) designed formal representations for synthetic grouping tasks to evaluate formal language hypothesis generation. Hua et al. (2025) constructed their benchmark based on deterministic regular functions, providing a procedural framework for evaluating formal hypotheses. Similarly, Young et al. (2022) used first-order logic (FOL) representations, where LLMs were tasked with generating FOL hypotheses to explain given facts, and the outputs were evaluated by comparing them against ground truth hypotheses verified by solvers. Prediction-based Evaluation: Since inference on formal hypotheses is deterministic, a common evaluation method is to test whether the generated hypothesis produces correct outcomes on held-out examples. For instance, Rule (2020) proposes the list function task, where LLMs generate a hypothesis function from observed (x, y) pairs, and evaluation is based on how well the hypothesis predicts hidden pairs. Similarly, Chollet (2019) introduces the Abstraction and Reasoning Corpus (ARC), where tasks involve transforming input grids of colored cells into output grids. The generated function is executed on test inputs, and correctness is determined by exact matches with the target output grids, including grid dimensions. Liu et al. (2024) further propose a benchmark consisting of arithmetic calculations, color token mapping, and Kalamang vocabulary tasks, all evaluated in the same way. Additionally, Li et al. (2025a) construct diverse application scenarios, including list transformations, real-world problems, code generation, and string transformations, where the generated hypothesis is executed on both seen and test observations and the final score aggregates performance across both sets. In a more realistic setting, Chen et al. (2024) extract 102 tasks from 44 peer-reviewed publications, unifying the target output for every task into a self-contained Python program file, accompanied by a set of test cases validated by human experts. LLMs are then asked to read the paper and reproduce the tasks in code, and the generated code is directly evaluated on the prepared test cases. 4.3 Discussion and Future Directions in Hypothesis Generation There exists a significant gap between formal and natural language approaches to hypothesis generation. In natural language hypothesis generation, observations typically originate from recent research papers, and generated hypotheses can potentially inspire novel research ideas with tangible real-world impacts (Eger et al., 2025). However, rigorous and reliable evaluation methods for such hypotheses remain underdeveloped. Token-based metrics, such as BLEU or ROUGE, do not effectively capture the qualitative aspects of open-ended hypothesis generation (Yang et al., 2024b). Meanwhile, alternative approaches involving human or LLM-based evaluations are costly, subjective, and prone to inconsistencies. Conversely, formal language hypothesis generation benefits from grounded, objective evaluation methods. Nevertheless, existing formal tasks often involve simplified or artificial scenarios that fail to reflect the complexity and nuance inherent in real-world applications. Consequently, the field faces a trade-off: formal representations facilitate robust evaluation but risk omitting critical real-world nuances, while natural language representations capture real-world complexity yet lack rigorous evaluation mechanisms.
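The rigor that execution-based scoring provides can be illustrated with a short sketch. The toy list-function task, the candidate hypothesis, and the seen/held-out split below are invented for illustration and are not drawn from any of the benchmarks above; the point is only that correctness reduces to running the generated function.

# A candidate hypothesis produced by a model for a toy list-function task,
# e.g. "return the input list sorted in descending order".
def candidate_hypothesis(xs):
    return sorted(xs, reverse=True)

# Seen pairs (used to induce the hypothesis) and held-out pairs (used to score it).
seen_pairs = [([3, 1, 2], [3, 2, 1]), ([5, 4], [5, 4])]
held_out_pairs = [([1, 9, 7], [9, 7, 1]), ([2, 2, 3], [3, 2, 2])]

def accuracy(hypothesis, pairs):
    # Exact-match scoring: the prediction must equal the target output.
    correct = sum(1 for x, y in pairs if hypothesis(x) == y)
    return correct / len(pairs)

print(accuracy(candidate_hypothesis, seen_pairs))      # 1.0
print(accuracy(candidate_hypothesis, held_out_pairs))  # 1.0 when the rule generalizes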
To address this challenge, future research in hypothesis generation could focus on two key directions. Firstly, there is an urgent need to develop novel evaluation methodologies tailored specifically for natural language hypothesis generation. Current implicit prediction-based evaluations suffer from inherent limitations, and ground truth-based evaluations remain inadequate due to reliance on token-level similarity metrics. Alternative evaluation strategies, potentially involving multi-dimensional human assessments,
structured feedback mechanisms, or hybrid evaluation frameworks integrating automated and expert evaluations, merit exploration. Secondly, bridging the gap between formal and natural language hypothesis generation is crucial. Leveraging code as an intermediate representation offers a promising path forward, combining evaluative rigor with expressive capability. However, existing code-based hypothesis generation benchmarks tend to focus on oversimplified problems that lack relevance to practical scenarios. Thus, developing realistic, code-based hypothesis-generation tasks grounded in established research papers, real-world datasets, and open-source repositories presents a compelling and valuable direction for future research (Chen et al., 2024). 5 Hypothesis Application Given a hypothesis h, hypothesis application is defined as the derivation of a new observation o_new such that: h |= o_new. In some cases, the hypothesis may depend on a context c, so that h can be viewed as a function of c. In this context-dependent formulation, hypothesis application is defined as deriving a new observation o_new such that: h(c) = o_new. In our work, we follow Peirce’s definitions for reasoning. Accordingly, the primary process used in hypothesis application is deduction, the method of deriving necessary consequences from a given hypothesis. Notably, when a hypothesis is expressed in a formal language, directly applying it with a deterministic solver yields a correct and sound prediction. Therefore, there is little motivation to leverage LLMs for deductive reasoning on formal hypotheses. This section, consequently, focuses on natural language hypothesis application and evaluation. 5.1 Method LLM as Formal Language Parser: Since formal symbolic solvers yield sound and correct predictions, Pan et al. (2023); Olausson et al. (2023); Kalyanpur et al. (2024) treat LLMs as formal language parsers, using them to translate natural language hypotheses into formal representations like FOL and code before applying a formal inference procedure. This translation significantly improves deductive correctness. However, these methods have primarily been evaluated on benchmarks such as ProofWriter (Tafjord et al., 2021) and FOLIO (Han et al., 2024), where the questions are already closely aligned with formal language. For example, given the input “Fact1: Eric is young, Fact2: Dave is white, Rule 10: if someone is young and not kind then they are big”, translating this into FOL is relatively straightforward. It remains unclear whether LLMs can reliably parse more complex, everyday natural language into formal representations. Fine-Tuning-Based Method: Fine-tuning is a common approach to improve model performance when corresponding training data is available. Sun et al. (2024) proposed a synthetic “StringGame” task in which ground truth hypotheses and answers are provided. Leveraging the CoT approach, an LLM is prompted to generate multiple candidate hypothesis application trajectories along with their results. By comparing these results with the ground truth, the trajectories that produce correct outcomes are identified and stored for fine-tuning. The resulting fine-tuned model then demonstrates improved performance in both hypothesis application and instruction following.
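As a rough illustration of this data-construction step, the sketch below samples several candidate reasoning trajectories, keeps only those whose final answers match the ground truth, and collects them as fine-tuning data. The generate_trajectory stand-in, the toy string-reversal rule, and the sampling budget are our own assumptions, not details of Sun et al. (2024).

import random

def generate_trajectory(question, rule):
    # Stand-in for sampling a CoT trajectory from an LLM; here we fake it by
    # sometimes applying the rule correctly and sometimes making an error.
    correct = rule(question)
    answer = correct if random.random() < 0.6 else correct + "-wrong"
    return {"question": question, "reasoning": "...", "answer": answer}

def build_finetuning_set(examples, rule, samples_per_question=8):
    kept = []
    for question, gold in examples:
        for _ in range(samples_per_question):
            traj = generate_trajectory(question, rule)
            # Keep a trajectory only if its final answer matches the ground truth.
            if traj["answer"] == gold:
                kept.append(traj)
    return kept

# Toy "string game" rule: reverse the string.
rule = lambda s: s[::-1]
examples = [("abc", "cba"), ("hypothesis", "sisehtopyh")]
print(len(build_finetuning_set(examples, rule)))  # number of retained trajectories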
Prompt-Based Method: Although CoT prompting has improved performance on multi-hop question answering tasks, Sun et al. (2024) found that it does not directly enhance performance in hypothesis application. Therefore, new prompting methods have been designed specifically for this purpose. Inspired by mathematical induction, Cai et al. (2025) propose quantifying the difficulty of a question so that the LLM can solve it incrementally, from simpler versions to
more complex ones, ultimately arriving at the correct answer. In another approach, Ling et al. (2023) design a pipeline that supervises the correctness of each reasoning step during hypothesis application. First, the LLM indexes all premises; then it is asked to label the minimal set of premises required to derive new facts. This pipeline generates multiple candidate hypothesis application trajectories, and by having the LLM vote on each step, the most convincing deductive trajectory is selected. 5.2 Evaluation for Hypothesis Application Although many benchmarks and evaluation methods exist for general deductive reasoning, such as question answering and mathematical tasks like GSM-8k (Cobbe et al., 2021), these question types do not explicitly test the formation of new facts based on given hypotheses or rules. Evaluating the correctness of a natural-language deductive trajectory is challenging because annotated reasoning paths for hypothesis application are scarce, and the same result can follow from different reasoning paths. As a result, most evaluations use prediction-based checks. We assume that, given a correct hypothesis and a known ground-truth result, a valid deduction will reproduce that result. By comparing the model’s deduced outcome with the ground truth, we can judge whether its deduction is correct. For example, take the hypothesis “Coin flips are independent and identically distributed (i.i.d.) with a 50 percent chance of heads.” When asked, “After three consecutive heads, what is the probability of a tail on the fourth flip?,” a flawed model might claim the chance of a tail has increased. In fact, under the i.i.d. assumption, the probability remains 50 percent. Supplying the correct hypothesis and comparing the model’s answer to the true result lets us evaluate whether its deductive reasoning is valid. Building on this idea, Yu et al. (2024b) create the TURTLEBENCH benchmark, inspired by the “Turtle Soup” game in which players deduce a story’s hidden explanation by asking yes/no questions; in TURTLEBENCH, the LLM instead answers human-annotated questions with “True,” “False,” or “Not Relevant,” across 1,532 high-quality question-answer pairs sourced from an online platform, testing whether it can fully follow a story and provide accurate answers. Similarly, Mu et al. (2024) introduced the RULES benchmark, comprising 14 rule-following scenarios, each paired with concise test cases and programmatic evaluation functions that objectively assess adherence to specified rules. In addition, Cai et al. (2025) presented the Holiday Puzzle benchmark, which features multiple holiday schedule scenarios ranging from simple single-week planning to multi-phase arrangements and complex date arithmetic tasks, again using test cases and evaluation functions to verify correct computation of extra holiday rest days under provided rules. Moreover, Sun et al. (2024) constructed RuleBench to evaluate not only whether models can produce correct answers based on factual rules but also their ability to apply counterfactual rules, designed to yield incorrect outcomes. Their experiments show that while LLMs achieve near-perfect accuracy on factual rules, their performance drops dramatically under counterfactual rules, revealing a significant gap in counterfactual rule-following capability. 5.3 Discussion and Future Directions in Hypothesis Application While traditional deductive reasoning tasks (e.g., question answering, problem solving) in LLMs have been widely
studied, the capability for hypothesis application remains significantly underexplored. According to Sun et al. (2024), hypothesis application involves inferential rule-following, requiring models to consistently apply given hypotheses to derive novel knowledge in unfamiliar domains. Robust hypothesis application is critical to hypothesis discovery, as hypotheses must generalize to scenarios with unseen observations. However, existing LLMs frequently struggle to extend hypotheses beyond familiar contexts, thus limiting the evaluation of hypothesis generation. Future research could therefore focus on rigorously evaluating LLMs’ hypothesis application, both factual and counterfactual, in novel scenarios. Developing benchmarks explicitly designed for hypothesis-driven inference in unfamiliar domains could reveal important insights into model adaptability and generalization. Additionally, current evaluations of hypothesis application mainly rely on outcome-based correctness, comparing predicted results to ground truth given correct hypotheses. However, incorrect reasoning may still lead to correct predictions in natural-language contexts. Although Ling et al. (2023) propose improving hypothesis application by intervening in reasoning trajectories, a large-scale benchmark specifically designed to evaluate trajectory-based hypothesis application remains absent. 6 Hypothesis Validation According to Peirce, induction validates a hypothesis by updating its confidence when new evidence appears. However, in studies that focus exclusively on induction, tasks are typically one-off: a hypothesis (or set of hypotheses) and a collection of observations are provided, and there is no iterative updating of confidence. A simplified framework for hypothesis validation treats it as a multiple-choice problem: given observations O = {o1, o2, . . . , on} and a set of hypotheses H = {h1, h2, . . . , hm}, the model selects the most plausible hypothesis. In simpler scenarios, where only one hypothesis is provided, the model determines whether the hypothesis correctly explains the observations. As discussed in the next section, when combined with deduction and abduction, induction can subsequently be used to iteratively update the confidence in the hypothesis. Natural language representations add significant complexity to induction. In formal language settings, all necessary information is explicitly provided, and reasoning follows rigorous, well-defined steps. In contrast, validating a natural language hypothesis often requires commonsense knowledge and interpretation of nuanced language. For example, consider the observations “Neil wanted to see the mountains of Asia” and “Neil loved being so close to the mountains in Nepal,” with candidate hypotheses “Neil booked a trip online” and “Neil took a trip to see the Rocky Mountains instead.” Here, the nuanced meaning of the term “instead” and the geographic relationships require careful analysis and may lead to different conclusions. Indeed, Zhang et al. (2020) report that, when verifying their dataset, in which five annotators judged the plausibility of handwritten hypotheses, disagreements occurred in 62.34% of 1,365 explanations, underscoring the challenge of natural language hypothesis validation. 6.1 Method 6.1.1 Formal Language Hypothesis Validation He & Lu (2024) introduce the CauseJudger framework, which leverages LLMs at every stage to validate candidate hypotheses.
First, an LLM transforms the natural language inputs into an FOL-based representation by integrating each candidate hypothesis into the premises. Next, an LLM filters out irrelevant premises and rules. Finally, another LLM performs forward reasoning to decide which hypothesis explains the observations. 6.1.2 Natural Language
Hypothesis Validation Prompt-Based Method: Lampinen et al. (2022); Sun et al. (2024) employ a few-shot prompting approach for hypothesis validation. In this method, case triplets, consisting of an observation, a hypothesis, and its corresponding validity, are provided to the model, which then answers a hypothesis validation question. Although this approach improves performance, Sun et al. (2024) reports that the performance boost is limited. Their experiments further indicate that fine-tuning outperforms few-shot prompting. Fine-Tuning-Based Method: Since hypothesis validation essentially constitutes a classification problem, many Natural Language Inference (NLI) datasets can be adapted into hypothesis validation tasks. Consequently, fine-tuning is a popular method in this context. For example, Zhao et al. (2023); Chan et al. (2023); Sun et al. (2024) fine-tune models to select the correct hypothesis from a set of hypotheses based on new observations. 6.2 Evaluation for Hypothesis Validation 6.2.1 Formal Language Evaluation Along with the CauseJudger framework, He & Lu (2024) also proposed the CauseLogics dataset. Based on the required formal reasoning depth, the dataset is divided into four difficulty levels for hypothesis validation tasks, with 50,000 samples per level. Each hypothesis is assigned a binary ground-truth label indicating whether it correctly explains the observations. 6.2.2 Natural Language Evaluation Binary-Classification-Based Evaluation: Lampinen et al. (2022) chose a subset of 40 tasks from the crowd-sourced benchmark BIG-bench (bench authors, 2023) and constructed their own benchmark specifically for hypothesis validation. Each data sample consists of an observation, its corresponding hypothesis, and a ground truth label indicating whether the hypothesis truly explains the observation. Hypothesis validation using natural language is inherently challenging because the implicit information and required common-sense background are not explicitly stated. This often leads different individuals to draw different conclusions when validating a hypothesis based solely on recalled information. Rudinger et al. (2020) mitigate this issue by adopting a different strategy. Instead of asking annotators to directly judge whether an observation explains a hypothesis, they ask the model to determine if a given observation weakens or strengthens the hypothesis. Specifically, they sample observation–hypothesis pairs from existing datasets and then manually craft two types of sentences: one that acts as a “strengthener” (increasing the likelihood of the hypothesis) and one that acts as a “weakener” (decreasing the likelihood of the hypothesis). Their validation process showed that the strengthening and weakening effects are consistent across different annotators. During evaluation, the model is required to decide whether a new observation strengthens or weakens the hypothesis. This approach aligns with the paper’s goal of modeling defeasible inference by leveraging explicit contextual updates rather than relying on potentially variable human interpretations of implicit information. Furthermore, Zhang et al. (2025) extended this task to include visual observations. In their extension, given a visual observation and a natural language hypothesis, an LLM is tasked to determine whether the provided sentence serves as a strengthener or a weakener. Multiple-Choice-Based Evaluation: Bhagavatula et al.
(2020) introduce the ART benchmark, comprising roughly 20k narrative contexts where each sample includes two time-ordered observations, one depicting a story’s start (o1) and the other its outcome (o2), alongside two hypotheses: a plausible explanation (h+) and a less plausible one (h−), challenging
models to choose the best explanatory hypothesis and enabling adaptation to hypothesis-generation tasks evaluated against ground-truth explanations. Similarly, Jiang et al. (2023) present the BRAINTEASER benchmark of about 1.1k lateral-thinking puzzles, each offering a question with multiple-choice answers, one that defies commonsense and several conventional distractors, in both sentence (narrative) and word (meaning-alteration) formats to test creative reasoning, with additional semantic and context reconstruction variants assessing reasoning consistency and robustness across formulations. Moreover, Del & Fishel (2023) introduced the True Detective benchmark for deep hypothesis validation, featuring 191 long-form detective puzzles (≈1200 words each) from the “5 Minute Mystery” platform, where models (and humans) select the correct explanation from 4–5 options, human accuracy averages 47%, top solvers exceed 80%, and each puzzle includes golden chain-of-thought explanations detailing the reasoning steps that lead to the correct answer. 6.3 Discussion and Future Directions in Hypothesis Validation Previous literature often conflates hypothesis generation and hypothesis validation, primarily due to ambi- guity inherent in the IBE paradigm. Within IBE-based approaches, hypothesis validation typically appears 13 as an implicit intermediate step, where selecting the “best” hypothesis is frequently based on unclear or sub- jective criteria without dedicated, independent evaluation. However, adopting Peirce’s explicit distinction between abduction, deduction, and induction clearly separates validation from generation, underscoring the need for dedicated research on validating hypotheses against newly observed evidence. Current validation methodologies predominantly adopt end-to-end metrics that only assess final correctness, neglecting the reasoning processes and commonsense knowledge required to validate hypotheses in realistic settings. The subjective nature of natural language, coupled with different interpretations of observations, highlights the necessity for richer evaluative frameworks. Future benchmarks should incorporate detailed intermediate Chain-of-Thought data, capturing explicit reasoning steps humans take when validating hy- potheses, such as recalling relevant commonsense knowledge and performing nuanced inference. Evaluations should then emphasize consistency between the reasoning process and available commonsense context rather than relying solely on superficial similarity to reference answers. Such benchmarks would greatly enhance our understanding of hypothesis validation and better reflect the complexities of human-like reasoning. 7 Hypothesis Discovery Although many works introduced in the previous sections propose methods and evaluation metrics, they mainly focus on individual phases of Hypothesis Discovery —Hypothesis generation (Abduction 4) , Hy- pothesis application (Deduction 5) , and Hypothesis validation (Induction 6) . However, in real-life Hypoth- esis Discovery , these reasoning stages are not independent and must be treated holistically. Initially, we form hypotheses based on limited observations using abduction, which subsequently informs the application of these hypotheses through deduction, enabling the collection of further evidence. Concurrently, induction continuously evaluates and resolves inconsistencies arising between newly obtained observations and earlier hypotheses. 
This iterative interplay means that each hypothesis formulated, action taken, observation gathered, and inconsistency identified dynamically shapes and reshapes our evolving understanding, influencing subsequent reasoning steps and contributing to diverse interpretations of the world. Treating any single reasoning phase in isolation oversimplifies hypothesis discovery. For example, although Bowen et al. (2024) evaluated every reasoning step, they handled each step separately and thus failed to assess the true rule-learning capability of LLMs. Consequently, integrating abduction, deduction,
and induction into a unified learning loop remains both challenging and largely understudied, yet it is the ultimate goal for constructing end-to-end agents capable of scientific discovery. Despite a few studies that acknowledge the interdependence among reasoning types and allow models to refine hypotheses iteratively, they still overlook two decisive aspects of real-world hypothesis discovery. First, most benchmarks remain static and passive: they hand agents a fixed set of observations deemed sufficient to reach the correct hypothesis, whereas real-life hypothesis discovery requires actively seeking additional evidence. Second, even in settings that allow proactive information gathering, the granularity of the action space is still too coarse: agents fetch observations via one-shot “recall” or “web-search” commands, whereas real scientists must strategically plan and carry out precisely staged experiments, often designing specialized equipment at each step. Recognizing these limitations, we categorize existing hypothesis-discovery research into three classes (see Fig. 3). [Figure 3: Differences and similarities among different types of hypothesis discovery tasks] 7.1 Passive Hypothesis Discovery In this type of study, LLMs generate, apply, and validate hypotheses iteratively. However, the observations are provided by a fixed dataset. The LLM does not need to worry about which observations it will receive. Instead, it simply reasons based on the given data, passively receiving and processing the information provided. Zhu et al. (2024) proposed the Hypotheses-to-Theories (HtT) framework to generate formal hypotheses (e.g., “if A then B”) by leveraging existing benchmarks (Sinha et al., 2019; Wang et al., 2022b; Rule, 2020). In HtT, LLMs generate a hypothesis and propose learned rules to solve each question. When a new question is received, the model first formulates a preliminary hypothesis based on the context. It then proposes candidate rules that might lead to the correct answer. These candidate rules are applied to the problem and verified against the ground truth. Rules that consistently yield correct predictions are retained and added to the rule library, while ineffective ones are discarded. Iteratively, after processing all questions in the benchmark, the LLM builds a rule library containing effective rules for solving the questions. Zhou et al. (2024) proposed the HypoGeniC framework. Unlike HtT, HypoGeniC is evaluated on more realistic datasets such as Shoe Sales, Deceptive Reviews (Granhag & Vrij, 2005), Headline Popularity (Matias et al., 2021), and Tweet Popularity (Tan et al., 2014). Due to the complexity of real-world data, the generated hypotheses are more nuanced and expressed in natural language. Similar to HtT, HypoGeniC begins by generating a set of candidate hypotheses from a small number of examples. As new observations are processed, each hypothesis is used to make predictions and is assigned a reward based on its accuracy. The system dynamically updates the confidence of each hypothesis; those that consistently perform poorly are removed from the hypothesis bank. New hypotheses are generated from examples that existing hypotheses fail to explain, allowing the model to refine and expand its understanding over time. Both HypoGeniC and HtT simplify hypothesis discovery by relying on benchmark questions that include ground-truth answers. This configuration allows an external algorithm, not the LLMs themselves, to validate generated hypotheses and update their confidence based on the correctness of predictions.
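A minimal sketch of such an externally validated loop is shown below. The llm_propose and predict stand-ins, the reward threshold, and the toy headline data are illustrative assumptions rather than details of HtT or HypoGeniC; the sketch only captures the shared pattern of predicting on each labeled example, updating a per-hypothesis reward against the ground truth, and pruning weak hypotheses.

def llm_propose(examples):
    # Stand-in for an LLM call that proposes natural-language hypotheses
    # from a few examples; here we return fixed toy candidates.
    return ["longer headlines get more clicks", "headlines with numbers get more clicks"]

def predict(hypothesis, x):
    # Stand-in for using a hypothesis to make a prediction about example x.
    if "numbers" in hypothesis:
        return any(ch.isdigit() for ch in x)
    return len(x) > 40

def run_passive_loop(stream, seed_examples, min_reward=-2):
    bank = {h: 0 for h in llm_propose(seed_examples)}
    for x, y in stream:
        for h in list(bank):
            # External validation: reward the hypothesis if its prediction
            # matches the ground-truth label, penalize it otherwise.
            bank[h] += 1 if predict(h, x) == y else -1
            if bank[h] < min_reward:
                del bank[h]  # prune consistently poor hypotheses
    return bank

stream = [("7 habits of highly effective readers", True),
          ("a quiet afternoon", False)]
print(run_passive_loop(stream, seed_examples=stream[:1]))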
In real-world scenarios, where no ground-truth answers are available, these frameworks become inapplicable and would require substantial adaptation. 7.2 Proactive Hypothesis Discovery In real-life hypothesis discovery, we do not start with a predefined set of
observations that continuously propose new insights. Instead, once an initial hypothesis is formed, we proactively recall our memories or explore further to gather new observations that either strengthen or weaken the hypothesis, allowing us to verify and refine our ideas. Given a hypothesis, Li et al. (2024) and Jung et al. (2022) propose two proactive methods for hypothesis discovery that both leverage the LLM’s parametric memory to generate evidence that either strengthens or weakens the hypothesis. In Hypothesis Testing Prompting, the model directly uses its internal reasoning to evaluate the generated evidence, determining which pieces are more convincing, and then decides whether the hypothesis is correct based on the balance of evidence that strengthens or weakens it. In contrast, Maieutic Prompting iteratively constructs a tree of evidence by generating both strengthening and weakening explanations. It then employs the LLM to assign a belief score (reflecting the model’s confidence in the evidence) and a consistency score (measuring how well the evidence aligns with the hypothesis). Finally, a MAX-SAT solver is applied to select the subset of evidence that maximizes the overall scores, thereby determining whether to accept or reject the hypothesis. Different from relying solely on an LLM’s parametric memory to generate new evidence, Seals & Shalin (2024) propose a minimal setting for proactive hypothesis discovery. Inspired by the Wason Task from cognitive science, this task challenges LLMs to prove a formal language hypothesis of the form “if p then q.” Here, both p and q are objects described in natural language, for example, “if a person is a man, then he drinks alcohol.” The task provides four cards, each with two sides representing different attributes. Initially, one side of each card is shown, displaying p, q, ¬p, and ¬q, while the other side reveals the state of another attribute. To rigorously validate the hypothesis “if p then q,” one must flip the p card to confirm that its hidden side is q (modus ponens) and flip the ¬q card to check that its hidden side is ¬p (modus tollens). Flipping only these two cards provides sufficient evidence for the hypothesis, while the other two cards do not offer the necessary information. Thus, in this benchmark, by proactively flipping two cards, we can determine whether the LLM can correctly identify natural language expressions of p and q and validate the hypothesis using a minimal action space. Moreover, Conti et al. (2024) propose APEx, a multimodal automatic benchmarking framework that evaluates hypotheses about large multimodal models in a fully automated and iterative fashion. For example, to test a hypothesis such as “a model is able to identify graffiti-styled images,” APEx first leverages text-to-image retrieval and generation tools to create a tailored set of test images. It then employs a range of transformation tools to perform image augmentation, introducing variations that challenge the models’ robustness. In an iterative experimental loop, the framework executes these experiments on a library of models, analyzes the results, and refines the testing protocol accordingly. 7.3 Complete Loop: Real-World Hypothesis Discovery Simulation Other works equip LLM agents with interactive environments that more closely mirror the complexity of real-world hypothesis discovery by combining planning, acting,
and evidence collection. For example, Xu et al. (2023) construct a Minecraft-like world in which a “vandal” agent performs up to 26 types of actions (e.g., moving, eating, crafting) to achieve a hidden goal (such as collecting lava or crafting a particular item) and leaves behind tracks as evidence. A detective agent, driven by reinforcement learning to maximize information gain, then gathers those tracks and presents them to an LLM, which must answer a multiple-choice question about the vandal’s original objective. Because evidence collection relies on an RL policy rather than LLM planning, however, this setup evaluates only the model’s capacity to interpret evidence, not its ability to proactively generate and test hypotheses in a dynamic setting. Building on this approach, Wang et al. (2022a) introduce 30 scientific tasks drawn from five topics in fifth-grade curricula, ranging from measuring the friction coefficient of an inclined plane to testing electrical conductivity. Here, agents must execute long action sequences and apply deductive reasoning grounded in established theories and definitions to complete each task. Likewise, Jansen et al. (2024) design 120 experiments across eight subjects (e.g., Chemistry, Archaeology), each with three difficulty levels, and allow 14 coarse-grained actions (such as “take,” “put,” and “move”). Agents are evaluated on (1) task completion, (2) execution of key experimental steps, and (3) accurate hypothesis discovery compared to a ground truth. While these virtual labs simulate multi-step procedures and test hypothesis application, their restricted action spaces support only qualitative inference and preclude the fine-grained interventions needed for quantitative rule-learning. To address these limitations, He et al. (2024) propose puzzle environments in which agents can input arbitrary integers or letters and receive tailored feedback based on a hidden rule. In this framework, an LLM must iteratively probe the environment, uncover the underlying quantitative rule, and solve the puzzle. Performance is assessed not only by whether the agent solves the puzzle but also by human judgments of the clarity and rigor of its reasoning steps, thereby offering a finer-grained evaluation of both quantitative hypothesis generation and the quality of the model’s deductive process. 7.4 Discussion and Future Directions in Hypothesis Discovery Hypothesis discovery fundamentally differs from isolated reasoning tasks by requiring iterative learning and continuous refinement of hypotheses within dynamic, evolving contexts. Particularly in real-world simulation scenarios, the decisions and actions taken by an LLM may lead to entirely different trajectories of observation collection, varied learning efficiencies, and alternative hypotheses. Building effective benchmarks for hypothesis discovery requires constructing rich, realistic environments capable of simulating real-world complexities. These environments should contain diverse, comprehensive action spaces and varied observational feedback mechanisms. Compared to traditional static, label-based datasets, creating such benchmarks is significantly more labor-intensive, demanding at least two key components: (1) a set of rules unknown to the LLM that can be learned within the environment, and (2) a sufficiently expressive action space that allows the LLM to interact with the environment, receive feedback, and gather new information.
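A minimal sketch of these two components might look as follows. The hidden rule, the integer-probing action space, and the naive agent are invented for illustration and do not correspond to any specific benchmark discussed above.

class HiddenRuleEnvironment:
    """Component (1): a rule unknown to the agent, learnable only via interaction."""
    def __init__(self):
        self._rule = lambda n: n % 3 == 0  # hidden rule: multiples of three

    def probe(self, n: int) -> bool:
        # Component (2): the action space; here a single action type that
        # submits an integer and receives boolean feedback.
        return self._rule(n)

def agent_discovers_rule(env, candidates, probes=range(1, 13)):
    # A naive agent: gather observations, then keep the candidate hypotheses
    # consistent with every piece of feedback (abduction plus induction).
    observations = [(n, env.probe(n)) for n in probes]
    return [name for name, h in candidates.items()
            if all(h(n) == fb for n, fb in observations)]

candidates = {
    "even numbers": lambda n: n % 2 == 0,
    "multiples of three": lambda n: n % 3 == 0,
    "greater than five": lambda n: n > 5,
}
env = HiddenRuleEnvironment()
print(agent_discovers_rule(env, candidates))  # ['multiples of three']

In a realistic environment the agent would also have to decide which probes are informative, which is precisely the kind of proactive, staged experimentation that current benchmarks rarely demand.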
Given that current LLMs are trained on vast quantities of data, there is a risk of hypothesis leakage, where underlying rules might already be implicitly embedded in their parametric memory. For instance, benchmarks such as those
introduced by Wang et al. (2022a) often rely on relatively straightforward tasks that do not genuinely necessitate novel hypothesis formation. Conversely, tasks proposed by He et al. (2024), despite aiming to encourage creative hypothesis formation, often yield simplistic, toy-like hypotheses with limited applicability to realistic scenarios. Therefore, future research should aim to develop environments with greater complexity and realism, fostering diverse and genuinely novel hypotheses. Benchmarks should be explicitly designed to push LLMs beyond their pretrained knowledge boundaries, and must provide practical tools for validating newly generated hypotheses. Such realistic simulation environments would address critical challenges such as hypothesis leakage and task oversimplification, ultimately fostering more robust and practical hypothesis discovery capabilities within LLMs. 8 Summary In this survey, we have presented a comprehensive and structured framework for hypothesis discovery using LLMs, guided by Peirce’s reasoning paradigm of abduction, deduction, and induction. Specifically, we systematically explored current methods and benchmarks across the three core components: hypothesis generation, hypothesis application, and hypothesis validation. Our analysis identifies a significant gap between formal and natural language representations. While formal representations enable rigorous and objective evaluations, they often remain restricted to simplified, artificial scenarios lacking real-world complexity. Conversely, natural language representations effectively capture the nuanced complexities inherent in real-world reasoning tasks, yet suffer from a lack of reliable, rigorous evaluation metrics due to their inherently open-ended nature. Existing methods, including prompt-based and fine-tuning approaches, demonstrate considerable potential but frequently isolate individual reasoning components. To move forward, we advocate for the development of integrated benchmarks and realistic, dynamic environments that more closely mimic real-world scientific inquiry and hypothesis discovery processes. Such benchmarks should provide rich intermediate Chain-of- Thought data, detailed commonsense reasoning steps, and comprehensive action spaces, thereby bridging the current divide between formal and informal reasoning representations. Ultimately, establishing environments that demand proactive hypothesis generation, robust application to novel contexts, and rigorous validation against evolving evidence will be crucial. By addressing these chal- lenges, future research will significantly advance the ability of LLMs to not merely execute instructions but to autonomously generate, refine, and validate hypotheses, thus realizing their potential as true engines of discovery and innovation. References Atilla Kaan Alkan, Shashwat Sourav, Maja Jablonska, Simone Astarita, Rishabh Chakrabarty, Nikhil Garuda, Pranav Khetarpal, Maciej Pióro, Dimitrios Tanoglidis, Kartheik G. Iyer, Mugdha S. Polimera, Michael J. Smith, Tirthankar Ghosal, Marc Huertas-Company, Sandor Kruk, Kevin Schawinski, and Ioana Ciucă. A survey on hypothesis generation for scientific discovery in the era of large language models, 2025. URL https://arxiv.org/abs/2504.05496 . Francis Bacon. Novum organum . Clarendon press, 1878. 17 Nishant Balepur, Abhilasha Ravichander, and Rachel Rudinger. Artifacts or abduction: How do LLMs an- swer multiple-choice questions without the question? 
In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.),Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pp. 10308–10330, Bangkok, Thailand, August 2024. Association for Computational Lin- guistics. doi: 10.18653/v1/2024.acl-long.555. URL https://aclanthology.org/2024.acl-long.555/ . Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Jade Goldstein, Alon Lavie, Chin-Yew Lin, and Clare Voss (eds.), Proceedings
of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Transla- tion and/or Summarization , pp. 65–72, Ann Arbor, Michigan, June 2005. Association for Computational Linguistics. URL https://aclanthology.org/W05-0909/ . AdibBazgir, RamachandraPraneethMadugula, andYuwenZhang. Agentichypothesis: Asurveyonhypoth- esis generation using LLM systems. In Towards Agentic AI for Science: Hypothesis Generation, Compre- hension, Quantification, and Validation , 2025. URL https://openreview.net/forum?id=UeeyfR4CUg . BIG bench authors. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.Transactions on Machine Learning Research , 2023. ISSN 2835-8856. URL https://openreview. net/forum?id=uyTL5Bvosj . Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen tau Yih, and Yejin Choi. Abductive commonsense reasoning, 2020. URL https://arxiv.org/abs/1908.05739 . Chen Bowen, Rune Sætre, and Yusuke Miyao. A comprehensive evaluation of inductive reasoning capabilities and problem solving in large language models. In Yvette Graham and Matthew Purver (eds.), Findings of the Association for Computational Linguistics: EACL 2024 , pp. 323–339, St. Julian’s, Malta, March 2024. Association for Computational Linguistics. URL https://aclanthology.org/2024.findings-eacl. 22/. Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex Tamkin, Karina Nguyen, Brayden McLean, Josiah E Burke, Tristan Hume, Shan Carter, Tom Henighan, and Christopher Olah. Towards monose- manticity: Decomposing language models with dictionary learning. Transformer Circuits Thread , 2023. https://transformer-circuits.pub/2023/monosemantic-features/index.html. Arthur W. Burks. Peirce’s theory of abduction. Philosophy of Science , 13(4):301–306, 1946. ISSN 00318248, 1539767X. URL http://www.jstor.org/stable/185210 . Chengkun Cai, Xu Zhao, Haoliang Liu, Zhongyu Jiang, Tianfang Zhang, Zongkai Wu, Jenq-Neng Hwang, Serge Belongie, and Lei Li. The role of deductive and inductive reasoning in large language models, 2025. URL https://arxiv.org/abs/2410.02892 . Miaosen Chai, Emily Herron, Erick Cervantes, and Tirthankar Ghosal. Exploring scientific hypothesis generation with mamba. In Lotem Peled-Cohen, Nitay Calderon, Shir Lissak, and Roi Reichart (eds.), Proceedings of the 1st Workshop on NLP for Science (NLP4Science) , pp. 197–207, Miami, FL, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.nlp4science-1.17. URL https://aclanthology.org/2024.nlp4science-1.17/ . Chunkit Chan, Xin Liu, Tsz Ho Chan, Jiayang Cheng, Yangqiu Song, Ginny Wong, and Simon See. Self- consistent narrative prompts on abductive natural language inference. In Jong C. Park, Yuki Arase, Baotian Hu, Wei Lu, Derry Wijaya, Ayu Purwarianti, and Adila Alfa Krisnadhi (eds.), Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers) , pp.1040– 1057, Nusa Dua, Bali, November 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023. ijcnlp-main.67. URL https://aclanthology.org/2023.ijcnlp-main.67/ . 
18 Ziru Chen, Shijie Chen, Yuting Ning, Qianheng Zhang, Boshi Wang, Botao Yu, Yifei Li, Zeyi Liao, Chen Wei, Zitong Lu, Vishal Dey, Mingyi Xue, Frazier N. Baker, Benjamin Burns, Daniel Adu-Ampratwum, XuhuiHuang, XiaNing, SongGao, YuSu, andHuanSun. Scienceagentbench: Towardrigorousassessment of language agents for data-driven scientific discovery, 2024. URL https://arxiv.org/abs/2410.05080 . Kewei Cheng, Jingfeng Yang, Haoming Jiang, Zhengyang Wang, Binxuan Huang, Ruirui Li, Shiyang Li, Zheng Li, Yifan Gao, Xian Li, Bing Yin, and Yizhou Sun. Inductive or deductive? rethinking the fundamental
reasoning abilities of llms, 2024. URL https://arxiv.org/abs/2408.00114 . François Chollet. On the measure of intelligence, 2019. URL https://arxiv.org/abs/1911.01547 . Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168 . Alessandro Conti, Enrico Fini, Paolo Rota, Yiming Wang, Massimiliano Mancini, and Elisa Ricci. Automatic benchmarking of large multimodal models via iterative experiment programming, 2024. URL https: //arxiv.org/abs/2406.12321 . Maksym Del and Mark Fishel. True detective: A deep abductive reasoning benchmark undoable for GPT- 3 and challenging for GPT-4. In Alexis Palmer and Jose Camacho-collados (eds.), Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023) , pp. 314–322, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.starsem-1.28. URL https://aclanthology.org/2023.starsem-1.28/ . Igor Douven. Abduction. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, Stanford University, Summer 2021 edition, 2021. Steffen Eger, Yong Cao, Jennifer D’Souza, Andreas Geiger, Christian Greisinger, Stephanie Gross, Yufang Hou,BrigitteKrenn,AnneLauscher,YizhiLi,ChenghuaLin,NafiseSadatMoosavi,WeiZhao,andTristan Miller. Transforming science with large language models: A survey on ai-assisted scientific discovery, experimentation, content generation, and evaluation, 2025. URL https://arxiv.org/abs/2502.05151 . Harry G. Frankfurt. Peirce’s notion of abduction. The Journal of Philosophy , 55(14):593–597, 1958. ISSN 0022362X. URL http://www.jstor.org/stable/2021966 . Pär Anders Granhag and Aldert Vrij. Deception detection. Psychology and law: An empirical perspective , pp. 43–92, 2005. Mourad Gridach, Jay Nanavati, Khaldoun Zine El Abidine, Lenon Mendes, and Christina Mack. Agentic ai for scientific discovery: A survey of progress, challenges, and future directions, 2025. URL https: //arxiv.org/abs/2503.08979 . Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Wenfei Zhou, James Coady, David Peng, Yujie Qiao, Luke Benson, Lucy Sun, Alex Wardle-Solano, Hannah Szabo, Ekaterina Zubova, Matthew Burtell, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Alexander R. Fabbri, Wojciech Kryscinski, Semih Yavuz, Ye Liu, Xi Victoria Lin, Shafiq Joty, Yingbo Zhou, Caiming Xiong, Rex Ying, Arman Cohan, and Dragomir Radev. Folio: Natural language reasoning with first-order logic, 2024. URL https://arxiv.org/abs/2209.00840 . Steve Hanks and Drew McDermott. Nonmonotonic logic and temporal projection. Artificial intelligence , 33 (3):379–412, 1987. Gilbert H. Harman. The inference to the best explanation. The Philosophical Review , 74(1):88–95, 1965. ISSN 00318108, 15581470. URL http://www.jstor.org/stable/2183532 . Jinwei He and Feng Lu. Causejudger: Identifying the cause with llms for abductive logical reasoning, 2024. URL https://arxiv.org/abs/2409.05559 . 19 Kaiyu He, Mian Zhang, Shuo Yan, Peilin Wu, and Zhiyu Zoey Chen. Idea: Enhancing the rule learning ability of large language model agent through induction, deduction, and abduction, 2024. URL https: //arxiv.org/abs/2408.10455 . 
Xiang Hu, Hongyu Fu, Jinge Wang, Yifeng Wang, Zhikun Li, Renjun Xu, Yu Lu, Yaochu Jin, Lili Pan, and Zhenzhong Lan. Nova: An iterative planning and search approach to enhance novelty and diversity of llm generated ideas, 2024. URL https://arxiv.org/abs/2410.14255. Wenyue Hua, Tyler Wong, Sun Fei, Liangming Pan, Adam Jardine, and William Yang Wang. Inductionbench: Llms fail in the simplest complexity class, 2025. URL https://arxiv.org/abs/2502.15823. Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. In Anna Rogers, Jordan