methods, like the edit distance method [41] and the TF-IDF method [10]. To overcome the limitations of unsupervised distance-based methods, researchers have proposed supervised learning methods. Ravikumar et al. [50] define ER as a classification problem and use SVM to solve it. However, these methods depend heavily on labeled data. Recently, researchers have proposed unsupervised learning methods for ER. Lacoste-Julien et al. [28] propose a greedy matching method, SiGMa, and Wu et al. [59] propose ZeroER, which uses a Gaussian Mixture Model to learn the similarity distributions of matches and non-matches. However, supervised learning methods require a large amount of labeled data, and unsupervised learning methods rely heavily on blocking methods, which makes them difficult to transfer to our dataset entity resolution task. We propose a rule-based graph inference method that leverages strong indicator fields as relational constraints. Our algorithm performs iterative graph completion through deterministic pattern matching and transitive inference, achieving accurate entity resolution without training data or predefined blocking schemes.

3 PROBLEM FORMULATION

We aim to construct a paper-dataset network that captures the usage of datasets in academic papers. Formally, the paper-dataset network can be defined as a bipartite graph 𝐺 = (𝑃, 𝐸, 𝑅), where 𝑃 is the set of papers, 𝐸 is the set of dataset entities, and 𝑅 is the set of relationships between papers and datasets. Each edge 𝑟𝑖,𝑗 ∈ 𝑅 connects a paper 𝑝𝑖 ∈ 𝑃 to a dataset entity 𝑒𝑗 ∈ 𝐸, indicating that the paper 𝑝𝑖 uses the dataset entity 𝑒𝑗.

ChatPD: An LLM-driven Paper-Dataset Networking System. KDD '25, August 3–7, 2025, Toronto, ON, Canada.

Figure 1: System Architecture of ChatPD.
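The bipartite graph 𝐺 = (𝑃, 𝐸, 𝑅) can be held in a plain adjacency map. The following Python sketch is purely illustrative (the class and identifiers are assumptions, not part of ChatPD's implementation):

```python
# Minimal sketch of the paper-dataset bipartite graph G = (P, E, R).
# Class and field names are illustrative, not ChatPD's actual schema.
from collections import defaultdict

class PaperDatasetGraph:
    def __init__(self):
        self.papers = set()           # P: set of papers
        self.entities = set()         # E: set of dataset entities
        self.uses = defaultdict(set)  # R: paper -> dataset entities it uses

    def add_edge(self, paper: str, entity: str) -> None:
        """Record one edge r_ij in R: paper uses entity."""
        self.papers.add(paper)
        self.entities.add(entity)
        self.uses[paper].add(entity)

    def datasets_of(self, paper: str) -> set:
        return self.uses.get(paper, set())

g = PaperDatasetGraph()
g.add_edge("arXiv:1405.0312", "MS COCO")
g.add_edge("arXiv:1405.0312", "ImageNet")
print(sorted(g.datasets_of("arXiv:1405.0312")))  # ['ImageNet', 'MS COCO']
```

A real system would attach metadata to nodes and edges, but the adjacency-map core stays the same.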
Specifically, two main issues need to be addressed to construct the paper-dataset network:
• Dataset information extraction: extract the dataset usage information from the texts of given papers;
• Dataset entity resolution: align diverse dataset descriptions with their corresponding dataset entities, where a dataset entity represents a specific dataset within the dataset database.

3.1 Dataset Information Extraction

For each paper 𝑝 ∈ 𝑃, we have its text 𝑇(𝑝). Information extraction applies the function 𝐹 (realized via a prompt-based query to an LLM) to obtain:

𝐷(𝑝) := 𝐹(𝑇(𝑝)) = {𝑑𝑝,1, 𝑑𝑝,2, ..., 𝑑𝑝,𝑛(𝑝)} ⊆ 𝐷,

where 𝑑𝑝,𝑖 is a JSON object representing the 𝑖-th dataset description in paper 𝑝, and 𝑛(𝑝) is the number of dataset descriptions in paper 𝑝. Here is an example of a JSON object for a dataset description:

{
    "dataset name": ...,
    "paper title": ...,
    "dataset summary": ...,
    "data type": ...,
    "task": ...,
    "url": ...,
    ... // other metadata related to the dataset
}

3.2 Dataset Entity Resolution

Given the dataset descriptions 𝐷 extracted from papers and an initial dataset entity database 𝐸init (derived from PwC), the objective of Entity Resolution (ER) is to find a mapping 𝑀: 𝐷 → 𝐸, where 𝐸 = 𝐸init ∪ 𝐸new. Each dataset description 𝑑 ∈ 𝐷 is mapped to an entity 𝑒 ∈ 𝐸 if they refer to the same real-world dataset. The set 𝐸new contains new dataset entities not present in 𝐸init.

Formally, let C = {𝐶1, 𝐶2, ..., 𝐶𝑚} be a partition of 𝐷 into equivalence classes under the relation 𝑑𝑖 ∼ 𝑑𝑗 (indicating 𝑑𝑖 and 𝑑𝑗 refer to the same dataset). The mapping 𝑀 is defined as:

∀𝐶𝑘 ∈ C, ∀𝑑 ∈ 𝐶𝑘:
    𝑀(𝑑) = 𝑒 ∈ 𝐸init,     if ∃𝑒 ∈ 𝐸init s.t. 𝑒 ∼ 𝐶𝑘;
    𝑀(𝑑) = 𝑒new ∈ 𝐸new,   otherwise.

This ensures each cluster 𝐶𝑘 aligns with an existing entity in 𝐸init when possible; otherwise, a new entity 𝑒new is registered in 𝐸new if the cluster indeed refers to a new real-world dataset. The resolution process constructs a paper-dataset network by connecting papers 𝑝 ∈ 𝑃 to their used dataset entities 𝑀(𝑑) ∈ 𝐸 for all descriptions 𝑑 ∈ 𝐷(𝑝).

https://arxiv.org/abs/2505.22349v1

4 SYSTEM DESIGN

In this section, we introduce ChatPD, a novel LLM-driven system designed to automate the construction of a paper-dataset network. By leveraging LLMs to extract dataset information from academic papers and perform entity resolution, ChatPD dynamically links papers to their corresponding datasets, forming a structured network. As illustrated in Fig. 1, the architecture of ChatPD is built upon three pivotal modules:
• Paper Collection: aggregates papers from academic platforms to form the system's foundational corpus.
• Dataset Information Extraction: identifies and extracts dataset-related text from academic papers, leveraging LLMs to generate semi-structured metadata (e.g., dataset names, data types, and associated tasks).
• Dataset Entity Resolution: resolves variant mentions of the same dataset by aligning them to a canonical entity, thereby constructing a paper-dataset bipartite graph.

4.1 Paper Collection

In the first phase, we collect basic information about academic papers. ArXiv [3], one of the largest academic paper platforms, hosts a rich repository of preprints of research papers and is openly accessible on the web.1 In the current implementation of ChatPD, we collect papers from arXiv, focusing on Artificial Intelligence in Computer Science (cs.AI), and use the ar5iv tool [52] to obtain text-format papers. We emphasize that ChatPD operates independently of academic platforms, requiring only the text of papers for analysis.

1 https://www.kaggle.com/datasets/Cornell-University/arxiv

KDD '25, August 3–7, 2025, Toronto, ON, Canada. Anjie Xu, Ruiqing Ding, and Leye Wang.

Figure 2: Dataset Information Extraction Prompt
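The three modules described in this section compose into a simple pipeline. The sketch below is a minimal illustration with stand-in function bodies; none of the names or return values reflect ChatPD's actual code:

```python
# Illustrative skeleton of the three-module ChatPD pipeline.
# All function bodies are stand-ins, not the deployed implementation.

def collect_papers():
    """Paper Collection: return raw paper texts keyed by paper ID."""
    return {"arXiv:0000.00000": "... Experiments ... We use MS COCO ..."}

def extract_dataset_info(text):
    """Dataset Information Extraction: in ChatPD an LLM returns JSON
    dataset descriptions; we fake a single record for the sketch."""
    return [{"dataset name": "MS COCO", "url": "https://cocodataset.org/"}]

def resolve_entities(descriptions):
    """Dataset Entity Resolution: map each description to a canonical
    entity name (trivial name-based lookup, for illustration only)."""
    known = {"ms coco": "MS COCO"}
    return [known.get(d["dataset name"].lower(), d["dataset name"])
            for d in descriptions]

network = {}
for paper_id, text in collect_papers().items():
    network[paper_id] = resolve_entities(extract_dataset_info(text))
print(network)  # {'arXiv:0000.00000': ['MS COCO']}
```

The three stages are deliberately decoupled: any paper source, any extractor, and any resolver can be swapped in behind the same interfaces.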
For example, by leveraging open-source PDF processing tools such as PyPDF2, ChatPD can build a personalized local paper-dataset network directly from a user's collection of PDF documents. Currently, we select arXiv as our primary source because it is fully open-access and the majority of AI papers now publish preprints on this platform.

4.2 Dataset Information Extraction

The Dataset Information Extraction module identifies and extracts dataset-related metadata from the academic papers collected in the preceding stage. For a paper 𝑝, the module outputs a collection of dataset descriptions 𝐷(𝑝) = {𝑑𝑝,1, 𝑑𝑝,2, ..., 𝑑𝑝,𝑛(𝑝)}, where each 𝑑𝑝,𝑖 is a semi-structured JSON object encapsulating core dataset attributes.

Recently, LLMs have shown great effectiveness and efficiency in analyzing text corpora [40]. Based on LLMs, we can directly use chat-style natural interaction to extract useful dataset information from the collected paper texts. With LLMs, three issues need to be carefully considered: (1) prompt design, (2) output quality control, and (3) cost optimization.

4.2.1 Prompt Design. LLMs, e.g., ChatGPT, have recently showcased impressive performance in zero-shot or few-shot text information extraction tasks. To initiate the dataset information extraction process and generate responses in our desired structured format, we provide a specific prompt. An example of our prompt and the corresponding demonstration is shown in Fig. 2.

Role. Prior research has shown that specifying a role for the LLM can significantly improve its capability to solve the task [61]. Following common practice, we set the role
of the LLM as a computer science researcher, allowing it to better understand the task scenario.

2 https://github.com/py-pdf/pypdf

Paper Information. The prompt features a ‘{Paper Information}’ field designed to incorporate relevant text from the paper pertaining to the dataset. Intuitively, this field could contain the entire paper text; in practice, however, this may result in prohibitively high costs when using LLM APIs, as computational expense scales directly with input length. We explore this cost consideration in greater detail in Sec. 4.2.3.

Output Specification. We also give specific task requirements and format standards for the output. Previous research has summarized the key considerations for researchers when finding datasets [25]. We base our dataset information extraction on these key fields, such as the dataset name, data type, task, location, time, scale, and dataset providers. In addition to these key fields, we include the dataset summary, Uniform Resource Locator (URL), and other relevant information fields to offer a more comprehensive dataset description. To ensure the LLM produces semi-structured data, we instruct it to generate the output in JSON format. Considering that a paper may involve multiple datasets, we also add an annotation to remind the LLM to generate a JSON-format description for each dataset.

4.2.2 Output Quality Control. The ideal output would be standard JSON-formatted data for downstream processing. However, our experiments reveal that even state-of-the-art LLMs (e.g., GPT-4o) occasionally generate outputs violating JSON syntax requirements. To mitigate this issue and ensure system reliability, we implement a dedicated format validation and correction step in the pipeline. Specifically, we summarize three principal anomalies and apply corresponding corrections via a post-processing script:
• Extraneous Expressions: entries not commencing with ‘{’, ‘}’, or ‘"’ are excised to eliminate non-pertinent phrases.
• Malformed Escape Sequences: we identify characters that need to be escaped in the output and add the corresponding escape characters for them.
• Inconsistent Comma Usage: we programmatically correct trailing commas at line ends according to JSON syntax.

4.2.3 Cost Optimization. As we constrain the output to a JSON format with pre-defined fields, the cost of an LLM query in ChatPD is mostly determined by the input length. In particular, the length of the paper text in the query, i.e., ‘{Paper Information}’, dominates the input length. If we directly sent the full paper text to the LLM for processing, the cost would be relatively high, especially when scaling ChatPD up to millions of papers. To address this issue, we opt to input only the text of the paper sections that likely contain dataset descriptions. Academic papers usually describe the datasets used in the experimental sections, so we select sections such as “Experiment”, “Dataset description”, “Data”, and other similar ones. Balancing API call cost against the LLM's processing power, we truncate the input text to 1500 tokens (approximately 1125 words). Additionally, we include the title and abstract of the paper as supplementary input to provide a more comprehensive context of
the datasets.

In our current implementation, the dataset information extraction module employs GPT-4o-mini3, OpenAI's most advanced and cost-effective small-scale model. After cost optimization, the expense for ChatPD to process 10,000 papers is reduced to just $6.3. It is important to note that ChatPD is not restricted to specific LLM services, and we have also evaluated other LLM services in our experiments. With the advancement of LLM techniques, we believe it will soon be feasible to run a fully local version of ChatPD on a standard PC equipped with a mid-range graphics card. Exploring the deployment of such a locally deployable LLM model will be a focus of our future work.

3 https://chat.openai.com/

4.3 Dataset Entity Resolution

The output of the dataset information extraction module is a set of dataset descriptions in JSON format, extracted from the paper texts. To construct the paper-dataset network, the next step is to extract dataset entities from these JSON-formatted descriptions. Specifically, there are two key challenges to address:
(1) Existing Entity Matching: when a paper uses a dataset that has already been referenced in other papers (i.e., an existing dataset entity in the database), the challenge is to correctly map the JSON-formatted description to the corresponding entity.
(2) New Entity Discovery: when a paper introduces a new dataset, the challenge is to identify it and register it as a new entity in the database.

4.3.1 Existing Entity Matching. To initialize the dataset entity database, we currently utilize the dataset entities collected by the PwC platform. Through crowdsourcing, the PwC platform has accumulated a substantial number of dataset entities in its database, which include rich metadata such as dataset names and URLs.
Additionally, PwC data is publicly accessible under the CC-BY-SA-4 license.4 Our goal is to map the extracted dataset descriptions to their corresponding entities in the PwC database, thereby constructing a paper-dataset network.

In Sec. 4.2, we extract dataset-related information from paper texts, and certain extracted fields, such as "dataset name" and "URL", can be used to identify the same dataset entity in the database. Our approach is based on the idea that if a dataset description shares the same name or URL as an existing dataset entity, we can conclude with high confidence that the description refers to that entity.

Following this idea, we propose a ‘dataset identity attribute-based graph inference and completion’ algorithm to match dataset descriptions to existing entities. First, we model the extracted dataset descriptions and the database entities as nodes in a graph, referred to as description nodes (D-nodes) and entity nodes (E-nodes), respectively. We then introduce identity-attribute nodes (I-nodes) to represent unique identifiers such as dataset names and URLs. Notably, we create only one I-node for each unique dataset name or URL to avoid duplication. Next, we connect each I-node to its corresponding D-nodes and/or E-nodes. We introduce the graph inference and completion steps one by one.

Graph Inference: this graph structure enables us to infer relationships between D-nodes
(dataset descriptions) and E-nodes (dataset entities). For instance, if a D-node 𝑑 is linked to an I-node and this same I-node is also connected to an E-node 𝑒, we can infer that 𝑑 corresponds to 𝑒. This process effectively matches the dataset description to an existing dataset entity in the database through their shared identifier (e.g., the same dataset name or URL).

4 https://github.com/paperswithcode/sota-extractor

Algorithm 1: Graph Creation and Completion
1: Input: a list of dataset descriptions 𝐷 = {𝑑1, 𝑑2, ..., 𝑑𝑛}, a list of entities 𝐸 = {𝑒1, 𝑒2, ..., 𝑒𝑚}
2: Output: a graph 𝐺 = (𝑉, E) with completions and corrections
3: Identity attributes: 𝐴 = {dataset name, dataset url}
4: Initialize nodes: 𝑉 ← 𝐷 ∪ 𝐸 ∪ {𝐼𝑑,𝛼 | 𝑑 ∈ 𝐷, 𝛼 ∈ 𝐴} ∪ {𝐼𝑒,𝛼 | 𝑒 ∈ 𝐸, 𝛼 ∈ 𝐴}    ⊲ Graph Creation
5: E ← ∪𝑑∈𝐷 {(𝑑 -has_𝛼-> 𝐼𝑑,𝛼) | 𝛼 ∈ 𝐴}
6: E ← E ∪ ∪𝑒∈𝐸 {(𝐼𝑒,𝛼 -refers_to-> 𝑒) | 𝛼 ∈ 𝐴}
7: while iteration_limit is not reached do    ⊲ Graph Completion
8:     for D-node 𝑑 ∈ 𝐷 do
9:         for attribute 𝛼 ∈ 𝐴 do
10:            if ∃ I-node 𝐼𝑑,𝛼 -refers_to-> E-node 𝑒 then
11:                E ← E ∪ {(𝐼𝑑,𝛼′ -refers_to-> 𝑒) | 𝛼′ ∈ 𝐴\{𝛼}}
12:            end if
13:            if |{𝑒 : 𝐼𝑑,𝛼 -refers_to-> 𝑒}| > 1 then    ⊲ Refinement after Completion
14:                Remove the I-node 𝐼𝑑,𝛼 and its edges from 𝑉 and E
15:            end if
16:        end for
17:    end for
18: end while
19: return 𝐺

Using the above process, we can match a D-node to its corresponding E-node if they share a common I-node. However, the original E-nodes in the database may initially connect to only a limited number of I-nodes, which restricts the coverage of this basic inference strategy. To address this limitation, we introduce a graph completion step that systematically enriches E-nodes' connections to additional I-nodes, thereby improving inference coverage.

Graph Completion: when a D-node 𝑑 is matched to an E-node 𝑒, all I-nodes connected to 𝑑 are also linked to 𝑒. This enriches 𝑒's identity attributes by expanding its associated identifiers.
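To make the completion step concrete, here is a simplified, dictionary-based Python sketch of the creation and completion logic. It reduces I-nodes to raw name/URL strings and approximates Algorithm 1 rather than reproducing the actual graph implementation:

```python
# Simplified sketch of graph creation and completion: identity
# attributes (names, URLs) propagate from matched descriptions to
# entities. Dict-based approximation, not the paper's exact code.

def complete(entities, descriptions, iterations=3):
    # attr_to_entity maps each I-node value to the E-node it refers
    # to; None marks an ambiguous I-node (refers to >1 entity).
    attr_to_entity = {}
    for ent, attrs in entities.items():          # graph creation
        for a in attrs:
            attr_to_entity[a] = ent if attr_to_entity.get(a, ent) == ent else None
    for _ in range(iterations):                  # completion iterations
        for attrs in descriptions:
            hits = {attr_to_entity[a] for a in attrs
                    if attr_to_entity.get(a) is not None}
            if len(hits) == 1:                   # description matched one entity
                ent = hits.pop()
                for a in attrs:                  # enrich the entity's I-nodes;
                    prev = attr_to_entity.get(a, ent)
                    attr_to_entity[a] = ent if prev == ent else None  # drop ambiguous
    return {a: e for a, e in attr_to_entity.items() if e is not None}

entities = {"MS COCO": {"MS COCO", "https://cocodataset.org/"}}
descs = [{"COCO 2014", "https://cocodataset.org/"},  # matched via URL, adds alias
         {"COCO 2014"}]                              # now matchable via new alias
index = complete(entities, descs)
print(index["COCO 2014"])  # MS COCO
```

The second description has no attribute known to the database initially; it becomes matchable only because the first description's match propagated the "COCO 2014" alias to the entity, which is exactly the transitive effect the completion step provides.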
Crucially, whenever a new I-node is connected to 𝑒, we rerun the graph inference process for 𝑒 to identify any additional D-nodes that can now be matched to 𝑒 through the updated connections. Consider an E-node 𝑒coco representing the MS COCO dataset [32], which initially has two I-nodes: the name “MS COCO” and the URL “https://cocodataset.org/”. During the inference step, we identify a D-node that shares the URL I-node but has an additional name I-node, “COCO 2014”. Through the graph completion step, we link the “COCO 2014” I-node to 𝑒coco. This enriched connection enables subsequent D-nodes associated with the “COCO 2014” I-node to be matched to 𝑒coco, thereby expanding the inference coverage.

Considering the completion order, some I-nodes may not be connected to any E-node after the initial inference. To address this, we introduce a completion iteration to enrich the connections. In practice, we set the iteration limit to 3.

Refinement after Completion: while the graph completion strategy improves inference coverage, it risks introducing erroneous connections. A core principle is that I-nodes, which represent identity attributes, should link to at most one E-node. However, after completion, an I-node might connect to multiple E-nodes. This issue frequently arises with URL I-nodes. For instance, papers may cite generic data warehouse URLs like “www.kaggle.com” for the datasets they use, causing this I-node to link to multiple E-nodes for datasets hosted on Kaggle. Since such ambiguous I-nodes cannot reliably serve as unique identifiers, the current implementation of ChatPD removes them from the graph to preserve integrity.

Algorithm 2: Graph Inference for Entity Resolution
1: Input: a list of dataset descriptions 𝐷, a list of dataset entities 𝐸, the completed graph 𝐺 = (𝑉, E)
2: Output: a list of matched dataset descriptions and entities 𝑀
3: 𝑀 ← {}
4: for D-node 𝑑 ∈ 𝐷 do
5:     for attribute 𝛼 ∈ {dataset name, dataset url} do
6:         if ∃ I-node 𝐼𝑑,𝛼 -refers_to-> E-node 𝑒 then
7:             𝑀 ← 𝑀 ∪ {(𝑑, 𝑒)}
8:         end if
9:     end for
10: end for
11: return 𝑀

After graph completion and refinement, we can infer the final mappings between dataset descriptions (D-nodes) and their corresponding entities (E-nodes) in the database. The full process is formalized in Algorithms 1 and 2.

4.3.2 New Entity Discovery. Another key strength of ChatPD lies in its ability to discover novel dataset entities from the academic literature. For example, our analysis reveals that nearly 50% of the datasets extracted by ChatPD from arXiv papers are absent from PwC's database, highlighting these datasets' novelty and suggesting they represent emerging resources useful for academic research.

After graph inference and completion (Sec. 4.3.1), some D-nodes may remain unmatched to any E-node. These unmatched D-nodes could represent novel dataset entities introduced by the corresponding papers. However, automatically creating a new E-node for every unmatched D-node risks introducing noise, as dataset descriptions extracted by LLMs may contain inaccuracies. To address this, ChatPD enforces two criteria to determine whether an unmatched D-node warrants the creation of a new E-node.

1. Identity Information Completeness. Currently, ChatPD only considers creating E-nodes for unmatched D-nodes with complete identity attributes, i.e., containing both a dataset name and a URL. Notably, after graph refinement (Sec. 4.3.1), all URL I-nodes associated with generic data warehouse links (e.g., “www.kaggle.com”) are removed.
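This completeness criterion, together with the mention-count threshold introduced below, can be sketched as a simple filter. The generic-host list and function name here are illustrative assumptions, not ChatPD's code:

```python
# Sketch of the two new-entity criteria (Sec. 4.3.2): an unmatched
# description cluster becomes a new entity only if it has complete
# identity attributes (name + specific URL) and enough independent
# paper mentions. GENERIC_HOSTS content is an illustrative example.
from urllib.parse import urlparse

GENERIC_HOSTS = {"www.kaggle.com"}  # generic data-warehouse hosts

def is_new_entity(name, url, mention_count, min_mentions=3):
    if not name or not url:                    # criterion 1: completeness
        return False
    if urlparse(url).netloc in GENERIC_HOSTS:  # generic URL: not a unique identifier
        return False
    return mention_count >= min_mentions       # criterion 2: at least λ papers

print(is_new_entity("UltraFeedback",
                    "https://huggingface.co/datasets/openbmb/UltraFeedback",
                    43))  # True
print(is_new_entity("weather dataset",
                    "https://www.kaggle.com/datasets", 5))  # False
```

The default `min_mentions=3` mirrors the threshold used in our offline evaluation (Sec. 6.1); the URL for UltraFeedback is used here only as a plausible example.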
Therefore, if an unmatched D-node retains a URL I-node, it is likely a specific, non-generic URL, increasing confidence that the D-node represents a genuinely new dataset.

2. Multiple Paper Mentions. ChatPD prioritizes creating new E-nodes when multiple unmatched D-nodes share identical I-nodes (e.g., the same dataset name or URL). This increases confidence that the dataset is genuine and significant, as it is independently mentioned across multiple papers. In such cases, ChatPD consolidates all D-nodes sharing the same I-nodes into a single E-node, representing one unified novel dataset entity. In the implementation, we can define a threshold 𝜆 to govern the creation of new E-nodes: a candidate dataset must be mentioned in at least 𝜆 papers.

Additionally, we plan to incorporate user feedback to improve the accuracy and efficiency of dataset discovery. For example, even if a dataset lacks mentions from 𝜆 papers, we still create an E-node but flag it with an uncertainty indicator. When presenting such datasets, ChatPD could ask users to verify dataset accuracy (e.g., “Is this extracted dataset correct?”). User feedback, while valuable, is not always reliable. Accurately extracting trustworthy insights from such feedback remains a significant challenge, a problem widely recognized in the literature as truth discovery. We defer addressing this challenge to future research.

Table 1: Dataset Usage Statistics in Annotated Papers

                            NeurIPS    KDD    Total
# Papers                       50       69      119
# Total Datasets Used         110      186      296
# Unique Datasets Used         61       97      143
Avg. # Datasets Per Paper    2.20     2.70     2.49

5 EXPERIMENT

We
evaluate ChatPD to ascertain its effectiveness in constructing the paper-dataset network, following three questions:
RQ1: Can ChatPD efficiently and accurately extract dataset information?
RQ2: Can ChatPD effectively resolve dataset descriptions to entities?
RQ3: Can ChatPD discover new datasets?

5.1 Performance of Dataset Information Extraction (RQ1)

5.1.1 Experimental Setup. To compare with ChatPD, we implement three comparative approaches:
(1) en_core_web_trf: employing the named entity recognition model en_core_web_trf5 to detect dataset entities in papers [47]. en_core_web_trf is a powerful pre-trained transformer-based model that can recognize and label a variety of entities in text, including dataset names [36].
(2) Regular Expression: using regular expressions to identify and match dataset names and their common variants in paper text based on a predefined list of dataset names (e.g., hyphenation variations like "Mini-ImageNet" and "MiniImageNet") [46].
(3) PapersWithCode (PwC): directly using the datasets identified by PwC for the test papers. The dataset usage information on PwC is derived partly from annotations by community members and partly from a rule-based automated extraction script.6

For implementing LLM APIs in ChatPD, we choose GPT-3.5-turbo, GPT-4o-mini (default), Qwen2.5-7b-instruct [53], and DeepSeek-V3 [35] for comparison. To compare with our cost optimization strategy (Sec. 4.2.3), we also implement a variant that inputs the full paper text to the LLMs.

To construct the test set, we manually annotate the datasets used in research papers to establish a ground truth for evaluation. Specifically, we annotate dataset usage in 119 papers from top-tier conferences, including KDD and NeurIPS. The statistics of the annotated papers are detailed in Table 1. To ensure a fair comparison with PwC, all selected test papers have dataset annotations on PwC.

5.1.2 Results.
We evaluate the performance of dataset information extraction by calculating various metrics, including Exact Match Ratio, Micro Average Precision, Micro Average Recall, and Micro Average F1 score. The comparison results are shown in Fig. 3. Our results indicate that Regular Expression and en_core_web_trf struggle to effectively capture dataset information. ChatPD with GPT-3.5-turbo achieves competitive performance compared with PwC. With more advanced LLMs such as GPT-4o-mini and DeepSeek-V3, ChatPD outperforms PwC significantly across all metrics.

5 https://spacy.io/models/en#en_core_web_trf
6 https://github.com/paperswithcode/sota-extractor

Figure 3: Performance of Dataset Information Extraction

Our method remains robust even with lightweight, locally deployable models such as Qwen2.5-7b-instruct. By analyzing the data, we observe that the unsatisfactory performance of PwC can be attributed to its rule-based extraction technique for identifying datasets from texts. This method frequently results in erroneous matches, e.g., wrongly identifying datasets that are merely referenced in the text but not actually used in the study.

To evaluate the effectiveness of our cost optimization strategy, we compare the full-text input with our optimized 1500-token input using GPT-4o-mini. The results demonstrate that the 1500-token input achieves performance close to the full-text input, and even outperforms it in certain metrics like
Precision. Note that processing the full text would require approximately 7 times more tokens than our optimized method, significantly increasing costs. Given that ChatPD is designed to handle a continuous and large volume of papers, we believe that limiting the input to 1500 tokens strikes an effective balance between cost efficiency and performance.

Overall, our experimental results show that ChatPD with current LLMs is highly effective in extracting datasets from papers, surpassing state-of-the-art solutions like PwC and highlighting the feasibility of using large language models for this task.

5.2 Performance of Dataset Description Entity Resolution (RQ2)

5.2.1 Experimental Setup. In this experiment, we aim to match dataset descriptions to existing dataset entities. Specifically, we use the dataset entities already stored in PwC as the reference set of existing entities. To establish ground truths, we manually annotate dataset descriptions extracted from papers published in top-tier conferences, such as KDD and NeurIPS, by linking them to their corresponding entities in the database. We randomly sample 1,000 dataset descriptions and link them manually to the corresponding entities. We find that only 474 dataset descriptions, about half of the samples, can be linked to dataset entities in the PwC database. The primary reason for the unlinked descriptions is the absence of corresponding entities in the PwC database. Additionally, some descriptions are too vague, such as ‘weather dataset’, to determine their corresponding entities.

We compare our Graph Completion & Inference algorithm with the Name Matching method (connecting descriptions to entities with the same dataset name) and the Graph Inference algorithm (connecting dataset descriptions with the same dataset name, alias, or URL).
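All methods in this experiment are scored with pairwise precision, recall, and F1 over predicted (description, entity) matches. A minimal sketch of this scoring, using toy pairs rather than our actual annotations:

```python
# Sketch: scoring an entity-resolution method against ground truth.
# Each prediction/gold item is a (description_id, entity) pair;
# the example pairs below are toy data, not our annotated test set.

def score(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                       # true-positive pairs
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [("d1", "MS COCO"), ("d2", "ImageNet"), ("d3", "MNIST")]
pred = [("d1", "MS COCO"), ("d2", "ImageNet"), ("d4", "CIFAR-10")]
p, r, f1 = score(pred, gold)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.67 0.67 0.67
```

Under this scoring, a conservative matcher such as name matching trades recall for precision, which is exactly the pattern visible in Table 2.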
Besides, we compare against two popular entity resolution algorithms, SiGMa [28] and ZeroER [59].

Table 2: Evaluation of Entity Resolution Methods

Method                          Precision   Recall   F1 Score
Name Matching                    1.0000     0.5105    0.5680
SiGMa [28]                       0.7319     0.6312    0.6778
ZeroER [59]                      0.9984     0.5844    0.6300
Graph Inference                  0.9917     0.6477    0.7007
Graph Completion & Inference     0.9826     0.8727    0.8829

Table 3: New Dataset Entities Discovered by ChatPD

Dataset Entity             Usage Count   ChatPD   PwC (2024.11.16)   PwC (2025.01.16)
UltraFeedback [14]              43          ✓            -                  -
Diginetica [1]                  22          ✓            -                  -
BabyAI [9]                      19          ✓            -                  -
HELOC [16]                      16          ✓            -                  ✓
RedPajama [11]                  16          ✓            -                  -
Camelyon17 [4]                  15          ✓            -                  -
VirtualHome [48]                15          ✓            -                  -
California Housing [45]         14          ✓            -                  ✓
Yoochoose [6]                   14          ✓            -                  -
Titanic [15]                    13          ✓            -                  ✓

5.2.2 Results. We choose precision, recall, and F1 score as the evaluation metrics. Our results are shown in Table 2. Name Matching achieves the highest precision, but cannot find the same dataset under different names, leading to the lowest recall. As a result, its F1 score is also the worst. Graph Inference utilizes the aliases and URLs provided by PwC, achieving a higher recall and F1 score than the state-of-the-art methods SiGMa and ZeroER. Our Graph Completion & Inference algorithm considers the transitive relationships between dataset descriptions, which further increases recall. It achieves the best F1 score of 0.8829, verifying its effectiveness in constructing the paper-dataset network.

5.3 New Dataset Entity Discovery (RQ3)

By applying the new dataset entity discovery strategy (Section 4.3.2),
ChatPD can detect novel dataset entities referenced in academic papers. We list the top 10 most frequently used new dataset entities discovered by ChatPD that were not included in PwC's dataset database as of November 16, 2024, and compare their coverage in PwC's database on November 16, 2024 and January 16, 2025. The results are shown in Table 3. Only three of the ten popular new datasets had been added to PwC as of January 16, 2025. Notably, the most widely used dataset, UltraFeedback [14], which has been used in over 40 papers, is still not included in PwC. This highlights that ChatPD is significantly more efficient at discovering new dataset entities than PwC.

6 DEPLOYMENT

ChatPD has been deployed to update the paper-dataset network weekly. Users can access https://chatpd-web.github.io/chatpd-web to search for datasets used in papers by specifying the arXiv ID or dataset name, data type, task, etc. We present the basic dataset services provided by ChatPD after deployment in Appendix A.1.

6.1 Offline Results

Before deployment, we conduct offline evaluations to ensure the effectiveness and efficiency of ChatPD. We randomly sample 35,310 papers in the cs.AI category on arXiv and extract dataset information from them with ChatPD. We compare the data extracted by ChatPD with that from the PwC platform to analyze the network's size and coverage.

Table 4: Network Size and Coverage Statistics

Metric                                      PwC      ChatPD
# Papers Extracted                        14,353     35,310
# Dataset Descriptions Extracted          32,146     76,056
# PwC Dataset Entities Covered             3,556      3,144
# Descriptions Matched to PwC Entities    32,146     35,085
# New Dataset Entities Discovered            -          444
# Descriptions Matched to New Entities       -        1,217
Avg. Cost Per Paper Extracted (USD)          -      0.00063

Table 5: Performance Evaluation of ChatPD in the cs.AI Category on arXiv (2024)

Metric                                                       Value
# Papers in arXiv cs.AI (2024)                              32,959
# Papers with Accessible Text Information via ar5iv         28,901
# Papers with Dataset Information Successfully Extracted    24,719
Success Rate of Paper Processing                             85.5%
# Dataset Descriptions Extracted                            59,664
Avg. # Dataset Usage Records Per Paper                        2.41
# Descriptions Matched to PwC Entities                      27,428

Table 4 summarizes the network size and coverage metrics for PwC and ChatPD. The data indicate that ChatPD has significantly expanded the scope of the paper-dataset network compared to PwC: it extracts dataset usage information from more than double the number of papers and dataset descriptions. Besides the existing PwC entities, ChatPD also finds 444 new dataset entities not included in PwC. Specifically, we infer a new dataset entity if it has a useful URL and is referenced by at least 3 papers (Sec. 4.3.2). Additionally, its cost efficiency is notable, with an average extraction cost per paper as low as $0.00063 using GPT-4o-mini. Through this offline evaluation, we demonstrate that ChatPD constructs a larger and more comprehensive paper-dataset network with impressive cost efficiency.

6.2 Post-Deployment Results

We evaluate the performance of the deployed ChatPD by analyzing
the paper-dataset network constructed from cs.AI papers on arXiv in 2024. Our results are summarized in Table 5.

Our results show that approximately 87.8% of papers have accessible text information via ar5iv. ChatPD successfully extracts dataset information from 85.5% of these papers, with an average of 2.41 dataset usage records per paper. Among the extracted records, fewer than half of the dataset descriptions can be mapped to PwC's dataset entities. Our offline experiments in Section 5.2 demonstrate the effectiveness of our entity resolution algorithm for mapping dataset descriptions to PwC's dataset entities. Hence, this low matching ratio indicates that PwC's database is still incomplete, i.e., there is still significant room for improvement in the coverage of PwC's dataset database.

We also evaluate the real-time performance of the deployed ChatPD and compare it with PwC's results. We calculate, by month, the coverage of papers with extracted dataset information in the PwC database and the coverage of dataset usage records extracted by ChatPD. The results are shown in Fig. 4.

Figure 4: Coverage of Papers with Extracted Dataset Information in the arXiv cs.AI Category

As not all extracted dataset descriptions can find matching entities in the PwC database, we record both ‘the coverage of papers with matched PwC entities (ChatPD Matched Paper Coverage)’ and ‘the coverage of papers with extracted dataset information (ChatPD Paper Coverage)’. Our data is up to January 12, 2025. We observe that PwC's paper coverage is higher than ChatPD's matched paper coverage at the beginning of 2024. However, after May, ChatPD's coverage surpasses PwC's. PwC's coverage is relatively low for newly published papers due to its partial reliance on community annotations. In contrast, ChatPD uses LLMs to automatically extract dataset information, enabling it to stably analyze dataset usage records in papers.
Therefore, ChatPD's coverage is significantly higher than PwC's in the later months. In 2024, PwC's paper coverage is 34.5%, ChatPD's paper coverage that can be mapped to PwC dataset entities is 38.4%, and the paper coverage with extracted dataset information is 85.5%. This demonstrates that ChatPD can stably and efficiently extract dataset information.

7 CONCLUSION

In this paper, we introduce and deploy a novel Large Language Model (LLM)-driven system, ChatPD, for constructing a comprehensive paper-dataset network. ChatPD automates the extraction of dataset information from academic papers, enabling the construction of a structured network that captures the intricate relationships between papers and datasets. Through our entity resolution algorithm, we effectively map diverse dataset descriptions to their corresponding real-world dataset entities. We evaluate ChatPD's performance in dataset information extraction, dataset description entity resolution, and network construction through offline experiments. We deploy ChatPD on papers in the cs.AI category on arXiv and evaluate its deployment performance. We also demonstrate ChatPD's dataset discovery services, including table-based and graph-based queries (Appendix A.1). However, we must acknowledge that, due to the current limitations of LLMs, there may be some errors in the fully automated construction of the paper-dataset network. We
believe that our future system can collaborate with platforms like PwC to transition from entirely manual annotation to manual calibration based on the results obtained from ChatPD. This can significantly reduce the workload of manual annotation and yield a more accurate paper-dataset network. As we continue to refine and expand our system, we are optimistic about its potential to transform the way researchers interact with datasets, making the landscape of academic research more interconnected and accessible than ever before.

ChatPD: An LLM-driven Paper-Dataset Networking System KDD '25, August 3–7, 2025, Toronto, ON, Canada

ACKNOWLEDGEMENTS

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. U23A20468.

REFERENCES

[1] 2016. Diginetica dataset for CIKM Cup 2016 challenge. https://competitions.codalab.org/competitions/11161.
[2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
[3] arXiv.org submitters. 2024. arXiv Dataset. https://doi.org/10.34740/KAGGLE/DSV/7548853
[4] Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. 2018. From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge. IEEE Transactions on Medical Imaging (2018).
[5] Mathieu Bastian, Sebastien Heymann, and Mathieu Jacomy. 2009. Gephi: an open source software for exploring and manipulating networks. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 3. 361–362.
[6] David Ben-Shimon, Alexander Tsikinovsky, Michael Friedmann, Bracha Shapira, Lior Rokach, and Johannes Hoerle. 2015. RecSys challenge 2015 and the YOOCHOOSE dataset.
In Proceedings of the 9th ACM Conference on Recommender Systems. 357–358.
[7] Dan Brickley, Matthew Burgess, and Natasha Noy. 2019. Google Dataset Search: Building a search engine for datasets in an open Web ecosystem. In The World Wide Web Conference. 1365–1375.
[8] Adriane Chapman, Elena Simperl, Laura Koesten, George Konstantinidis, Luis-Daniel Ibáñez, Emilia Kacprzak, and Paul Groth. 2020. Dataset search: a survey. The VLDB Journal 29, 1 (2020), 251–272.
[9] Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. 2018. BabyAI: A platform to study the sample efficiency of grounded language learning. arXiv preprint arXiv:1810.08272 (2018).
[10] William W Cohen. 2000. Data integration using similarity joins and a word-based information representation language. ACM Transactions on Information Systems (TOIS) 18, 3 (2000), 288–321.
[11] Together Computer. 2023. RedPajama: An Open Source Recipe to Reproduce LLaMA Training Dataset. https://github.com/togethercomputer/RedPajama-Data
[12] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. 2016. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3213–3223.
[13] Jim Cowie and Wendy Lehnert. 1996. Information extraction. Commun. ACM 39, 1 (Jan 1996), 80–91. https://doi.org/10.1145/234173.234209
[14] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. 2023. UltraFeedback: Boosting Language Models with High-quality Feedback. arXiv:2310.01377 [cs.CL]
[15] Will Cukierski. 2012. Titanic -
Machine Learning from Disaster. https://kaggle.com/competitions/titanic
[16] FICO. 2018. FICO xML challenge. https://community.fico.com/s/explainable-machine-learning-challenge
[17] C Lee Giles, Kurt D Bollacker, and Steve Lawrence. 1998. CiteSeer: An automatic citation indexing system. In Proceedings of the Third ACM Conference on Digital Libraries. 89–98.
[18] Benjamin A Goldstein, Ann Marie Navar, Michael J Pencina, and John PA Ioannidis. 2017. Opportunities and challenges in developing risk prediction models with electronic health records data: a systematic review. Journal of the American Medical Informatics Association: JAMIA 24, 1 (2017), 198.
[19] Kathleen Gregory, Paul Groth, Andrea Scharnhorst, and Sally Wyatt. 2020. Lost or Found? Discovering Data Needed for Research. Harvard Data Science Review, 4 2020.
[20] Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. arXiv preprint arXiv:1804.11283 (2018).
[21] Qianyue Hao, Jingyang Fan, Fengli Xu, Jian Yuan, and Yong Li. 2024. HLM-Cite: Hybrid Language Model Workflow for Text-based Scientific Citation Prediction. In Advances in Neural Information Processing Systems, A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang (Eds.), Vol. 37. Curran Associates, Inc., 48189–48223. https://proceedings.neurips.cc/paper_files/paper/2024/file/5635925cf9d2274f338eb0dd5971e845-Paper-Conference.pdf
[22] Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551 (2017).
[23] Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4401–4410.
[24] Dagmar Kern and Brigitte Mathiak. 2015.
Are there any differences in data set retrieval compared to well-known literature retrieval?. In Research and Advanced Technology for Digital Libraries: 19th International Conference on Theory and Practice of Digital Libraries, TPDL 2015, Poznań, Poland, September 14-18, 2015, Proceedings 19. Springer, 197–208.
[25] Laura Koesten, Elena Simperl, Tom Blount, Emilia Kacprzak, and Jeni Tennison. 2020. Everything you always wanted to know about a dataset: Studies in data summarisation. International Journal of Human-Computer Studies 135 (2020), 102367.
[26] Laura M Koesten, Emilia Kacprzak, Jenifer FA Tennison, and Elena Simperl. 2017. The Trials and Tribulations of Working with Structured Data: a Study on Information Seeking Behaviour. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 1277–1289.
[27] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7 (2019), 453–466.
[28] Simon Lacoste-Julien, Konstantina Palla, Alex Davies, Gjergji Kasneci, Thore Graepel, and Zoubin Ghahramani. 2013. SiGMa: Simple greedy matching for aligning large knowledge bases. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 572–580.
[29] Yinghao Li, Colin Lockard, Prashant Shiralkar, and Chao Zhang. 2023. Extracting Shopping Interest-Related Product Types from the Web. In Findings of the Association for Computational Linguistics: ACL 2023, Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 7509–7525. https://doi.org/10.18653/v1/2023.findings-acl.474
[30] Yinghao Li, Le Song,
and Chao Zhang. 2022. Sparse Conditional Hidden Markov Model for Weakly Supervised Named Entity Recognition. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (Washington DC, USA) (KDD '22). Association for Computing Machinery, New York, NY, USA, 978–988. https://doi.org/10.1145/3534678.3539247
[31] Yongqi Li, Yu Yu, and Tieyun Qian. 2023. Type-Aware Decomposed Framework for Few-Shot Named Entity Recognition. In Findings of the Association for Computational Linguistics: EMNLP 2023, Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 8911–8927. https://doi.org/10.18653/v1/2023.findings-emnlp.598
[32] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13. Springer, 740–755.
[33] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130 (2017).
[34] Pierre Lison, Jeremy Barnes, Aliaksandr Hubin, and Samia Touileb. 2020. Named Entity Recognition without Labelled Data: A Weak Supervision Approach. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (Eds.). Association for Computational Linguistics, Online, 1518–1533. https://doi.org/10.18653/v1/2020.acl-main.139
[35] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. 2024. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437 (2024).
[36] Yinhan Liu. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 364 (2019).
[37] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision. 3730–3738.
[38] Tingting Ma, Huiqiang Jiang, Qianhui Wu, Tiejun Zhao, and Chin-Yew Lin. 2022. Decomposed Meta-Learning for Few-Shot Named Entity Recognition. In Findings of the Association for Computational Linguistics: ACL 2022, Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (Eds.). Association for Computational Linguistics, Dublin, Ireland, 1584–1596. https://doi.org/10.18653/v1/2022.findings-acl.124
[39] Fernando Martínez-Plumed, Pablo Barredo, Sean O Heigeartaigh, and Jose Hernandez-Orallo. 2021. Research community dynamics behind popular AI benchmarks. Nature Machine Intelligence 3, 7 (2021), 581–589.
[40] Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. 2023. Recent advances in natural language processing via large pre-trained language models: A survey. Comput. Surveys 56, 2 (2023), 1–40.
[41] Alvaro E Monge, Charles Elkan, et al. 1996. The field matching problem: algorithms and applications. In KDD, Vol. 2. 267–270.
[42] Janna Neumann and Jan Brase. 2014. DataCite and DOI names for research data. Journal of Computer-Aided Molecular Design 28 (2014), 1035–1041.

KDD '25, August 3–7, 2025, Toronto, ON, Canada Anjie Xu, Ruiqing Ding, and Leye Wang

[43] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human-generated machine reading comprehension dataset. (2016).
[44] Folorunsho Olaiya and Adesesan Barnabas Adeyemo. 2012. Application of data mining techniques in weather
prediction and climate change studies. International Journal of Information Engineering and Electronic Business 4, 1 (2012), 51.
[45] Kelley Pace and Ronald Barry. 1997. Sparse spatial autoregressions. Statistics & Probability Letters 33, 3 (1997), 291–297. https://EconPapers.repec.org/RePEc:eee:stapro:v:33:y:1997:i:3:p:291-297
[46] Huitong Pan, Qi Zhang, Eduard Dragut, Cornelia Caragea, and Longin Jan Latecki. 2023. DMDD: A large-scale dataset for dataset mentions detection. Transactions of the Association for Computational Linguistics 11 (2023), 1132–1146.
[47] Animesh Prasad, Chenglei Si, and Min-Yen Kan. 2019. Dataset mention extraction and classification. In Proceedings of the Workshop on Extracting Structured Knowledge from Scientific Publications. 31–36.
[48] Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. 2018. VirtualHome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 8494–8502.
[49] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 (2016).
[50] Pradeep Ravikumar and William Cohen. 2012. A hierarchical graphical model for record linkage. arXiv preprint arXiv:1207.4180 (2012).
[51] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. 2008. Collective classification in network data. AI Magazine 29, 3 (2008), 93–93.
[52] Heinrich Stamerjohanns, Michael Kohlhase, Deyan Ginev, Catalin David, and Bruce Miller. 2010. Transforming large collections of scientific publications to XML. Mathematics in Computer Science 3 (2010), 299–307.
[53] Qwen Team. 2024. Qwen2.5: A Party of Foundation Models. https://qwenlm.github.io/blog/qwen2.5/
[54] Hanghang Tong, Christos Faloutsos, and Jia-Yu Pan. 2008. Random walk with restart: fast solutions and applications.
Knowledge and Information Systems 14 (2008), 327–346.
[55] Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. NewsQA: A machine comprehension dataset. arXiv preprint arXiv:1611.09830 (2016).
[56] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461 (2018).
[57] Mu-Chun Wang, Zixuan Liu, and Sheng Wang. 2022. Textomics: a dataset for genomics data summary generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 4878–4891.
[58] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652 (2021).
[59] Renzhi Wu, Sanya Chaba, Saurabh Sawlani, Xu Chu, and Saravanan Thirumuruganathan. 2020. ZeroER: Entity resolution using zero labeled examples. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data. 1149–1164.
[60] Han Xiao, Kashif Rasul, and Roland Vollgraf. 2017. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 (2017).
[61] Zheng Zhang, Jie Gao, Ranjodh Singh Dhaliwal, and Toby Jia-Jun Li. 2023. VISAR: A human-AI argumentative writing assistant with visual programming and rapid draft prototyping. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–30.

A APPENDIX

A.1 Dataset Discovery Service

Building upon the constructed paper-dataset network, our system is poised
to offer a suite of services designed to enhance the research community's ability to discover and utilize datasets effectively. These services are not only aimed at simplifying the dataset search process but also at providing insights into dataset relevance, usage trends, and their applicability to various research tasks. Here we divide the services supported by the current network into two categories: table-based query and graph-based query.

Figure 5: Visualization of Paper-Dataset Network (F-MNIST means Fashion-MNIST, NQ means Natural Questions).

A.1.1 Table-Based Query. The table-based query is the traditional way to query dataset information in a database. Users can search for datasets by specifying criteria such as the dataset name, associated tasks, data types, or the research domains they are interested in. The search results are augmented with information on how and where the datasets have been used in the literature, offering researchers valuable context.

Demo: Which datasets occurred in New York? Urban planners, for example, may want to know which datasets are used in New York. They can use a Structured Query Language (SQL) query to get the answer:

SELECT * FROM dataset WHERE location LIKE '%New York%'

Here is the result of the query:

{
  "arxiv id": "2108.04462",
  "title": "Deep Reinforcement Learning for Demand Driven Services in Logistics and Transportation Systems: A Survey",
  "dataset name": "New York City TLC Dataset",
  "dataset summary": "Contains travel records for various services, including Yellow taxis, Green taxis, and FHV (For-Hire Vehicle) from 2009 to 2020.",
  "task": "Dispatching",
  "data type": "Travel records",
  "location": "New York City, USA",
  "time": "2009-2020",
  "scale":
 "Large",
  "dataset provider": "New York City TLC",
  "dataset url": "https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page",
  "dataset publicly available": "Yes",
  "other useful information about this dataset": "This dataset is commonly used for various studies in transportation and urban mobility."
},
...

A.1.2 Graph-Based Query. Our paper-dataset network is a kind of data in graph form, which carries rich information. We use Gephi [5], a popular network visualization tool, to visualize the constructed paper-dataset network. The network is a bipartite graph, where each node represents a paper or a dataset, and each edge represents the usage of a dataset in a paper. We sampled Cityscapes [12], CelebA [37], FFHQ [23], and Fashion-MNIST [60] for Computer Vision; GLUE [56], SQuAD [49], and Natural Questions [27] for Natural Language Processing; and Yelp [33], PubMed [51], and CiteSeer [17] for graph-based analyses. As shown in Fig. 5, datasets from different domains are naturally clustered together. This is in line with common sense: papers in a given research direction use multiple datasets from that field to conduct experiments and verify the generalization of the proposed methods. Inspired by this, we can use graph algorithms to find datasets similar to a given dataset.

Table 6: Top 5 Datasets Similar to the SQuAD Dataset

Dataset | Similarity Score
GLUE | 0.0198
Natural Questions | 0.0110
NewsQA | 0.0098
TriviaQA | 0.0097
MS MARCO | 0.0064

One effective way to find similar datasets with the paper-dataset network is by employing a Random Walk with Restart (RWR) algorithm [54].
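As a concrete illustration, RWR on the paper-dataset graph can be sketched as follows. This is a minimal sketch on a toy bipartite graph; the node indices, edges, and resulting scores are invented for illustration and are not the deployed ChatPD implementation.

```python
# Minimal Random Walk with Restart (RWR) sketch on a toy bipartite
# paper-dataset graph. Node indices and edges are invented examples.
import numpy as np

def rwr_scores(adj, seed, restart_prob=0.15, tol=1e-10, max_iter=1000):
    """Iterate p <- (1 - c) * W p + c * e until convergence."""
    adj = np.asarray(adj, dtype=float)
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # guard against isolated nodes
    W = adj / col_sums                     # column-stochastic transition matrix
    e = np.zeros(adj.shape[0])
    e[seed] = 1.0                          # restart distribution
    p = e.copy()
    for _ in range(max_iter):
        p_next = (1 - restart_prob) * W @ p + restart_prob * e
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy bipartite graph: nodes 0-2 are papers, nodes 3-5 are datasets.
# Paper 0 uses datasets 3 and 4; paper 1 uses 4 and 5; paper 2 uses 5.
edges = [(0, 3), (0, 4), (1, 4), (1, 5), (2, 5)]
A = np.zeros((6, 6))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

scores = rwr_scores(A, seed=3)             # walks restart at dataset node 3
similar = sorted([4, 5], key=lambda d: -scores[d])
print(similar)                             # dataset 4 shares a paper with 3
```

The steady-state visiting probabilities of the other dataset nodes serve as similarity scores, analogous to the values reported in Table 6.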
The RWR algorithm simulates a random walker traversing the graph: the walker starts at a given node, moves to a neighboring node with a certain probability at each step, and restarts at the original node with a certain probability. RWR is widely used in graph-based recommendation systems and is effective at finding similar nodes in a graph. Demo: Which datasets
are similar to dataset SQuAD? The Stanford Question Answering Dataset (SQuAD) is a dataset of question-answer pairs, widely used in the field of natural language processing. We can use the RWR algorithm to find datasets similar to the SQuAD dataset. The top 5 similar datasets are shown in Table 6. Similar to the SQuAD dataset, the GLUE [56], Natural Questions [27], NewsQA [55], TriviaQA [22], and MS MARCO [43] datasets are widely used for training and evaluating machine reading comprehension models. Researchers often employ the SQuAD dataset in conjunction with these datasets to verify whether their models' understanding and reasoning capabilities generalize across diverse benchmarks.

A.2 Agentic Framework for Dataset Information Extraction

ChatPD currently selects sections of the paper that are relevant to dataset information extraction based on rules. However, this approach may overlook datasets that are used in sections other than those explicitly designated for dataset information extraction, such as the Related Work section. To address this issue, we propose an Agentic Framework for dataset information extraction, inspired by the agent module in the HLM-Cite [21] system, to better locate and extract dataset-related content scattered across different sections of the paper. The framework is designed as follows:

•Summarizer: Summarizes the main content of each section of the paper.
•Selector: Selects sections that are likely to contain dataset-related information based on their summaries.
•Extractor: Analyzes the selected sections to extract structured dataset details such as names, tasks, and data types.

We evaluate the performance of the Agentic Framework on the dataset information extraction task and supplement the cost comparison in Table 7. We use GPT-4o-mini as the LLM backend in ChatPD and use the different input strategies for comparison.
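The three-stage control flow above can be sketched as follows. The deployed system prompts GPT-4o-mini at each stage; the stand-in functions here (summarize, is_dataset_related, extract) are invented placeholders so the pipeline is runnable without an LLM.

```python
# Runnable sketch of the Summarizer -> Selector -> Extractor pipeline.
# Each stage is a stand-in for an LLM prompt, not the real prompts.

def summarize(section_text: str) -> str:
    # Stand-in Summarizer: truncation instead of an LLM summary.
    return section_text[:80]

def is_dataset_related(summary: str) -> bool:
    # Stand-in Selector: keyword heuristic instead of an LLM judgment.
    return any(k in summary.lower() for k in ("dataset", "benchmark", "corpus"))

def extract(section_text: str) -> list:
    # Stand-in Extractor: the real agent returns JSON objects with
    # fields such as "dataset name", "task", and "data type".
    return [{"dataset name": "ExampleDS", "task": "illustration only"}]

def agentic_extract(sections: list) -> list:
    records = []
    for sec in sections:
        if is_dataset_related(summarize(sec)):    # Summarizer feeds Selector
            records.extend(extract(sec))           # Selector gates Extractor
    return records

paper_sections = [
    "We evaluate our model on the SQuAD dataset for reading comprehension.",
    "Related work on attention mechanisms dates back to 2014.",
]
print(agentic_extract(paper_sections))   # only the first section is selected
```

The Selector gate is what saves cost relative to extracting from every section, while still reaching content outside the rule-designated sections.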
We notice that the Agentic Framework performs better than the previous method on all metrics except Micro Precision, indicating that it can better locate dataset-related content and extract more comprehensive dataset usage information.

Table 7: Comparison of Input Strategies for Dataset Information Extraction

Input Strategy | EMR | P | R | F1 | Cost
PapersWithCode | 0.420 | 0.826 | 0.659 | 0.733 | -
ChatPD (1.5k tokens) | 0.689 | 0.987 | 0.780 | 0.872 | 0.00063
ChatPD (Full Paper) | 0.723 | 0.966 | 0.850 | 0.904 | 0.00447
ChatPD (Agentic) | 0.773 | 0.896 | 0.946 | 0.920 | 0.00739
Note: EMR = Exact Match Ratio, P = Micro Precision, R = Micro Recall, F1 = Micro F1 Score. Cost is measured in USD per paper.

By analyzing the bad cases, we find that the Agentic Framework sometimes mistakenly identifies datasets that were mentioned (but not actually used) in the Related Work section as datasets used by the paper, which leads to a decrease in Micro Precision. The Agentic Framework demonstrates strong overall performance and promising potential, but currently costs about 11.7 times more than the original method. In the future, we will explore a more efficient Agentic Framework to optimize the current ChatPD.
Scientific Data | (2025) 12:500 | https://doi.org/10.1038/s41597-025-04567-y | www.nature.com/scientificdata

VME: a Satellite Imagery Dataset and Benchmark for Detecting Vehicles in the Middle East and Beyond

Noora al-Emadi 1,2 ✉, Ingmar Weber 3, Yin Yang 2 & Ferda Ofli 1

Detecting vehicles in satellite images is crucial for traffic management, urban planning, and disaster response. However, current models struggle with real-world diversity, particularly across different regions. This challenge is amplified by geographic bias in existing datasets, which often focus on specific areas and overlook regions like the Middle East. To address this gap, we present the Vehicles in the Middle East (VME) dataset, designed explicitly for vehicle detection in high-resolution satellite images from Middle Eastern countries. Sourced from Maxar, the VME dataset spans 54 cities across 12 countries, comprising over 4,000 image tiles and more than 100,000 vehicles, annotated using both manual and semi-automated methods. Additionally, we introduce the largest benchmark dataset for Car Detection in Satellite Imagery (CDSI), combining images from multiple sources to enhance global car detection. Our experiments demonstrate that models trained on existing datasets perform poorly on Middle Eastern images, while the VME dataset significantly improves detection accuracy in this region. Moreover, state-of-the-art models trained on CDSI achieve substantial improvements in global car detection.

Background & Summary

Satellite imagery has become an essential instrument for a wide range of applications, from agriculture [1] and environmental monitoring [2] to urban development [3,4] and disaster response [5]. A recent review of object detection in satellite imagery highlights the difficulty of creating a general-purpose model that can handle thousands of diverse object categories and varying real-world conditions [6].
Instead, the study recommends focusing on task-specific models in narrower application areas, where success is more likely if large, well-annotated datasets are available. Therefore, our study focuses on vehicle detection in satellite imagery, a critical task with diverse real-world applications such as analyzing traffic flow and patterns for traffic management [7,8], monitoring parking lot occupancy rates to support urban planning [9], and modeling spatial-temporal changes in vehicle counts as a proxy for internal displacement monitoring [10]. To this end, we first present a novel labeled dataset called Vehicles in the Middle East (VME) to attenuate the under-representation of the region. We then construct the largest benchmark dataset, called Car Detection in Satellite Imagery (CDSI), by consolidating images from multiple existing satellite imagery datasets for enhanced global car detection.

Detecting vehicles in satellite imagery is challenging because each vehicle covers only a few pixels, classifying them as tiny objects. As a result, the surrounding context becomes crucial for accurately delineating these small objects. Several studies have been conducted on tiny object detection in satellite imagery [11–13]. A review comprehensively analyzed these methods based on five factors: data augmentation, multi-scale feature learning, context-based detection, training strategy, and GAN-based detection, and showed that these factors play a role in enhancing detection performance on tiny objects [14]. Another systematic study on small object detection was conducted by reviewing existing literature on algorithms and datasets [15]. Two large-scale benchmarks, SODA-D and SODA-A, were constructed for driving scenarios and aerial scenes. Several algorithms were evaluated
https://arxiv.org/abs/2505.22353v1
on top of these benchmarks with in-depth analyses, resulting in discussions on backbone effectiveness, hierarchical feature representation efficiency, and one-stage detector performance for small object detection. In addition, several studies were performed on vehicle and car detection [16–19]. These studies [17,18] focus on the development of new vehicle detection models, as well as the enhancement of existing ones, utilizing publicly available datasets such as DOTA [20], VEDAI [21], xView [22], fMoW [23], VAID [24], and AI-TOD [25]. However, existing models for vehicle detection face challenges when applied to diverse real-world scenarios involving the analysis of satellite images from previously unexplored geographic regions [26]. For example, the visual context of a car on the road in Abu Kamal City, Syria (Fig. 1a) and Alexandria City, Egypt (Fig. 1b) presents clear differences compared to a car on the road in Sydney, Australia (Fig. 1c) and Mexico City, Mexico (Fig. 1d). A noticeable contrast is evident in the appearance of built structures and land cover, stemming from unique differences in the natural landscape, climate, economic development, urban planning, and architectural design in Middle Eastern countries. This contrast becomes more pronounced thanks to the rapid pace of urban development in Middle Eastern countries driven by large-scale smart city projects, as opposed to the more incremental urban upgrades seen in the US and Europe [27,28].

Affiliations: 1 Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar. 2 College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar. 3 Saarland Informatics Campus, Saarland University, Saarbrücken, Germany. ✉ e-mail: nalemadi@hbku.edu.qa
Therefore, with the prevalence of datasets focusing on specific regions, a gap related to geographic bias has emerged, particularly in the Middle East, as highlighted in Fig. 2. To bridge this gap, the VME dataset, collected from Maxar, spans 54 cities in 12 countries in the Middle East and comprises more than 4,000 high-resolution image tiles of 512 × 512 pixels with more than 100k vehicle instances. The ground-truth annotations were generated using a combination of manual annotation and semi-automated techniques through a crowdsourcing company. Additionally, the CDSI dataset constitutes the largest benchmark for car detection by expanding VME with images from other existing satellite imagery datasets, such as xView [22], DOTA-v2.0 [20], VEDAI [21], DIOR [29], and FAIR1M-2.0 [30]. We conduct comprehensive experiments using advanced object detection models, such as TOOD [31] and DINO [32], and present baseline results on both individual and combined datasets. The VME baseline evaluation demonstrates a remarkable 56.3% improvement in mAP for car detection in the Middle East compared to models trained on existing datasets. Additionally, the model trained on the CDSI dataset, due to its greater diversity and scale, significantly enhances mAP50, with improvements ranging from 19.6% to 84.6% across all models trained on individual datasets. This newly developed dataset serves as a valuable resource for researchers and professionals in remote sensing, promoting progress in vehicle detection and satellite imagery analysis.

Methods

This section provides details about our novel VME dataset, such as the different categories, image resolution, area coverage, and annotation format. Then, we
elaborate on the new benchmark dataset (CDSI), where we collect car-related objects from the publicly available datasets and combine them with the VME dataset.

VME Dataset. We constructed the VME dataset by collecting satellite images of different cities in Middle Eastern countries such as Syria, Libya, Iraq, Jordan, Egypt, Qatar, Saudi Arabia, United Arab Emirates, Oman, Kuwait, and Bahrain. We included the most popular cities, including the capitals of these countries. The city-level geographic distribution of the collected images in the VME dataset is highlighted with purple circles in Fig. 2, which includes underrepresented geographic regions for vehicle detection in satellite imagery, compared to the blue circles representing the distribution of images in the xView dataset. We note that the remaining datasets do not provide any geographical information at the country or city level and, hence, cannot be accurately represented on the map.

Image Collection. For each city in our dataset, we identified the geographic area of interest (AOI) and collected high-resolution satellite images from Maxar Technologies, which provides access to a large archive of the world's most recent pan-sharpened natural color images at a spatial resolution of up to 30 cm through a paid subscription to their SecureWatch platform. To this end, we searched the archive for satellite images with (i) RGB color, (ii) less than 20% cloud coverage, (iii) a ground sampling distance of at most 50 cm (i.e., images at 30 cm, 40 cm, and 50 cm spatial resolution), and (iv) an off-nadir angle of less than 30 degrees. We downloaded a total of 2,714 image snapshots across all 54 city AOIs. The resulting images are large, with an average dimension in the range of 22,475 × 24,043 pixels. Since this image size is too large for processing and labeling directly, we generated random crops of image tiles with 512 × 512 pixels. This initially yielded a total of 22,125 image tiles.
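The random tiling step can be sketched as follows, assuming each downloaded snapshot is loaded as an H × W × 3 array; the snapshot size and tile count below are toy values, not the dataset's actual numbers.

```python
# Sketch of the random 512 x 512 tiling step on a toy stand-in snapshot.
import numpy as np

TILE = 512

def random_tiles(image, n_tiles, rng):
    h, w = image.shape[:2]
    tiles = []
    for _ in range(n_tiles):
        # Choose a top-left corner so the full 512 x 512 crop fits
        # entirely inside the snapshot (no missing or undefined pixels).
        y = int(rng.integers(0, h - TILE + 1))
        x = int(rng.integers(0, w - TILE + 1))
        tiles.append(image[y:y + TILE, x:x + TILE])
    return tiles

rng = np.random.default_rng(0)
snapshot = np.zeros((2048, 3072, 3), dtype=np.uint8)   # toy stand-in image
tiles = random_tiles(snapshot, n_tiles=8, rng=rng)
print(len(tiles), tiles[0].shape)
```

Sampling corners within `[0, H - 512]` and `[0, W - 512]` is one simple way to guarantee every tile is fully covered by valid pixels before the later filtering steps.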
We ensured the resulting tiles did not have any missing or undefined pixels.

Fig. 1 The distinct visual context of cars on the road in Middle Eastern cities: (a) Abu Kamal, Syria, and (b) Alexandria, Egypt, versus other cities around the world: (c) Sydney, Australia [22], and (d) Mexico City, Mexico [22].

Scientific Data | (2025) 12:500 | https://doi.org/10.1038/s41597-025-04567-y

Furthermore, to keep the annotation budget under control, we manually discarded the tiles that did not have obvious objects to annotate, such as images with completely green or desert areas. As a result of this filtering, we had 4,303 image tiles to be annotated in the next step.

Image Annotation. After inspecting the taxonomies of the existing satellite imagery datasets for vehicle-related classes, we defined a three-class taxonomy comprising the car, bus, and truck classes in our dataset. We decided to collect oriented bounding box (OBB) annotations, as certain applications, such as traffic management, can leverage the direction information as well. We employed Co-one (https://www.co-one.co/), an AI- and crowdsourcing-based data platform that promises 95% annotation accuracy. The annotation process started with preparing the guideline handbook, which outlined the project overview, technical guidelines, targeted categories with definitions and examples, rules and tips for the annotation process, and the deliverable format. Then, the data annotation was carried out by a crowdsourced workforce of 6,000+ people, with each group focusing on a specific category. Finally, an annotation review process was implemented to detect mislabeled annotations via a cross-validation system, and an expert was employed to correct such cases. After the annotation quality review process, the final annotations were delivered. We provided images in lossless PNG format and received OBB annotations in YOLO format as text (*.txt) files. Each annotation file is named after the image it describes, and each line in the file represents a targeted object as follows: x1, y1, x2, y2, x3, y3, x4, y4, category_id, where (x1, y1) is the top-left, (x2, y2) the top-right, (x3, y3) the bottom-right, and (x4, y4) the bottom-left point of the OBB, and category_id indicates the class index (0, 1, 2 corresponding to car, bus, and truck, respectively). Additionally, we obtained standard horizontal bounding box (HBB) annotations based on the minimum and maximum x and y coordinates of the OBB annotations, together with their category. To better help the community utilize the dataset, we provide both the oriented and horizontal bounding box annotation files.

Final Dataset. Out of the 4,303 images annotated, 21 were deemed damaged or corrupted and excluded from the dataset. Hence, the final dataset contains 4,282 images with a total of 113,737 objects, comprising 101,564 cars, 5,327 buses, and 6,846 trucks, while 241 images do not contain any instances of the target object classes and are tagged as no_label. The distribution of classes is shown in Fig. 3a. Also, Fig. 3b,c,d highlight the area distributions of cars, buses, and trucks in pixels, respectively.
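The OBB-to-HBB derivation described above (minimum and maximum x and y coordinates of the OBB corners) can be sketched as follows. This is a minimal sketch, assuming whitespace-separated fields in each annotation line; the function name is ours, not from the released scripts.

```python
CLASSES = ["car", "bus", "truck"]  # category_id 0, 1, 2, as defined in the paper

def obb_to_hbb(line: str):
    """Convert one OBB annotation line to an HBB plus class name.

    Assumes whitespace-separated fields: x1 y1 x2 y2 x3 y3 x4 y4 category_id.
    The HBB is returned as (x_min, y_min, width, height), following the
    min/max rule described in the text.
    """
    *coords, cat = line.split()
    xs = [float(v) for v in coords[0::2]]  # x1, x2, x3, x4
    ys = [float(v) for v in coords[1::2]]  # y1, y2, y3, y4
    x_min, y_min = min(xs), min(ys)
    hbb = (x_min, y_min, max(xs) - x_min, max(ys) - y_min)
    return hbb, CLASSES[int(cat)]

hbb, name = obb_to_hbb("10 20 40 20 40 50 10 50 0")
# hbb == (10.0, 20.0, 30.0, 30.0), name == "car"
```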
It is observed that all of the car instances fall within the small object range (i.e., area (pixels) < 32²) defined in the MS-COCO evaluation, specifically within the first half of that range (i.e., area (pixels) < 512), which corresponds to tiny objects. On the other hand, both the bus and truck instances fall mostly within the small object range (area (pixels) < 32²), with almost negligible overlap into the medium object range (32² < area (pixels) < 96²). We provide training, validation, and test sets of the dataset following a random split with ratios of 5/8, 1/8, and 2/8, respectively. Table 1 presents the statistics for all VME categories, outlining the details of each split.

CDSI Dataset. This section introduces the related object detection datasets in satellite imagery, namely xView, DOTA-v2.0, VEDAI, FAIR1M-2.0, and DIOR. It also describes the filtering and consolidation process of the CDSI dataset.

Fig. 2 The geographical distribution of VME and xView, denoted as purple and blue circles, respectively. There is no geographical information reported for the remaining datasets.

Existing Datasets. We explored a large list of publicly available datasets for object detection to employ in our study. We excluded low-altitude, drone-based, and UAV-based datasets and
the datasets with high ground sample distance (GSD) ranges or hidden contexts, such as COWC [33], PaCaBa [34], PSU [35], and VisDrone [36]. Some of the newer datasets, e.g., VehSat [37] and EAGLE [38], are not yet released. The following are the datasets we employed in our study.

Fig. 3 Statistical properties of the object categories in the VME dataset. (a) Distribution of VME categories, (b) area distribution of cars, (c) area distribution of buses, (d) area distribution of trucks.

Categories   Training            Validation          Test
             # ann.    # imgs    # ann.    # imgs    # ann.    # imgs
Car          63,051    2,449     12,055    510       26,458    988
Bus          3,079     964       674       183       1,574     394
Truck        4,140     1,041     1,004     225       1,702     428
All          70,270    2,505     13,733    525       29,734    1,011

Table 1. Number of images and annotations in each category across training, validation, and test splits of the VME dataset.

xView [22] is considered one of the largest publicly available datasets, containing 846 images collected from Maxar at various locations around the world, as shown in Fig. 2. The images are available at 30 cm/pixel spatial resolution, and the average dimension of the images is 3,316 × 2,911. The dataset has 60 object classes with 1 million object instances annotated using horizontal bounding boxes across all splits, while the ground truth of the testing split is not available. The xView repository (https://challenge.xviewdataset.org/data-download) provides the training and validation images in TIF format and the annotations in GeoJSON format.

DOTA-v2.0 [20] contains 2,423 images gathered from Google Earth, different satellites supplied by the Resources Satellite Data and Application Center in China, and aerial images supplied by CycloMedia B.V. The size of the images ranges from 800 to 20,000 pixels, and their spatial resolution varies between 0.1 m/pixel and 4.5 m/pixel.
The dataset contains 18 object classes, and objects are annotated using both oriented and horizontal bounding boxes. The dataset is presented in three versions, where the final version (v2.0) contains a total of 1,793,658 object instances across all splits, while the ground truth of the testing split is not available. The DOTA images were released with no geographical information. We obtained DOTA-v2.0 from its repository (https://captain-whu.github.io/DOTA/index.html); the images are in PNG format and the annotations are in YOLO (TXT) format. Acquiring DOTA-v2.0 requires users to download DOTA-v1.0 first and then obtain the v2.0 update.

VEDAI [21] was built specifically for detecting vehicles in satellite imagery, covering classes such as boats, planes, tractors, cars, and vans. The dataset provides two sets of 1,246 images in colored and infrared format, each set at a different spatial resolution (12.5 cm/pixel or 25 cm/pixel) and, hence, different image dimensions (1024 × 1024 or 512 × 512 pixels). The annotation format used for the dataset is the oriented bounding box. No geographic information is revealed in VEDAI. In our study, we downloaded colored images with a spatial resolution of 25 cm/pixel (i.e., image dimensions of 512 × 512 pixels) from the VEDAI repository (https://downloads.greyc.fr/vedai/). The annotations are stored in TXT files, reporting the four corners of the OBBs along with the category.

DIOR [29] is another large-scale benchmark dataset for
object detection in optical satellite images. It consists of 23,463 images annotated for 20 object categories and 192,512 object instances using horizontal bounding boxes. The spatial resolution of the images is between 0.5 m/pixel and 30 m/pixel. The dataset claims to cover more than 80 countries, but the specific list of countries has not been released. The dataset can be downloaded from the DIOR repository (https://gcheng-nwpu.github.io/), which delivers the images in JPG format and the annotation files in PASCAL-VOC (XML) format.

FAIR1M-2.0 [30] contains more than 20,000 images with more than 1 million instances of fine-grained object categories. The images are gathered from Google Earth and the Gaofen satellites, with spatial resolutions between 0.3 m/pixel and 0.8 m/pixel. The object annotations were collected for five main categories and 37 sub-categories using oriented bounding boxes. It is stated that the dataset covers different continents, but country- or city-level details about the image locations are not published. The dataset can be obtained from the FAIR1M repository (https://gaofen-challenge.com/benchmark); its annotation files are presented in PASCAL-VOC (XML) format, and the images are offered in TIF format.

Category Mapping. To construct a unified benchmark dataset for car detection in satellite imagery, we investigated the taxonomies of the aforementioned datasets. Each dataset labels car-related objects differently, using terms like "small car," "small vehicle," "vehicle," "car," or "van." Thus, we visually inspected these categories to ensure they correspond to the same "car" object we are targeting. For instance, "small car" in xView and "small vehicle" in DOTA-v2.0 refer to standard cars, while in DIOR "vehicle" covers a broader range of vehicles (e.g., cars, trucks, buses, and vans), with "car" being a subset of this general category. Figure 4 illustrates the car-related objects across datasets that we target for constructing the CDSI dataset.
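The per-dataset label unification described above could be encoded as a simple lookup. The car-related label sets below are taken from the text and Table 2; the function name and structure are ours, offered as an illustrative sketch rather than the released conversion scripts.

```python
# Car-related source labels per dataset (from the paper's category mapping).
CAR_RELATED = {
    "xView": {"small car"},
    "DOTA-v2.0": {"small vehicle"},
    "VEDAI": {"car", "van"},
    "DIOR": {"vehicle"},
    "FAIR1M-2.0": {"small car", "van"},
    "VME": {"car"},
}

def unify_label(dataset: str, label: str) -> str:
    """Map a source label to the unified CDSI 'car' class if it is car-related;
    everything else falls into the 'other' class (size filtering happens later)."""
    return "car" if label in CAR_RELATED.get(dataset, set()) else "other"
```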
We conclude that these classes can be mapped to the same object type, i.e., car, under certain conditions, such as filtering by typical car size. Specifically, car-related categories were mapped to the car category in CDSI for objects with an HBB area of less than 400 pixels, as detailed in Table 2. To avoid the challenges associated with training an object detection model on only a single class, we ensured the model encountered hard negatives: objects similar in size to cars. To achieve this, we opted to group all other small objects into a single category called "other." Accordingly, instances from all other categories with an HBB area of less than 400 pixels were mapped to the other small object category in CDSI, as reported in Table 2. As a result, the CDSI dataset consists of two classes: "car" and "other." The data processing and filtering steps are explained next.

Data Processing and Filtering. Each dataset uses a different annotation style (e.g., OBB, HBB, or both) and adopts a different data representation and file format (e.g., XML files in PASCAL-VOC format, TXT files in YOLO format, JSON files in MS-COCO format, etc.). To consolidate all of the datasets, we designed a data processing pipeline, illustrated in Fig. 5, with the
following steps:
• Annotation standardization: We standardize all the annotations from the different datasets to HBB style. Then, we convert the standardized annotations into MS-COCO format, which defines a bounding box by four values in pixels (x_min, y_min, width, height).
• Car-related object size filtering: Given that we are interested in a GSD range of 30-50 cm per pixel, we assume that an object with an area greater than 400 pixels is unlikely to be a car. To verify this assumption, we analyzed the car size distributions in all datasets, as shown in Fig. 6. This analysis reveals that an area of less than 400 pixels accounts for more than 90% of all car-related object instances across all datasets. Therefore, we decided to filter out all object instances with an area larger than 400 pixels, even if they were originally labeled as cars. During our visual inspection, we discovered that these cases often relate to labeling errors or images with spatial resolutions exceeding the targeted GSD range.
• Relabeling small objects: Using the same threshold, we repeat Step 2 to identify all other small object instances with an area of less than 400 pixels and label them as the "other" category.
• Training setups: Depending on the experimental setup, the car-related object instances are merged with the other small object instances to construct the car-other setup for model training. In contrast, only car-related objects are employed to form the car setup for model training (refer to the "Technical Validation" section for details).

Fig. 4 Example images with car-related objects in (a) xView [22], (b) DOTA-v2.0 [20], (c) VEDAI [21], (d) DIOR [29], (e) FAIR1M-2.0 [30], and (f) VME (our) datasets.

Final Dataset. Table 2 provides general information and summary statistics about all datasets (individual and consolidated) before and after the data processing and filtering pipeline.
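The filtering and relabeling steps above can be sketched on COCO-style annotation dicts. This is a minimal sketch under stated assumptions: annotations are already standardized to HBB, the 400-pixel threshold is from the text (treatment of the exact boundary value is our assumption), and all names are ours.

```python
AREA_THRESHOLD = 400  # pixels; larger objects are unlikely to be cars at 30-50 cm GSD

def filter_and_relabel(anns, car_labels):
    """Sketch of the CDSI size filtering and relabeling rules.

    `anns`: dicts with 'bbox' = [x_min, y_min, width, height] and 'label'
    (the source dataset's label string).
    `car_labels`: the dataset's car-related source labels, e.g. {"small car"}.
    Instances at or above the threshold are dropped, even if labeled as cars
    (often labeling errors); the rest become 'car' or 'other'.
    """
    out = []
    for ann in anns:
        area = ann["bbox"][2] * ann["bbox"][3]
        if area >= AREA_THRESHOLD:
            continue  # filter out large instances
        label = "car" if ann["label"] in car_labels else "other"
        out.append({**ann, "label": label})
    return out
```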
The final combined dataset, i.e., CDSI, contains a total of 23,250 images with 896,760 car-related object instances and 185,619 other small object instances. Note that we also created a version of CDSI, denoted as CDSI*, where we excluded the VME dataset from the consolidation process to highlight the contribution of the VME dataset. With regard to the training, validation, and test sets, we first created random splits of all images in each individual dataset after filtering, with ratios of 5/8, 1/8, and 2/8 (as in the VME dataset). We then combined the resulting splits from the different datasets to form the final data splits for both the CDSI and CDSI* datasets. For instance, the CDSI training set is simply the union of the training sets of all datasets, and the same rule applies to the validation and test sets.

Data Records
The repository available at Zenodo [39] consists of (a) the VME dataset, including satellite images and annotation files, and (b) the scripts and instructions for creating the CDSI dataset.

Overview of the repository files and their formats. The repository is structured into four components as follows:
• annotations_OBB: This folder holds TXT files in YOLO format with Oriented Bounding Box (OBB) annotations. Each annotation file is named after the corresponding image, with each line
describing a targeted object as follows: x1, y1, x2, y2, x3, y3, x4, y4, category_id, where (x1, y1) is the top-left, (x2, y2) the top-right, (x3, y3) the bottom-right, and (x4, y4) the bottom-left point of the OBB, and category_id indicates the class index (0, 1, 2 corresponding to car, bus, and truck, respectively). The annotation files of images that do not include any of the targeted objects are empty.
• annotations_HBB: This folder contains HBB annotations in separate JSON files for the training, validation, and test splits, formatted according to the MS-COCO standard, which defines a bounding box by four values in pixels (x_min, y_min, width, height).
• satellite_images: This folder contains the VME images in PNG format, each with a resolution of 512 × 512 pixels.
• CDSI_construction_scripts: This directory contains all the necessary instructions for constructing the CDSI dataset, including: (a) guidelines for downloading each dataset from its respective repository, (b) scripts for converting each dataset to the MS-COCO format, located within the corresponding dataset folders, and (c) instructions for combining the datasets. The training, validation, and test splits are provided in the CDSI_construction_scripts/data_utils folder. Each split file lists the images from each dataset used in the car detection experiments for both detectors.
Additional information on the environment setup and required packages is available in the README.md file.

Dataset       Annotation  Resolution (m/pixel)  Orig. cat.  All objects  All images  Car-related classes  Car-related obj.  Other small obj.  Retained images
xView         HBB         0.3                   60          601,718      846         small car            210,184           55,752            752
DOTA-v2.0     HBB/OBB     0.1 to 4.5            18          349,675      2,423       small vehicle        175,160           37,037            1,300
VEDAI         OBB         0.25                  10          3,754        1,246       car, van             1,422             1,292             1,057
DIOR          HBB/OBB     0.5 to 30             20          192,512      23,463      vehicle              23,964            33,521            5,327
FAIR1M-2.0    OBB         0.3 to 0.8            37          594,482      24,775      small car, van       384,488           48,416            10,795
CDSI* (our)   HBB         0.1 to 30             2           971,236      19,321      car                  795,218           176,018           19,321
VME (our)     OBB/HBB     0.3, 0.4, 0.5         3           113,737      4,041       car                  101,542           9,601             4,019
CDSI (our)    HBB         0.1 to 30             2           1,082,379    23,250      car                  896,760           185,619           23,250

Table 2. Statistics of car-related and other small object categories in different datasets. CDSI* indicates the version of CDSI without VME.

Technical Validation
In this section, we perform a formal assessment of the quality of the VME annotations. Additionally, we provide details on benchmarks conducted across diverse setups and present analytical results to demonstrate the reliability and validity of the VME and CDSI datasets.

VME Annotation Quality. We implemented quality control to ensure the accuracy and consistency of the VME dataset annotations. We randomly selected around 5% of the images across all 54 cities and resolutions and labeled these images in-house (by the lead author) to establish ground truth. This process yielded 5,664 ground-truth annotations in 215 images. Next, we compared these labels with the annotations from the crowdsource-based platform to count True Positives (TP), False Positives (FP), and False Negatives (FN). We identified 5,496 TP, 7 FP, and 168 FN annotations.
We then used these values to compute precision, recall, and F1 scores of 0.999, 0.970, and 0.984, respectively. Although some objects were missed, the crucial factor is that the identified objects are indeed the targeted ones, making the minimization of False Positives a priority. This process demonstrates that the annotations are highly accurate.

Fig. 5 Dataset consolidation pipeline and final experimental setups.

Fig. 6 Distribution of car sizes in (a) xView, (b) DOTA-v2.0, (c) VEDAI, (d) FAIR1M-2.0, (e) DIOR, and (f) VME (our) datasets.

Detection Benchmarks. This section describes the benchmark setup and the application of state-of-the-art detection models to evaluate the technical quality and scientific significance of the VME and CDSI datasets. To this end, we explored three different setups to assess how varying numbers of images and objects (not necessarily cars) in a dataset affect detection performance. In the first setup, we use the original datasets with their full taxonomy (i.e., all categories) to train object detection models. In the second setup, we use datasets containing only the images with instances of the car and other small object categories. In the last setup, we use datasets containing only the images with car instances. To facilitate model training in the first setup, we created distinct training, validation, and test splits based on all the images in the original datasets using ratios of 5/8, 1/8, and 2/8, respectively. In the second and third setups, these initial data splits were reduced to subsets containing only those images with relevant object instances. It is important to note that, at training time, we utilized each dataset's training and validation sets. However, at test time, we evaluated all trained models on the car-only test sets to obtain comparable performance scores.
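As a sanity check, the annotation-quality scores reported above follow directly from the TP/FP/FN counts (the counts and resulting scores are from the text; the helper name is ours):

```python
def prf(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 from raw match counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = prf(tp=5496, fp=7, fn=168)
# rounds to 0.999, 0.970, and 0.984, matching the reported scores
```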
Table 3 presents the number of images and annotations across data splits and training setups (i.e., original, car-other, and car) for the different datasets. We conducted experiments using a state-of-the-art framework, Slicing Aided Hyper Inference (SAHI) [40]. SAHI was developed particularly for small object detection and provides a generalized slicing-aided inference and fine-tuning pipeline for detecting small objects. In SAHI, various object detectors were examined, such as Fully Convolutional One-Stage Object Detection (FCOS) [41], VarifocalNet (VFNET) [42], and Task-aligned One-stage Object Detection (TOOD) [31]. In our study, we adopted the best-performing inference setup reported in SAHI, which is the Slicing Aided Fine-tuning, Full-Inference, and Patch Overlap (SAHI+FI+PO) setting with the TOOD detector from the MMDetection library [43]. Additionally, we performed experiments with a more recent object detector called DINO [32] with the Swin-L option from the MMDetection library, following the SAHI+FI+PO inference setting. We trained a total of 22 models using the TOOD detector with a batch size of 16 for 24 epochs with the SGD optimizer. For the original setup, we started training with a learning rate of 0.01, whereas for the other setups we started with a learning rate of 0.005. In all training setups, the learning rate was configured to change at epochs 9, 16, and 22 with a learning-rate decay equal to 0.1. Similarly, we trained a total of 22
models using the DINO Swin-L detector with a batch size of 2 for 36 epochs with the AdamW optimizer and an initial learning rate of 0.0001, which was configured to change at epochs 27 and 33 with a learning-rate decay equal to 0.1. We ran all of our experiments on an NVIDIA A100 80GB GPU.

VME Benchmark. As we introduce our novel dataset for the first time, we perform experiments to provide baseline results. For this purpose, we train and test models with the original VME categories, utilizing both the TOOD and DINO Swin-L detectors. Table 4 presents the class-specific and overall results obtained on the original VME test set. TOOD achieved an overall mAP50 score of 58.5%, whereas DINO Swin-L achieved 62.7%. Notably, DINO Swin-L outperforms TOOD by 7.2% in relative terms, with relative improvements of 6.2%, 5.9%, and 10.2% in the mAP50 scores of the car, bus, and truck categories, respectively. These baseline results highlight the challenging nature of the vehicle detection task and verify our dataset's reliability for it. Given these results, we believe our novel dataset focused on Middle Eastern cities will play a key role in advancing vehicle detection in similar regions.

Figure 7 illustrates examples of detection results from the baseline model applied to images from the Middle East sampled from the VME dataset. FP and FN are highlighted with yellow and magenta circles, respectively. To prevent clutter, detections for each object category are visualized separately. The results demonstrate the model's high detection accuracy, with occasional FP detections and rare FN occurrences, reflecting strong recall performance. These findings underscore the model's robustness while identifying opportunities for reducing FP rates.

CDSI Benchmark. This section provides a comprehensive benchmark across various datasets and setups, emphasizing the enhanced value introduced by the CDSI dataset.
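The step learning-rate schedules used in the training details above (decay by a factor of 0.1 at fixed epochs) can be expressed as a small helper. This is a sketch of the schedule only, not the MMDetection configuration itself; whether the decay applies starting at the milestone epoch is our assumption.

```python
def step_lr(base_lr: float, epoch: int, milestones=(9, 16, 22), gamma: float = 0.1) -> float:
    """Learning rate under a step schedule: multiplied by `gamma` at each milestone."""
    decays = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** decays

# TOOD original setup: base lr 0.01, decayed at epochs 9, 16, 22;
# DINO Swin-L: base lr 0.0001, decayed at epochs 27 and 33.
```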
Additional analyses, including error evaluation and data visualization, further illustrate the strengths and limitations of the CDSI benchmark.

Dataset       # cat.  Training (original / car-other / car)                 Validation (original / car-other / car)              Test (car)
xView         60      524/380,923    468/170,280    432/135,398             110/94,786     101/46,178     91/38,146             167/36,640
DOTA-v2.0     18      1,513/212,720  796/123,409    496/100,250             317/55,907     189/37,907     113/33,009            193/41,901
VEDAI         10      772/2,276      655/1,650      451/900                 162/477        137/367        95/178                183/344
DIOR          20      14,547/119,216 3,302/34,935   1,958/15,461            3,050/23,002   693/7,034      411/2,902             790/5,601
FAIR1M-2.0    37      14,756/311,558 6,180/226,757  5,474/202,187           1,732/81,613   1,031/63,513   870/57,013            2,969/125,288
CDSI* (our)   2       —              11,401/557,031 8,811/454,196           —              2,151/154,999  1,580/131,248         4,302/209,774
VME (our)     3       2,505/70,270   2,495/68,721   2,449/63,038            525/13,733     521/13,364     509/12,051            988/26,453
CDSI (our)    2       —              13,896/625,752 11,260/517,234          —              2,672/168,363  2,089/143,299         5,290/236,227

Table 3. Statistics of the training, validation, and test splits (# imgs / # ann.) in each experimental setup across datasets.

Table 5 summarizes the results achieved by both detectors, TOOD and DINO Swin-L, on CDSI and its constituents. Each row corresponds to a model trained on a particular dataset with a specific setup, i.e., all categories, car-other, or car. We evaluate each trained model on its own test set to quantify its in-domain performance, as well as on the VME and CDSI test sets to assess its generalization capabilities. As highlighted before, we use car-only test sets in all cases for comparable results, which we discuss next. First, we observe that all the models trained on individual datasets exhibit poor performance on the VME dataset. Furthermore, the car detection performance does not improve even after combining all the existing datasets together (i.e., CDSI*). In essence, the models trained on existing datasets cannot effectively detect cars in images from the Middle East. In Fig. 8, the predictions of the VME car setup model are compared with the predictions of the models trained on the xView and DOTA-v2.0 car setups on example images from the Middle East sampled from the VME dataset. The comparison shows that the models trained on the xView and DOTA-v2.0 car setups sometimes struggle to detect cars properly, even in easy scenarios like cars on paved roads (top row). Turning to the CDSI dataset, the table presents the evaluation on the CDSI test set of the models trained on each dataset individually, as well as of the model trained on CDSI. The results highlight the significance of training a model on images from diverse sources, particularly in the context of detecting cars in satellite imagery.
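The car-only comparisons above rest on matching detections to ground-truth boxes by IoU. A simplified sketch follows (greedy matching at IoU ≥ 0.5; the actual benchmark uses full COCO-style mAP with the SAHI+FI+PO setting, and all names here are ours):

```python
def iou(a, b):
    """IoU of two HBBs given as (x_min, y_min, width, height)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = min(ax2, bx2) - max(a[0], b[0])
    ih = min(ay2, by2) - max(a[1], b[1])
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def match_detections(dets, gts, thr=0.5):
    """Greedy matching: each detection, by descending score, claims the best
    still-unmatched ground-truth box with IoU >= thr."""
    matched, tp, fp = set(), 0, 0
    for d in sorted(dets, key=lambda x: -x["score"]):
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i not in matched and iou(d["bbox"], g) >= best_iou:
                best, best_iou = i, iou(d["bbox"], g)
        if best is None:
            fp += 1
        else:
            matched.add(best)
            tp += 1
    fn = len(gts) - len(matched)
    return tp, fp, fn
```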
Additionally, the findings underscore the impact of incorporating the VME dataset into the model trained on the car setup: excluding VME (CDSI*) leads to a decrease in mAP50 of 6% and 4.3% for TOOD and DINO Swin-L, respectively, when predicting on the CDSI test set.

To gain a deeper understanding of the significance of combining datasets (CDSI), we employed the Prithvi Foundation Model, a collaboration between IBM and NASA [44,45], which was pretrained on large-scale remote sensing data, including Harmonised Landsat Sentinel-2 (HLS). We utilized the IBM-NASA-Geospatial pretrained model with t-SNE (t-Distributed Stochastic Neighbor Embedding) [46], an unsupervised non-linear technique for visualizing feature embeddings, to explore how satellite images are represented in low-dimensional space based on their high-dimensional data. The t-SNE visualization helps in understanding the similarity between points, in this case, different satellite images from the various datasets. The results, shown in Fig. 9, illustrate that the features of the FAIR1M-2.0 dataset are distinctly separate from the others. Additionally, xView shares some features with DOTA-v2.0 and DIOR, while VME shares certain features with DIOR and VEDAI. This outcome underscores the value of training a model on a combined dataset for car detection (CDSI) and its implications for the field.

Setup     Detector      Car           Bus           Truck         All
                        mAP   mAP50   mAP   mAP50   mAP   mAP50   mAP   mAP50
all cat.  TOOD          42    80.8    30.3  48.9    27.9  45.9    33.4  58.5
all cat.  DINO Swin-L   45.1  85.8    32.8  51.8    29.8  50.6    35.9  62.7

Table 4. VME baseline results obtained by training and testing the object detection models on the original VME data splits with all object categories, as presented in Table 1. All mAP results are presented in percentage (%).

Fig. 7 Detections on VME images employing the VME baseline model trained on all categories. Yellow and magenta circles indicate examples of false positives and false negatives, respectively.

To delve deeper into the details of Table 5, we analyze the performance across the various setups for the individually trained datasets, VME, and CDSI, and assess how their predictions performed on their respective car-only test sets, the VME car-only test set, and the CDSI car-only test set.

Dataset       Setup      TOOD (mAP / mAP50)                       DINO Swin-L (mAP / mAP50)
                         Own test    VME test    CDSI test        Own test    VME test    CDSI test
xView         all cat.   19.8/48.9   13.9/32.5   24.6/52.8        21.8/53.7   23.9/56.4   27.4/61.4
              car-other  19.6/49.3   16.3/39.6   21.7/51.5        19.8/50.1   22.4/53.4   27.2/60.5
              car        20.2/49.8   15.8/38.2   24.5/55.2        20.3/51.4   23.9/56.4   27.4/61.4
DOTA-v2.0     all cat.   15.9/32.0   18.4/42.6   27.4/58.3        16.6/33.0   22.9/50.8   28.3/60.1
              car-other  18.1/35.8   17.6/44.4   26.6/58.2        17.7/36.0   17.9/39.5   28.5/60.6
              car        18.2/36.3   21.2/53.3   27.3/59.3        19.0/37.1   23.9/55.3   28.2/62.2
VEDAI         all cat.   63.5/89.7   2.3/4.8     14.5/37.6        55.1/89.1   1.7/4.2     15.0/40.1
              car-other  61.7/92.0   9.0/4.0     13.1/33.9        56.8/91.0   1.4/3.1     16.9/41.9
              car        47.1/81.7   2.4/5.1     15.7/40.2        52.3/87.7   1.9/4.7     15.4/40.3
DIOR          all cat.   29.2/59.4   12.1/32.5   20.2/47.9        28.9/58.1   17.2/41.7   23.8/53.3
              car-other  36.0/71.5   13.6/35.7   23.7/55.1        34.4/69.3   17.3/40.7   24.8/58.4
              car        30.9/65.9   12.0/33.5   20.9/51.9        31.5/65.8   20.2/47.7   26.7/60.1
FAIR1M-2.0    all cat.   48.1/83.1   3.6/6.4     28.6/50.9        50.8/85.2   4.2/7.3     30.3/52.7
              car-other  52.0/90.3   5.5/10.9    32.5/59.3        52.0/90.7   5.5/9.7     32.2/59.0
              car        51.7/89.7   7.3/15.8    33.1/61.1        52.2/90.3   8.2/16.7    33.0/61.2
CDSI* (our)   car-other  39.8/72.7   20.7/47.2   37.4/69.1        39.8/72.9   24.4/51.0   37.9/69.8
              car        39.9/72.5   22.0/50.0   37.9/69.6        39.9/72.8   29.3/62.3   38.7/71.3
VME (our)     all cat.   39.8/76.3   39.8/76.3   25.5/53.0        44.5/84.0   44.5/84.0   29.6/60.7
              car-other  41.5/80.8   41.5/80.8   25.9/55.1        45.6/86.2   45.6/86.2   28.8/59.8
              car        42.2/81.2   42.2/81.2   25.9/54.5        45.8/86.5   45.8/86.5   27.9/58.8
CDSI (our)    car-other  40.6/73.8   43.3/82.3   40.6/73.8        40.6/74.5   46.1/86.8   40.6/74.5
              car        40.5/73.8   43.0/81.9   40.5/73.8        40.7/74.4   45.7/86.4   40.7/74.4

Table 5. Experimental results achieved by the TOOD and DINO Swin-L detectors trained on various datasets under different setups. The evaluation results are obtained on each dataset's own car-only test set, the VME car-only test set, and the CDSI car-only test set with the SAHI+FI+PO inference setting. All mAP results are presented in percentage (%).

Fig. 8 Comparison of detections on VME images employing the model trained on the VME car setup versus detections of the models trained on the xView and DOTA-v2.0 car setups.

Overall, the car setup performed better with the TOOD detector in most cases, except for VEDAI and DIOR, which produced better results in the car-other setup in terms of mAP50 (%). When focusing on TOOD models trained on other datasets in the car setup and evaluated on the VME test set, the results reveal poor performance. Notably, low mAP50 scores were observed for the models trained on VEDAI (5.1%) and FAIR1M-2.0 (15.8%); VEDAI's limited number of images and annotations likely explains its struggles with car detection in images with varied resolutions and higher car densities. Despite FAIR1M-2.0 being the largest dataset in terms of car-related objects and images, Fig. 9 indicates that its image features differ significantly from those of the VME dataset. A similar pattern is seen in the all categories and car-other setups for all models. On the other hand, DINO Swin-L shows slight improvements across all trained models, mirroring the pattern observed with TOOD. Notably, the model trained on CDSI in the car-other setup achieved the highest mAP50 score (86.8%) on the VME test set.

To investigate the root causes of errors, we performed an analysis of the detection results [47] from the DINO Swin-L models trained on VME and CDSI using the car-other setup. Figures 10 and 11 show a breakdown of errors for the car class for VME and CDSI, respectively.
The error analysis provides various insights to identify areas for improvement, covering: 1) an IoU threshold of 0.75, 2) an IoU threshold of 0.50, 3) localization error removal, 4) false positives within supercategories, 5) category confusion, 6) background false positives, and 7) false negatives, represented as C75, C50, Loc, Sim, Oth, BG, and FN, respectively. Note that the area under each precision-recall curve is shown in brackets in the legend. In the case of VME (Fig. 10), overall AP at IoU=0.75 is 0.432 (C75), and simply lowering the threshold to IoU=0.50 increases the AP to 0.861 (C50), whereas perfect localization could increase AP to 0.898 (Loc). We observe some error due to confusion between the car and other categories; removing such class confusions would only raise AP slightly to 0.909 (Oth). However, we see more room for improvement in eliminating background false positives (i.e., confusions with other small background objects), which boosts the AP to 0.99 (BG). Surprisingly, in the case of VME, the model does not suffer much from false negatives (i.e., missed detections). On the other hand, for the model trained on CDSI (Fig. 11), we see similar trends in general regarding the errors due to category confusions and background false positives. However, resolving such issues can boost AP to a maximum of 0.851 (BG), which means the rest of the errors are missed detections. The missed detections in the model trained on CDSI are due to the
diversity in object instances and variations in image characteristics collected from different regions. In summary, both plots illustrate that the errors are dominated by imperfect localization and background confusions.

Usage Notes
The VME dataset and the script for creating the CDSI dataset are available at Zenodo39. VME images are available in resolutions ranging from 30 to 50 cm per pixel. However, the climate conditions in the Middle East, including haze and airborne dust, can affect the clarity of these images. As a result, some images may have a blurry appearance or exhibit reflections.

Fig. 9 t-SNE visualization of the proposed CDSI dataset.

Code availability
The data preprocessing script for constructing the CDSI dataset, which is written in Python, is available on Zenodo39 and in a GitHub repository (https://github.com/nalemadi/VME_CDSI_dataset_benchmark) under the CDSI_construction_scripts folder. The file README.md provides detailed instructions for building the CDSI dataset, which includes downloading the datasets, converting each to MS-COCO format, and explaining the combination mechanism. Each subfolder is named after its corresponding dataset and contains a conversion script to MS-COCO format. All the required Python packages are listed in the requirements.txt file located within the CDSI_construction_scripts folder.

Received: 9 October 2024; Accepted: 30 January 2025; Published: xx xx xxxx

References
1. Nguyen, T. T. et al. Monitoring agriculture areas with satellite images and deep learning. Applied Soft Computing 95, 106565 (2020). 2. Wang, Y., Cai, G., Yang, L., Zhang, N. & Du, M. Monitoring of urban ecological environment including air quality using satellite imagery. PLoS One 17, e0266759 (2022). 3. Albert, A., Kaur, J. & Gonzalez, M. C.
Using convolutional networks and satellite imagery to identify patterns in urban environments at a large scale. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1357–1366 (2017). 4. Huang, X. et al. Urban building classification (UBC): a dataset for individual building detection and classification from satellite imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1413–1421 (2022). 5. Higuchi, A. Toward more integrated utilizations of geostationary satellite data for disaster management and risk mitigation. Remote Sensing 13, 1553 (2021). 6. Gui, S., Song, S., Qin, R. & Tang, Y. Remote sensing object detection in the deep learning era: a review. Remote Sensing 16, 327 (2024).

Fig. 10 Error analysis for the car category of the DINO Swin-L detector trained on VME using the car-other setup.
Fig. 11 Error analysis for the car category of the DINO Swin-L detector trained on CDSI using the car-other setup.

7. Drouyer, S. & de Franchis, C. Highway traffic monitoring on medium resolution satellite images. In IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, 1228–1231 (IEEE, 2019). 8. Chen, Y., Qin, R., Zhang, G. & Albanwan, H. Spatial temporal analysis of traffic patterns during the COVID-19 epidemic by vehicle detection using Planet remote-sensing satellite images. Remote Sensing 13, 208 (2021). 9. Golej, P., Horak, J., Kukuliac, P. &
Orlikova, L. Vehicle detection using panchromatic high-resolution satellite images as a support for urban planning: case study of Prague's centre. GeoScape 16 (2022). 10. Rufener, M.-C., Ofli, F., Fatehkia, M. & Weber, I. Estimation of internal displacement in Ukraine from satellite-based car detections. Sci. Reports 14, 31638 (2024). 11. Liu, H.-I. et al. A denoising FPN with transformer R-CNN for tiny object detection. IEEE Transactions on Geoscience and Remote Sensing (2024). 12. Verma, T. et al. SOAR: advancements in small body object detection for aerial imagery using state space models and programmable gradients. Preprint at https://doi.org/10.48550/arXiv.2405.01699 (2024). 13. Zhu, J. et al. Transformer based remote sensing object detection with enhanced multispectral feature extraction. IEEE Geoscience and Remote Sensing Letters (2023). 14. Tong, K., Wu, Y. & Zhou, F. Recent advances in small object detection based on deep learning: a review. Image and Vision Computing 97, 103910 (2020). 15. Cheng, G. et al. Towards large-scale small object detection: survey and benchmarks. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023). 16. Gao, P., Tian, T., Li, L., Ma, J. & Tian, J. DE-CycleGAN: an object enhancement network for weak vehicle detection in satellite images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, 3403–3414 (2021). 17. Du, Q., Celik, T., Wang, Q. & Li, H.-C. Fully convolutional lightweight pyramid network for vehicle detection in aerial images. IEEE Geoscience and Remote Sensing Letters (2021). 18. Li, X. et al. Vehicle detection in very-high-resolution remote sensing images based on an anchor-free detection model with a more precise foveal area. ISPRS International Journal of Geo-Information 10, 549 (2021). 19. Shi, F., Zhang, T. & Zhang, T. Orientation-aware vehicle detection in aerial images via an anchor-free object detection approach.
IEEE Transactions on Geoscience and Remote Sensing 59, 5221–5233 (2020). 20. Ding, J. et al. Object detection in aerial images: a large-scale benchmark and challenges. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 7778–7796, https://doi.org/10.1109/TPAMI.2021.3117983 (2021). 21. Razakarivony, S. & Jurie, F. Vehicle detection in aerial imagery: a small target detection benchmark. Journal of Visual Communication and Image Representation 34, 187–203, https://doi.org/10.1016/j.jvcir.2015.11.002 (2016). 22. Lam, D. et al. xView: objects in context in overhead imagery. Preprint at https://doi.org/10.48550/arXiv.1802.07856 (2018). 23. Christie, G., Fendley, N., Wilson, J. & Mukherjee, R. Functional map of the world. In CVPR (2018). 24. Lin, H.-Y., Tu, K.-C. & Li, C.-Y. VAID: an aerial image dataset for vehicle detection and classification. IEEE Access 8, 212209–212219 (2020). 25. Wang, J., Yang, W., Guo, H., Zhang, R. & Xia, G.-S. Tiny object detection in aerial images. In 2020 25th International Conference on Pattern Recognition (ICPR), 3791–3798 (IEEE, 2021). 26. Minetto, R., Segundo, M. P., Rotich, G. & Sarkar, S. Measuring human and economic activity from satellite imagery to support city-scale decision-making during the COVID-19 pandemic. IEEE Transactions on Big Data 7, 56–68 (2020). 27. ZIGURAT Institute of Technology. 7 Impressive Smart City Projects in the Middle East. https://www.e-zigurat.com/en/blog/smart-city-projects-middle-east/ Accessed on 2024-09-17 (2023). 28. George, R. The Rise of Gulf Smart Cities. Wilson Center.
https://www.wilsoncenter.org/article/rise-gulf-smart-cities Accessed on 2024-09-18 (2024). 29. Li, K., Wan, G., Cheng, G., Meng, L. & Han, J. Object detection in optical remote sensing images: a survey and a new benchmark. ISPRS Journal of Photogrammetry and Remote Sensing 159, 296–307, https://doi.org/10.1016/j.isprsjprs.2019.11.023 (2020). 30. Sun, X. et al. FAIR1M: a benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery. ISPRS Journal of Photogrammetry and Remote Sensing 184, 116–130, https://doi.org/10.1016/j.isprsjprs.2021.12.004 (2022). 31. Feng, C., Zhong, Y., Gao, Y., Scott, M. R. & Huang, W. TOOD: task-aligned one-stage object detection. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 3490–3499 (IEEE Computer Society, 2021). 32. Zhang, H. et al. DINO: DETR with improved denoising anchor boxes for end-to-end object detection. In The Eleventh International Conference on Learning Representations (ICLR) (2023). 33. Mundhenk, T. N., Konjevod, G., Sakla, W. A. & Boakye, K. A large contextual dataset for classification, detection and counting of cars with deep learning. In European Conference on Computer Vision, 785–800 (Springer, 2016). 34. Zambanini, S., Loghin, A.-M., Pfeifer, N., Soley, E. M. & Sablatnig, R. Detection of parking cars in stereo satellite images. Remote Sensing 12, 2170 (2020). 35. Ammar, A., Koubaa, A., Ahmed, M., Saad, A. & Benjdira, B. Vehicle detection from aerial images using deep learning: a comparative study. Electronics 10, 820 (2021). 36. Zhu, P. et al. Detection and tracking meet drones challenge. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 7380–7399 (2021). 37. Drouyer, S. VehSat: a large-scale dataset for vehicle detection in satellite images. In IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium, 268–271, https://doi.org/10.1109/IGARSS39084.2020.9323289 ISSN: 2153-7003 (2020). 38.
Azimi, S. M., Bahmanyar, R., Henry, C. & Kurz, F. EAGLE: large-scale vehicle detection dataset in real-world scenarios using aerial imagery. In 2020 25th International Conference on Pattern Recognition (ICPR), 6920–6927 (IEEE, 2021). 39. Al-Emadi, N., Weber, I., Yang, Y. & Ofli, F. VME: A Satellite Imagery Dataset and Benchmark for Detecting Vehicles in the Middle East and Beyond. https://doi.org/10.5281/zenodo.14185684 (2024). 40. Akyon, F. C., Altinuc, S. O. & Temizel, A. Slicing aided hyper inference and fine-tuning for small object detection. In 2022 IEEE International Conference on Image Processing (ICIP), 966–970 (IEEE, 2022). 41. Tian, Z., Shen, C., Chen, H. & He, T. FCOS: fully convolutional one-stage object detection. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 9626–9635, https://doi.org/10.1109/ICCV.2019.00972 (IEEE, 2019). 42. Zhang, H., Wang, Y., Dayoub, F. & Sunderhauf, N. VarifocalNet: an IoU-aware dense object detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8514–8523 (2021). 43. Chen, K. et al. MMDetection: open MMLab detection toolbox and benchmark. Preprint at https://doi.org/10.48550/arXiv.1906.07155 (2019). 44. Jakubik, J. et al. Foundation models for generalist geospatial artificial intelligence. Preprint at https://doi.org/10.48550/arXiv.2310.18660 (2023). 45. Jakubik, J. et al. HLS Foundation, Prithvi-100M, https://doi.org/10.57967/hf/0952 (2023). 46. Van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. Journal of Machine Learning Research 9 (2008). 47. Hoiem, D., Chodpathumwan, Y. & Dai, Q.
Diagnosing error in object detectors. In European Conference on Computer Vision, 340–353 (Springer, 2012).

Acknowledgements
This publication was made possible by GSRA grant, I.D. # GSRA7-1-0421-20022, from the Qatar National Research Fund (a member of Qatar Foundation). We sincerely thank our colleague Masoomali Fatehkia (Qatar Computing Research Institute, HBKU) for assisting with image collection. Ingmar Weber is supported by funding from the Alexander von Humboldt Foundation and its founder, the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung).

Author contributions
N.A. conceived the dataset collection and preparation, dataset validation, and pre-processing, conducted the experiments, and wrote the manuscript. I.W. and F.O. facilitated access to Middle East satellite imagery. F.O. provided analysis techniques. I.W., Y.Y., and F.O. supervised and guided the study. All authors reviewed the manuscript.

Competing interests
The authors declare no competing interests.

Additional information
Correspondence and requests for materials should be addressed to N.A.-E. Reprints and permissions information is available at www.nature.com/reprints.

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material.
You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. © The Author(s) 2025
arXiv:2505.22356v1 [cs.LG] 28 May 2025

Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings

Angéline Pouget¹, Mohammad Yaghini², Stephan Rabanser², Nicolas Papernot²
¹ETH Zurich, work performed while interning at the University of Toronto and the Vector Institute. ²University of Toronto and Vector Institute.

Abstract
Deploying machine learning models in safety-critical domains poses a key challenge: ensuring reliable model performance on downstream user data without access to ground truth labels for direct validation. We propose the suitability filter, a novel framework designed to detect performance deterioration by utilizing suitability signals (model output features that are sensitive to covariate shifts and indicative of potential prediction errors). The suitability filter evaluates whether classifier accuracy on unlabeled user data shows significant degradation compared to the accuracy measured on the labeled test dataset. Specifically, it ensures that this degradation does not exceed a pre-specified margin, which represents the maximum acceptable drop in accuracy. To achieve reliable performance evaluation, we aggregate suitability signals for both test and user data and compare these empirical distributions using statistical hypothesis testing, thus providing insights into decision uncertainty. Our modular method adapts to various models and domains. Empirical evaluations across different classification tasks demonstrate that the suitability filter reliably detects performance deviations due to covariate shift. This enables proactive mitigation of potential failures in high-stakes applications.

1. Introduction
Machine learning (ML) models often operate in dynamic, uncertain environments. After a model is tested on a holdout set, a satisfactory evaluation result typically leads to production deployment. However, if test and deployment covariate
distributions differ, performance can drop and cause harm. For example, credit risk models trained on limited historical data may fail in new contexts, disproportionately harming underserved communities through unfair denials or higher interest rates (Kozodoi et al., 2022). Ideally, deployed predictions could be compared directly to ground truth for real-time performance monitoring. However, ground truth may be unavailable (e.g., limited expert labeling (Culverhouse et al., 2003)), unobservable (e.g., counterfactual outcomes in healthcare (Tal, 2023)), or only available much later (e.g., recidivism in law enforcement (Travaini et al., 2022)), thereby causing significant monitoring challenges in deployment.

Correspondence to: Angéline Pouget <angeline.pouget@gmail.com>. Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

Figure 1. A model M is suitable for use on D_u if its accuracy does not fall below the accuracy on D_test by more than a predefined margin m. The suitability filter calculates per-sample prediction correctness probabilities for both test and user datasets and compares the two distributions through statistical non-inferiority testing. The dashed vertical lines represent the mean values of the distributions corresponding to the estimated accuracies.

In this work, we tackle the challenge of determining whether classification accuracy on unlabeled user data degrades significantly compared to a labeled holdout dataset, an issue not directly addressed by existing methods. Our
approach combines insights from distribution shift detection, unsupervised accuracy estimation, selective prediction, and dataset inference into a novel performance deterioration detector. Central to our solution is the suitability filter, an auxiliary function fs : X → {SUITABLE, INCONCLUSIVE}. Given an unlabeled user dataset D_u ∼ D_target sampled from the target deployment domain, and a labeled test dataset D_test ∼ D_source sampled from the original training domain, the filter assesses whether classifier accuracy on D_u falls below that on D_test by more than a predefined margin m. Our work proposes both (i) a framework for the suitability filter and (ii) a well-performing default instantiation of the filter using domain-agnostic suitability signals that are broadly applicable across classifiers, independent of the model architecture or the training algorithm.

To arrive at its decision, the suitability filter relies on suitability signals (model output features such as maximum logit/softmax or predictive entropy). These signals are sensitive to covariate shifts and can indicate potential prediction errors. In particular, we design a per-sample prediction correctness estimator leveraging these suitability signals. This allows us to assess consistency in the model's predictive behavior on both D_u and D_test by aggregating sample-level suitability signals. As a result, we are able to detect subtle shifts indicative of changes in model performance. As illustrated in Figure 1, we then compare the means of these distributions (i.e., the estimated accuracies) to arrive at a suitability decision. Our decisions rely on statistical testing to assess whether the estimated difference in means is significant, thus offering a measure of predictive uncertainty.
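The mean comparison against the margin m is a non-inferiority test. A minimal sketch, using a one-sided two-sample z-test under a normal approximation (a simplification; the authors' exact test may differ) on hypothetical per-sample correctness probabilities:

```python
import numpy as np
from math import sqrt
from statistics import NormalDist

def non_inferiority_p(user_pc, test_pc, m):
    """One-sided z-test of H0: mean(user) < mean(test) - m.
    A small p-value rejects H0, i.e., supports a SUITABLE decision."""
    u, t = np.asarray(user_pc, float), np.asarray(test_pc, float)
    diff = u.mean() - (t.mean() - m)  # estimated distance past the margin
    se = sqrt(u.var(ddof=1) / len(u) + t.var(ddof=1) / len(t))
    return 1 - NormalDist().cdf(diff / se)

# Hypothetical per-sample prediction correctness probabilities.
test_pc = [0.60, 0.70, 0.65, 0.62, 0.68]  # from D_test
user_pc = [0.90, 0.80, 0.70, 0.85, 0.75]  # from D_u
p = non_inferiority_p(user_pc, test_pc, m=0.03)
print("SUITABLE" if p <= 0.05 else "INCONCLUSIVE")
```

Note the asymmetry: failing to reject H0 yields INCONCLUSIVE rather than "unsuitable", mirroring the filter's two-valued output.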
To ensure the reliability of suitability decisions, we study the statistical guarantees for the suitability filter. Specifically, we identify the theoretical conditions that ensure a bounded false positive rate for end-to-end suitability decisions. We also consider the practical scenarios where such a condition may not hold and provide a relaxation of this theoretical condition. This allows model providers to ensure the reliability of suitability decisions in spite of theoretical limitations. Building on these theoretical insights, we empirically show that the filter consistently detects performance deviations arising from various covariate shifts, including temporal, geographical, and subpopulation shifts. Specifically, we assess the effectiveness of our approach using real-world datasets from the WILDS benchmark (Koh et al., 2021). These include FMoW-WILDS for land use classification (Christie et al., 2018), CivilComments-WILDS for text toxicity classification (Borkan et al., 2019), and RxRx1-WILDS for genetic perturbation classification (Taylor et al., 2019). Furthermore, we explore how accuracy differences between user and test datasets impact the filter's sensitivity, analyze calibration techniques to control false positives, and conduct ablations on suitability signals, sample sizes, margins, significance levels, and classifier options. In summary, our key contributions are the following:

1. We introduce suitability filters as a principled way of detecting model performance deterioration during deployment. Our filters detect covariate shift via an unlabeled representative dataset provided by the model user.
2. We propose a statistical testing framework to build suitability filters that aggregate various signals and output
a suitability decision. Leveraging formal hypothesis testing, our approach enables control of the false positive rate via a user-interpretable significance level.
3. We theoretically analyze the end-to-end false positive rates of our suitability filters and provide sufficient conditions for bounded false positive rates. We then consider a practical relaxation of this condition, and suggest an adjustment to the prediction margin that maintains our end-to-end bounded false error guarantees.
4. We demonstrate the practical applicability of suitability filters across 29k experiments on realistic data shift scenarios from the WILDS benchmark. On FMoW-WILDS, for example, we are able to detect performance deterioration of more than 3% with 100% accuracy, as can be seen in Figure 4.

2. Related Work
Our work builds on insights from distribution shifts, accuracy estimation, selective prediction, and dataset inference.

Distribution Shift Detection. Distribution shift detection methods aim to identify changes between training and deployment data distributions (Quiñonero-Candela et al., 2022), generally requiring access to ground truth labels. Early research emphasizes detecting shifts in high-dimensional data using approaches such as statistical testing on model confidence distributions (Rabanser et al., 2019) or leveraging model ensembles (Ovadia et al., 2019; Arpit et al., 2022). Recent efforts increasingly prioritize interpreting shifts (Kulinski & Inouye, 2023; Koh et al., 2021; Gulrajani & Lopez-Paz, 2021) and mitigating their impact on model performance (Cha et al., 2022; Zhou et al., 2021; Wiles et al., 2021; Zhou et al., 2022; Wang et al., 2022). Some works argue that while small shifts are unavoidable, the focus should be on harmful shifts that lead to significant performance degradation (Podkopaev & Ramdas, 2021; Ginsberg et al., 2022). These approaches aim to detect covariate shifts and subsequently assess their impact on performance.
To do so, they rely on ground truth labels or model ensembles to evaluate harmfulness. This assumption makes these techniques unsuitable for our setting, where we aim to detect performance degradation without label access.

Unsupervised Accuracy Estimation. Unsupervised accuracy estimation, also known as AutoEval (Automatic Model Evaluation (Deng & Zheng, 2021)), aims to estimate a model's classification accuracy (a continuous metric) on unseen data without relying on ground truth labels. Early approaches in this field primarily centered on model confidence, calculated as the maximum value of the softmax output applied to the classifier's logits, and related metrics, which we demonstrate to be valuable suitability signals (Hendrycks & Gimpel, 2016; Garg et al., 2022; Kivimäki et al., 2024; Bialek et al., 2024; Guillory et al., 2021; Lu et al., 2023; Wang et al., 2023; Hendrycks & Dietterich, 2018; Deng et al., 2023). Our work differs from these approaches in three key ways: we focus on reliably detecting performance deterioration (a binary decision) in relation to a labeled test dataset using statistical testing.

Selective Classification. Selective classification techniques aim to detect and reject inputs a model would likely misclassify, while maintaining high coverage and accepting as many samples as possible (Chow, 1957; El-Yaniv et
al., 2010). In contrast to selective classification, we do not reject or accept individual input data samples. Instead, we leverage sample-level signals and aggregate them to provide a statistically grounded suitability decision for the entire dataset. Initial selective classification methods for neural networks base the rejection mechanism on the model prediction confidence (Hendrycks & Gimpel, 2016; Geifman & El-Yaniv, 2017), a signal that we also leverage in our work.

Dataset Inference. Our approach is inspired by dataset inference (Maini et al., 2021), a technique used to determine whether a model was trained on a particular dataset. Similarly to dataset inference, we compare suitability distributions between two different data samples through statistical hypothesis testing. However, in contrast to dataset inference, we focus only on evaluation and aim to detect possible performance deterioration, essentially reversing the null and alternative hypotheses. Moreover, dataset inference relies on representative data from both sample domains, the original source and the deployed target domain, to train a confidence regressor. Instead, we assume that label access is only available in data sampled from the source domain.

3. Problem Formulation
Our suitability filter framework distinguishes between the model provider, who trains and tests the classifier on the source distribution, and the model user, who applies the model to (possibly distributionally shifted) target data.

Model Provider. Let Y = {1, ..., k} denote the label space, representing the set of all possible output labels for a classification problem with k classes. We define our predictor as a model M : X → Y mapping inputs from a covariate space X to classification decisions. A model provider trains such a model M on labeled data sampled from a source distribution D_source over domain X.
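The provider-side setup can be illustrated end to end with toy data (a hypothetical two-class problem and a nearest-centroid stand-in for M, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy source distribution over X = R^2 with label space Y = {0, 1}.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Disjoint partition into D_train (used to fit M) and D_test (held out).
idx = rng.permutation(len(X))
train, test = idx[:150], idx[150:]

# M : X -> Y, a minimal nearest-centroid classifier as a stand-in.
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
def M(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# The provider's reference accuracy, Acc(M, D_test), measured on unseen data.
acc_test = np.mean([M(x) == label for x, label in zip(X[test], y[test])])
print(f"Acc(M, D_test) = {acc_test:.2f}")
```

The disjointness of the two index sets is exactly the condition D_train ∩ D_test = ∅ described next.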
Specifically, the provider usually partitions the data into two disjoint subsets: a training dataset D_train ∼ D_source, which is used to optimize the parameters of M, and a test dataset D_test ∼ D_source, which is reserved to evaluate the performance of M on unseen data. To ensure an unbiased evaluation of model performance, these two datasets are disjoint, i.e., D_train ∩ D_test = ∅.

Model User. A model user interested in deploying model M on their data provides an unlabeled, representative data sample D_u ∼ D_target. In most scenarios of practical interest, D_target differs from D_source, i.e., D_target ≠ D_source. Note that if D_u were to be drawn from the same distribution as D_test, the model's performance on both datasets would be identical in expectation, eliminating the need for the suitability filter. The user might be a third party looking to use the model, or the model provider and user could be the same party.

Suitability Filter. The suitability filter assesses whether the performance of classifier M on unlabeled user data D_u degrades relative to the known performance on the labeled test dataset D_test. In our work, we focus on model accuracy as the performance metric. We define suitability as follows:

Definition 3.1 (Suitability). Given a classifier M : X → Y, a test data sample D_test ∼ D_source, a user data sample D_u ∼ D_target, and a performance deviation margin m ∈ R, we define M as suitable for use on D_u if and only
if the estimated accuracy of M on D_u deviates at most by m from the accuracy on D_test. Formally:

\frac{1}{|D_u|} \sum_{x \in D_u} \mathbb{I}\{M(x) = O(x)\} \;\geq\; \frac{1}{|D_{\mathrm{test}}|} \sum_{(x,y) \in D_{\mathrm{test}}} \mathbb{I}\{M(x) = y\} - m. \qquad (1)

Here, I{·} is the indicator function and O(x) represents an oracle that provides the true label y for any input x (the ground truth label is unavailable for samples x ∈ D_u).

Definition 3.2 (Suitability Filter). Given a model M, a test dataset D_test, a user dataset D_u, a performance metric g, and a performance margin m as in Definition 3.1, we define a suitability filter to be a function fs : X → {SUITABLE, INCONCLUSIVE} that outputs SUITABLE if and only if M is suitable for use on D_u according to Definition 3.1 with high probability, and INCONCLUSIVE otherwise.

4. Method
The suitability filter is introduced as a statistical hypothesis test designed to assess if the performance of a model on user data D_u deviates from its performance on a test dataset D_test by more than a specified margin m. By aggregating

Figure 2. Schematic overview of the suitability filter. The suitability filter assesses whether model performance on a user sample D_u deviates from its performance on the test dataset D_test. This is achieved by combining different suitability signals {s_1, . . .
, s_S} to estimate per-sample prediction correctness and comparing the distribution of these estimates between the two datasets using a statistical test.

a diverse set of suitability signals predictive of classifier correctness, the test compares predicted accuracy between D_u and D_test using a non-inferiority test to ensure the mean performance difference does not exceed the performance margin m (Wellek, 2002; Walker & Nowacki, 2011). We present a schematic overview of our approach in Figure 2.

4.1. Suitability Signals
The first step in constructing the suitability filter is to select a set of signals {s_1, ..., s_S} that are predictive of per-sample classifier prediction correctness. These signals are inherently dependent on the model M and capture information about its predictions and confidence levels. As discussed in Section 2, a variety of signals have been proposed in the literature on unsupervised accuracy estimation, selective classification, and uncertainty quantification. Such signals include but are not limited to the maximum logit/softmax scores, the energy of the logits, or the predictive entropy. The exact signals used in this work have been selected to ensure the broad applicability of the suitability filter across diverse settings, as outlined in more detail in our experiments (Section 5) and in Appendix A.4.2 and A.4.3. We note that any signal that can be
https://arxiv.org/abs/2505.22356v1
computed for an individual sample and is predictive of prediction correctness can be incorporated into our framework, allowing for flexible extension based on the specific task, dataset, or model M.

4.2. Per-Sample Prediction Correctness Estimator

To learn a per-sample prediction correctness estimator, we require the model provider to have a separate, labeled hold-out dataset Dsf ∼ Dsource. While the ultimate goal is to assess performance on the unlabeled Du ∼ Dtarget provided by the user, the hold-out dataset Dsf serves as a proxy to train the parameters of the prediction correctness estimator. This dataset is essential because it enables the suitability filter to learn the relationship between the different signals and classifier prediction correctness. Dsf has to be separate from both Dtrain and Dtest to avoid overfitting to these samples.

For each sample x ∈ Dsf, the selected signals {s1, . . . , sS}, which are functions of both the sample and the model M, are evaluated, normalized, and aggregated into a single feature vector s(x; M) = [s1(x; M), s2(x; M), . . . , sS(x; M)] ∈ R^S. The suitability filter framework leverages this feature vector s(x; M) to predict whether the model M correctly classifies the input x. This is achieved by training a prediction correctness classifier C : R^S → {0, 1} that estimates the per-sample probability of prediction correctness pc(x) on the hold-out dataset Dsf. In particular, we want to minimize the binary cross-entropy loss between the true correctness label c = I{M(x) = y} and pc(x) for each (x, y) ∈ Dsf. We instantiate C as a logistic regressor¹ which models the prediction correctness probability pc(x) = σ(w⊤s(x; M) + b), where σ(z) = 1/(1 + e^(−z)) is the sigmoid function. We can then leverage C to estimate per-sample prediction correctness for user data samples x ∈ Du (since calculating pc(x) does not require ground truth label access) as well as for the test data Dtest. Next, we discuss steps to verify and ensure that C generalizes effectively to Du ∼ Dtarget.
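As a concrete illustration, the pipeline of Sections 4.1–4.2 (signal extraction followed by a logistic-regression correctness estimator) can be sketched as below. This is a minimal sketch rather than the authors' released implementation: the four signals computed here are an illustrative subset of those listed in Section 5.2, all function and class names are ours, and plain gradient descent stands in for an off-the-shelf logistic-regression solver.

```python
import numpy as np

def suitability_signals(logits):
    """Per-sample signals derived from classifier logits (rows = samples).
    Illustrative subset: max softmax confidence, predictive entropy,
    top-2 logit difference, and energy (negative log-sum-exp of logits)."""
    z = logits - logits.max(axis=1, keepdims=True)        # stabilized logits
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)                     # softmax probabilities
    conf_max = p.max(axis=1)
    conf_entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    top2 = np.sort(logits, axis=1)[:, -2:]
    logit_diff_top2 = top2[:, 1] - top2[:, 0]
    energy = -(logits.max(axis=1) + np.log(np.exp(z).sum(axis=1)))
    return np.column_stack([conf_max, conf_entropy, logit_diff_top2, energy])

class CorrectnessEstimator:
    """Models p_c(x) = sigmoid(w^T s(x; M) + b), fit on a labeled hold-out
    set (the D_sf stand-in) by minimizing binary cross-entropy."""

    def fit(self, S, correct, lr=0.5, steps=3000):
        self.mu = S.mean(axis=0)                          # normalization stats
        self.sd = S.std(axis=0) + 1e-12
        Z = (S - self.mu) / self.sd
        self.w = np.zeros(Z.shape[1])
        self.b = 0.0
        n = len(correct)
        for _ in range(steps):                            # plain gradient descent
            pc = 1.0 / (1.0 + np.exp(-(Z @ self.w + self.b)))
            g = pc - correct                              # gradient of BCE loss
            self.w -= lr * (Z.T @ g) / n
            self.b -= lr * g.mean()
        return self

    def predict_proba(self, S):
        """Estimated per-sample correctness probabilities; no labels needed."""
        Z = (S - self.mu) / self.sd
        return 1.0 / (1.0 + np.exp(-(Z @ self.w + self.b)))
```

On held-out data, the mean of `predict_proba` then serves as the label-free accuracy estimate that the non-inferiority test of Section 4.3 compares between Dtest and Du.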
¹ Note that while more flexible model classes can be used for the correctness estimator C, we did not find any empirical evidence that they provide a consistent performance improvement over logistic regression (see Appendix A.4.4 for details).

Calibration. To ensure that the mean estimated probability of prediction correctness directly reflects accuracy, pc(x) must be well-calibrated for both samples from Dsource and Dtarget. However, absent specific assumptions about the differences between Dsource and Dtarget, achieving this desired calibration is impossible in practice (David et al., 2010). One reasonable assumption under which such calibration issues can be mitigated is that the potential target distributions consist of subpopulations of the source distribution. In credit scoring, for instance, the target distribution may include subpopulations S, such as minority groups or individuals with limited credit histories, who are underrepresented in the training data. In such scenarios, multicalibration techniques can ensure that C provides accurate predictions for every subpopulation S ∈ C, thereby improving reliability across all possible Dtarget (Hébert-Johnson et al., 2018). Here, C denotes a collection of computationally identifiable subsets of the support of Dsource, and i ∼ S is a sample drawn from Dsource conditioned on membership in S. When no assumptions
about Dsource and Dtarget can be made, achieving reliable calibration is challenging. Calibrating C on Dsf (e.g., using Platt's method (Platt et al., 1999) or temperature scaling (Guo et al., 2017)) ensures that the classification correctness estimator C provides reliable estimates of the probability that model M correctly classifies samples in Dsf ∼ Dsource. While, in theory, this calibration extends to Dtest ∼ Dsource, we generally cannot assume calibration on Du ∼ Dtarget. Our approach to addressing this issue combines ongoing quality assurance checks with appropriate margin adjustments and will be discussed in more detail in Section 4.4.

4.3. Non-Inferiority Testing

Non-inferiority testing is a statistical method used to assess whether the performance of a new treatment, model, or method is not significantly worse than a reference or control by more than a pre-specified margin m (Wellek, 2002; Walker & Nowacki, 2011). Unlike other statistical tests, which typically test for a difference between distributions, this test aims to confirm that the new method is not inferior by more than a margin of m. Consequently, the null hypothesis is that the method is inferior, in contrast to the usual null hypothesis of no difference.

Correctness Distributions. If the per-sample prediction correctness estimator C is well-calibrated, the mean of the estimated prediction correctness probabilities across a dataset approximates the accuracy of the model M on that dataset. Formally, let pc[Dtest] and pc[Du] denote the vectors of estimated prediction correctness probabilities for the test dataset Dtest and the user dataset Du, respectively:

p_c[D_{\mathrm{test}}] := \left( p_c(x_1), \ldots, p_c(x_{|D_{\mathrm{test}}|}) \right) \in [0,1]^{|D_{\mathrm{test}}|} \quad (2)

p_c[D_u] := \left( p_c(x_1), \ldots, p_c(x_{|D_u|}) \right) \in [0,1]^{|D_u|} \quad (3)

Here, pc(xi) represents the estimated probability of prediction correctness for each sample xi.

Hypothesis Setup.
We define the true means of the estimated prediction correctness probabilities for data drawn from the source and target distributions as follows:

\mu_{\mathrm{source}} := \mathbb{E}_{x \sim \mathcal{D}_{\mathrm{source}}}[p_c(x)] \quad (4)

\mu_{\mathrm{target}} := \mathbb{E}_{x \sim \mathcal{D}_{\mathrm{target}}}[p_c(x)] \quad (5)

The primary goal of the non-inferiority test is to compare the true mean predicted correctness between the two distributions and determine whether µtarget is not lower than µsource by more than a pre-specified margin m. This is formally expressed as the following hypothesis testing setup:

H_0 : \mu_{\mathrm{target}} < \mu_{\mathrm{source}} - m \quad (6)

H_1 : \mu_{\mathrm{target}} \ge \mu_{\mathrm{source}} - m \quad (7)

The null hypothesis H0 posits that the estimated performance on the user dataset is worse than on the test dataset by more than the margin m. The alternative hypothesis H1 asserts that the estimated performance on the user dataset is not worse than that on the test dataset by more than the allowed margin m, i.e., it is better, equivalent, or worse by at most m. We conduct the statistical non-inferiority test using a one-sided Welch's t-test (see Appendix A.1.1).

4.4. Suitability Decision

Finally, the decision on the suitability of the model for the user dataset is based on the outcome of this non-inferiority test. If the test indicates non-inferiority, we conclude that the model's performance on Du is acceptable and we output SUITABLE. If the test fails to reject the null hypothesis, the model is either unsuitable for the user dataset or the number of samples provided was insufficient to determine suitability, and hence we return INCONCLUSIVE. To ensure the reliability
of these suitability decisions, we next discuss statistical guarantees and the conditions under which they hold for the end-to-end suitability decision.

Statistical Guarantees. To account for miscalibration errors, we define δ-calibration as follows:

Definition 4.1 (δ-Calibration). Let pc(x) denote the estimated probability of prediction correctness for a sample x with predicted label M(x) and true label y. Assuming that pc(x) has a well-defined probability density function fc(ν) over [0, 1], we say pc(x) is δ-calibrated if

P[M(x) = y \mid p_c(x) = \nu] = \nu + \epsilon(\nu), \quad (8)

for all ν ∈ [0, 1], with calibration error \int_0^1 \epsilon(\nu) f_c(\nu) \, d\nu = \delta for 0 ≤ |δ| ≪ 1.

Under the assumption of testing two independent and normally distributed samples, the non-inferiority test ensures a controlled false positive rate (FPR), bounding the probability of incorrectly concluding non-inferiority.

Theorem 4.2 (Non-Inferiority Test Guarantee). Let µsource and µtarget represent the true mean prediction correctness for the source and target distributions, respectively. Assuming that these samples are independent and normally distributed, a non-inferiority test based on Welch's t-test at significance level α guarantees that the probability of rejecting the null hypothesis H0 : µtarget < µsource − m (i.e., concluding µtarget ≥ µsource − m) when H0 is true is controlled at α:

P(\text{Reject } H_0 \mid H_0 \text{ is true}) \le \alpha, \quad (9)

where m is the non-inferiority margin (Lehmann et al., 1986; Wellek, 2002).

The following results extend this guarantee to the end-to-end suitability filter under δ-calibration for the correctness estimator C with respect to both Dsource and Dtarget. All expectations and probabilities are over samples (x, y) ∼ X × Y unless specified otherwise.
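Concretely, the one-sided Welch test and decision rule of Sections 4.3–4.4 can be sketched as follows. This is a hedged sketch rather than the authors' implementation: function names are ours, and for brevity the one-sided p-value uses the standard normal approximation to the t distribution, which is close at the sample sizes involved here (an exact version would evaluate the t survival function at the Welch degrees of freedom, e.g. via scipy.stats).

```python
import math
from statistics import NormalDist

def welch_t(pc_u, pc_test, m):
    """Welch's t statistic and degrees of freedom for the one-sided
    non-inferiority test H0: mu_target < mu_source - m, computed from
    per-sample correctness estimates on the user and test datasets."""
    n1, n2 = len(pc_u), len(pc_test)
    m1 = sum(pc_u) / n1
    m2 = sum(pc_test) / n2
    v1 = sum((x - m1) ** 2 for x in pc_u) / (n1 - 1)      # sample variances
    v2 = sum((x - m2) ** 2 for x in pc_test) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - (m2 - m)) / math.sqrt(se2)                  # shift mean by margin m
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

def suitability_decision(pc_u, pc_test, m=0.0, alpha=0.05):
    """SUITABLE iff H0 is rejected at level alpha, else INCONCLUSIVE.
    The p-value uses the standard normal survival function as a
    large-sample stand-in for the t distribution with Welch df."""
    t, _ = welch_t(pc_u, pc_test, m)
    p = 1.0 - NormalDist().cdf(t)                         # one-sided p-value
    return ("SUITABLE", p) if p <= alpha else ("INCONCLUSIVE", p)
```

For instance, with a few hundred tightly clustered estimates per side, user-side estimates averaging well above the test-side mean minus m reject H0 and yield SUITABLE, while user-side estimates well below it yield INCONCLUSIVE.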
Lemma 4.3 (Expectation of Correctness). Under δ-calibration as defined in Definition 4.1, the deviation between the true probability of prediction correctness and the estimate produced by the classifier C is given by:

\mathbb{E}[p_c(x)] - P[M(x) = y] = \delta. \quad (10)

Proof in Appendix A.1.2. We use Lemma 4.3 to derive the end-to-end guarantee for the suitability filter.

Corollary 4.4 (Bounded False Positive Rate for Suitability Filter under δ-Calibration). Given a prediction correctness estimator C that is δ-calibrated on both the source and target distributions with δsource and δtarget, respectively, let us define m′ := m + δsource − δtarget and conduct a non-inferiority test with H0 : µtarget < µsource − m′. The probability of incorrectly rejecting H0 (i.e., returning SUITABLE) when the model accuracy on Dtarget is lower than on Dsource by more than a margin m is upper bounded by the significance level α.

Proof in Appendix A.1.3. The following remark details the limits of these guarantees.

Remark 4.5 (Impossibility of Bounded False Positive Rate without δ-Calibration). If the calibration deviations δsource and δtarget are not provided or are not much smaller than 1, it is not possible to choose m′ according to Corollary 4.4. Hence, without δ-calibration, no guarantees on the false positive rate of the suitability filter can be provided.

Practical Considerations. Under perfect calibration, the calibration errors vanish, i.e., δsource = δtarget = 0, eliminating the need for any margin correction. However, achieving

Figure 3. Margin adjustment under accuracy estimation error (m′ = m + ∆test − ∆u). In each panel, the solid gray
line is the perfect-calibration diagonal, and the dashed black/gray lines mark the original margin m and its corrected value m′, respectively. The blue/orange arrows indicate the estimation errors on the test set (∆test) and user data (∆u), respectively. In the left panel, the user data Du is deemed suitable; in the right panel, it is deemed unsuitable.

perfect calibration in practice is rare. In most real-world deployments, accurately determining the calibration errors δsource and especially δtarget can be difficult. Consequently, adjusting the margin as proposed in Corollary 4.4 may be challenging. To address this, we draw inspiration from best practices in quality assurance and propose that the model owner periodically collects a small labeled dataset, ˆDu, from a potential user of the system. Given access to the test dataset Dtest, the model owner can compute both the estimated accuracies (µu, µtest) as approximated by C, as well as the ground truth accuracies (Accu, Acctest). This enables an empirical evaluation of the accuracy estimation errors, ∆u and ∆test, which correspond to δtarget and δsource, respectively:

\Delta = \frac{1}{N} \sum_{i=1}^{N} \left( p_c(x_i) - \mathbb{I}\{M(x_i) = y_i\} \right) \quad (11)

Following the margin adjustment in Corollary 4.4, the updated margin is:

m' = m + \Delta_{\mathrm{test}} - \Delta_u. \quad (12)

The intuition behind this adjustment is that the decisions output by the suitability filter reflect the expected ground truth suitability decisions even in the presence of prediction errors, as can also be seen in Figure 3. Regular recalibration and careful margin tuning ensure that C continues to provide reliable estimates, even in the presence of distribution shifts or evolving deployment conditions.

5. Experimental Evaluation

To evaluate the performance of our proposed suitability filter, we conduct a series of experiments with different datasets,

Table 1.
Evaluating detection performance of the proposed suitability filter on FMoW-WILDS, RxRx1-WILDS, and CivilComments-WILDS for m = 0 with both ID and OOD user data. We report the area under the curve for ROC and PR (capturing the tradeoffs at various significance thresholds), as well as accuracy and the true false positive rate at α = 0.05. We also report 95% confidence intervals based on 3 models M trained on the same Dtrain with different random seeds.

DATASET                 | ACC        | FPR         | ROC         | PR
FMoW-WILDS ID           | 81.8±3.1%  | 0.027±0.033 | 0.969±0.023 | 0.967±0.029
FMoW-WILDS OOD          | 91.9±2.5%  | 0.018±0.017 | 0.965±0.016 | 0.891±0.035
RxRx1-WILDS ID          | 100.0±0.0% | 0.000±0.000 | 1.000±0.000 | 1.000±0.000
RxRx1-WILDS OOD         | 97.5±7.2%  | 0.031±0.088 | 0.997±0.006 | 0.989±0.024
CivilComments-WILDS ID  | 93.3±5.3%  | 0.002±0.007 | 0.997±0.008 | 0.971±0.067

model architectures, and naturally occurring distribution shift types from the WILDS benchmark (Koh et al., 2021).

5.1. General Evaluation Setup

We evaluate the suitability filter on FMoW-WILDS (Christie et al., 2018), CivilComments-WILDS (Borkan et al., 2019), and RxRx1-WILDS (Taylor et al., 2019). For each dataset, we follow the recommended training paradigm to train a model M using empirical risk minimization and the pre-defined Dtrain ∼ Dsource. We then further split the provided in-distribution (ID) and out-of-distribution (OOD) validation and test splits into folds as detailed in Appendix A.2.2 (16 ID and 30
OOD folds for FMoW-WILDS, 4 ID and 8 OOD folds for RxRx1-WILDS, and 16 ID folds for CivilComments-WILDS). We conduct two types of experiments: first, each ID fold is used as the user dataset (Du), and the remaining ID data is split into 15 subsets, used as Dtest and Dsf. This yields 16×15×14 experiments for FMoW-WILDS, 4×15×14 for RxRx1-WILDS, and 16×15×14 for CivilComments-WILDS. Second, each OOD fold is used as Du, and the ID data is split into 15 subsets, used for Dtest and Dsf. This yields 30×15×14 experiments for FMoW-WILDS and 8×15×14 for RxRx1-WILDS.

We define the binary suitability ground truth as Acc(M, Du) ≥ Acc(M, Dtest) − m. While statistical guarantees are discussed under margin adjustments in Section 4.4, achieving the necessary calibration error estimates in practice is challenging. In particular, obtaining a reliable approximation for δtarget requires access to a small labeled user dataset ˆDu, which may not always be available. Moreover, even if ˆDu is collected, its representativeness of the true deployment distribution Dtarget is uncertain, introducing potential biases in the accuracy estimation error ∆u. To account for this in our experiments, we set m′ = m for the non-inferiority test, effectively using the predefined margin without additional corrections. We discuss this in more detail in Appendix A.4.1. We evaluate suitability decisions by computing the ROC AUC across significance levels, capturing the trade-off between true and false positives. Additionally, we report PR AUC, accuracy, and false positive rate at the common α = 0.05 threshold.

5.2. Suitability Signals

We use the following suitability signals in our instantiation of the suitability filter (more details in Appendix A.2.1):

- conf_max: Maximum confidence from softmax.
- conf_std: Standard deviation of softmax outputs, indicating confidence variability.
- conf_entropy: Entropy of the softmax outputs, measuring prediction uncertainty.
- conf_ratio: Ratio of the top two class probabilities.
- topk_conf_sum: Sum of the top 10% class probabilities, indicating concentration of probability mass.
- logit_mean: Mean of the logits, representing the overall output magnitude.
- logit_max: Maximum logit value, corresponding to the highest predicted class.
- logit_std: Standard deviation of logits, showing the spread of model outputs.
- logit_diff_top2: Difference between the top two logits, indicating confidence in distinguishing classes.
- loss: Cross-entropy loss w.r.t. the predicted class.
- margin_loss: Difference in cross-entropy loss between the predicted class and the next best class.
- energy: Energy of the logits, computed as the negative log-sum-exponential, measuring model certainty.

5.3. Results

As our work introduces a novel problem setting with no existing baselines for direct comparison, the primary objective of the following is to provide an intuition for the conditions under which our approach works effectively, its limitations, and the factors influencing its performance.

Table 1 summarizes the performance of the proposed suitability filter across three benchmark datasets from the WILDS collection: FMoW-WILDS, RxRx1-WILDS, and
CivilComments-WILDS.

Figure 4. Sensitivity of suitability decisions to accuracy differences between user and test data on FMoW-WILDS. The plot, summarizing results from nearly 29k individual experiments, shows the percentage of SUITABLE decisions (split into true and false positives) for α = 0.05 and m = 0 across accuracy difference bins Acc(M, Du) − Acc(M, Dtest) ranging from −7% to 7%. We combine both ID and OOD suitability filter experiments based on 3 models trained with different random seeds.

Although ID and OOD results cannot be directly compared due to the differing numbers of ground truth positives and negatives, interesting trends still emerge. On FMoW-WILDS, for example, we observe higher accuracy and a lower FPR at the 5% significance level for OOD user data, while ROC AUC and PR AUC are higher for ID user data. This discrepancy may stem from class imbalance: across OOD experiments, we have nearly three times as many true negatives as true positives, making it easier to achieve high accuracy despite it generally being harder to maintain discriminative performance in an OOD setting. The latter is also confirmed for RxRx1-WILDS, where we see decreased performance on OOD user data compared to ID user data. Another noteworthy observation is the high overall performance on RxRx1-WILDS. The reason for this is that we observe large differences in model performance on RxRx1-WILDS depending on the fold considered, as can be seen in Table 3 (Appendix). This variation helps the suitability filter detect performance deterioration more easily, as larger performance differences enhance its ability to identify changes.

This sensitivity of suitability decisions to differences in accuracy between the user and test datasets is also illustrated in Figure 4 on FMoW-WILDS for m = 0.
The ideal relationship would be a step function, where SUITABLE decisions occur only when user dataset accuracy exceeds test accuracy. However, achieving this requires a perfect estimate of accuracy on Du, which is impossible without ground truth labels. In practice, we observe that the slope of the suitability decision curve is flatter than the ideal step function. There are a few erroneous SUITABLE decisions when the accuracy difference is below 0%, indicating occasional false positives. However, for differences < −3% (indicating a performance deterioration of at least 3%; this is the case for 8.4k experiments out of nearly 29k in total), our proposed suitability filter achieves 100% accuracy. Additionally, some false negatives are observed in the range [0%, 3%], reflecting scenarios where the empirical evidence provided by Du and Dtest is insufficient to reject the inferiority null hypothesis at the chosen significance level α = 0.05. However, for accuracy difference buckets exceeding 3%, the percentage of SUITABLE decisions consistently exceeds 80% and increases to 100% above a 6% accuracy difference, demonstrating the robustness of the approach in scenarios with sufficiently large accuracy differences. Additional experiments, results, and interpretations can be found in Appendix A.4.

6. Discussion

Conclusion. We introduce the suitability filter, a novel framework for evaluating whether model performance on unlabeled downstream data in
real-world deployment settings deteriorates compared to its performance on test data. We present an instantiation for classification accuracy that leverages statistical hypothesis testing. We provide theoretical guarantees on the false positive rate of suitability decisions and propose a margin adjustment strategy to account for calibration errors. Through extensive experiments on real-world datasets from the WILDS benchmark, we demonstrate the effectiveness of suitability filters across diverse covariate shifts. Our findings highlight the potential of suitability filters as a practical tool for model monitoring, enabling more reliable and interpretable deployment decisions in dynamic environments. Suitability filters provide an effective way to expose model capabilities and limitations and thus enable auditable service level agreements (SLAs).

Possible Extensions. The suitability filter framework's modularity makes it adaptable to various contexts. For fairness assessments, for instance, ensuring comparable accuracy across groups can be achieved by substituting the non-inferiority test with an equivalence test (Wellek, 2002) to evaluate if performance differences fall within a predefined margin. If the goal extends beyond snapshot evaluations to continuous monitoring, this can be achieved by applying multiple hypothesis testing corrections to the p-values. Similarly, the framework can support sequential testing, where a decision is made iteratively: a user provides an initial sample, and more data can be requested if no conclusion is reached, using methods such as O'Brien-Fleming (O'Brien & Fleming, 1979) or Pocock (Pocock, 2013) for controlling error rates. For a more detailed discussion of these extensions, we refer the interested reader to Appendix A.3.

Limitations. Our method is designed to detect accuracy degradations due to covariate shifts and does not address other types of distribution shift, such as label shift.
This is due to the assumption that we generally only have access to unlabeled data from the target distribution Dtarget. Future work could extend this by incorporating information from a (potentially small) number of labeled samples from the target distribution. Moreover, our current approach is limited to classification due to the choice of signals. Though our set of suitability signals, designed to be applicable across data types, model architectures, and training paradigms, provides a useful baseline, choosing signals tailored to the specific deployment setting would likely improve suitability filter performance. While our framework is general and could be used with different performance metrics, our current instantiation and experimental evaluation are limited to accuracy. It thus focuses on scenarios where average-case performance is the primary concern and does not address safety-critical applications where ensuring good performance on a per-instance (or worst-case) basis is often crucial. Lastly, it should also be noted that one of the key underlying assumptions of our framework is non-adversarial behavior from both model providers and users, who are expected to provide representative data. This assumption is justified by the user's goal of identifying a suitable model for their task, but it implies vulnerability to deliberate adversarial manipulation designed to bypass the filter.

Code Availability

The source code for the
suitability filter framework and the experiments presented in this paper is publicly available on GitHub at https://github.com/cleverhans-lab/suitability.

Acknowledgements

We thank Anvith Thudi, Mike Menart, David Glukhov, and other members of the Cleverhans group for their feedback on this work. We would like to acknowledge our sponsors, who support our research with financial and in-kind contributions: Apple, CIFAR through the Canada CIFAR AI Chair, Meta, Microsoft, NSERC through the Discovery Grant and an Alliance Grant with ServiceNow and DRDC, the Ontario Early Researcher Award, and the Schmidt Sciences foundation through the AI2050 Early Career Fellow program. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.

Impact Statement

This work proposes a suitability filter framework for evaluating machine learning models in real-world deployment settings, specifically designed to detect performance degradations caused by covariate shifts. The primary goal is to improve the robustness and fairness of machine learning models by offering tools to assess their suitability without requiring ground-truth labels. By facilitating reliable model evaluation, this work has the potential to enhance trustworthiness in automated systems, especially in safety-critical deployment contexts. However, there are ethical considerations to note. The methodology assumes access to well-calibrated prediction correctness estimators, which might not hold in all scenarios, potentially leading to incorrect evaluations. Additionally, while the framework is adaptable, improper parameter choices or misinterpretations of results could exacerbate existing biases in datasets or models. Careful application and thorough understanding of the framework are critical to mitigating these risks.
Future societal consequences of this work include its potential to improve fairness by enabling consistent performance evaluation across diverse subpopulations. However, misuse of, or overreliance on, such automated evaluation frameworks without human oversight could have adverse effects. We encourage practitioners to complement this framework with domain expertise and ethical considerations during deployment. This paper aims to advance the field of Machine Learning by providing tools for model evaluation in dynamic deployment contexts. While we believe the societal implications are largely positive, we acknowledge the importance of responsibly applying this methodology to prevent unintended harm.

References

Arpit, D., Wang, H., Zhou, Y., and Xiong, C. Ensemble of averages: Improving model selection and boosting performance in domain generalization. Advances in Neural Information Processing Systems, 35:8265–8277, 2022.

Baek, C., Jiang, Y., Raghunathan, A., and Kolter, J. Z. Agreement-on-the-line: Predicting the performance of neural networks under distribution shift. Advances in Neural Information Processing Systems, 35:19274–19289, 2022.

Benjamini, Y. and Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289–300, 1995.

Bialek, J., Kuberski, W., Perrakis, N., and Bifet, A. Estimating model performance under covariate shift without labels. 2024.

Borkan, D., Dixon, L., Sorensen, J., Thain, N., and Vasserman, L. Nuanced metrics for measuring unintended bias with real data for text classification. In Companion Proceedings of the 2019
World Wide Web Conference, pp. 491–500, 2019.

Cha, J., Lee, K., Park, S., and Chun, S. Domain generalization by mutual-information regularization with pre-trained models. In European Conference on Computer Vision, pp. 440–457. Springer, 2022.

Chen, J., Liu, F., Avci, B., Wu, X., Liang, Y., and Jha, S. Detecting errors and estimating accuracy on unlabeled data with self-training ensembles. Advances in Neural Information Processing Systems, 34:14980–14992, 2021a.

Chen, M., Goel, K., Sohoni, N. S., Poms, F., Fatahalian, K., and Ré, C. Mandoline: Model evaluation under distribution shift. In International Conference on Machine Learning, pp. 1617–1629. PMLR, 2021b.

Chow, C.-K. An optimum character recognition system using decision functions. IRE Transactions on Electronic Computers, (4):247–254, 1957.

Christie, G., Fendley, N., Wilson, J., and Mukherjee, R. Functional map of the world. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6172–6180, 2018.

Chuang, C.-Y., Torralba, A., and Jegelka, S. Estimating generalization under distribution shifts via domain-invariant representations. In Proceedings of the 37th International Conference on Machine Learning, pp. 1984–1994, 2020.

Culverhouse, P. F., Williams, R., Reguera, B., Herry, V., and González-Gil, S. Do experts make mistakes? A comparison of human and machine identification of dinoflagellates. Marine Ecology Progress Series, 247:17–25, 2003.

David, S. B., Lu, T., Luu, T., and Pál, D. Impossibility theorems for domain adaptation. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 129–136. JMLR Workshop and Conference Proceedings, 2010.

Deng, W. and Zheng, L. Are labels always necessary for classifier accuracy evaluation?
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15069–15078, 2021.

Deng, W., Gould, S., and Zheng, L. What does rotation prediction tell us about classifier accuracy under varying testing environments? In International Conference on Machine Learning, pp. 2579–2589. PMLR, 2021.

Deng, W., Suh, Y., Gould, S., and Zheng, L. Confidence and dispersity speak: Characterizing prediction matrix for unsupervised accuracy estimation. In International Conference on Machine Learning, pp. 7658–7674. PMLR, 2023.

Donmez, P., Lebanon, G., and Balasubramanian, K. Unsupervised supervised learning I: Estimating classification and regression errors without labels. Journal of Machine Learning Research, 11(4), 2010.

El-Yaniv, R. et al. On the foundations of noise-free selective classification. Journal of Machine Learning Research, 11(5), 2010.

Elsahar, H. and Gallé, M. To annotate or not? Predicting performance drop under domain shift. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2163–2173, 2019.

Fan, W. and Davidson, I. Reverse testing: An efficient framework to select amongst classifiers under sample selection bias. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 147–156, 2006.

Feng, J., Sondhi, A., Perry, J., and Simon, N. Selective prediction-set models with coverage rate guarantees. Biometrics,
79(2):811–825, 2023.

Gal, Y. and Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059. PMLR, 2016.

Gangrade, A., Kag, A., and Saligrama, V. Selective classification via one-sided prediction. In International Conference on Artificial Intelligence and Statistics, pp. 2179–2187. PMLR, 2021.

Garg, S., Balakrishnan, S., Lipton, Z. C., Neyshabur, B., and Sedghi, H. Leveraging unlabeled data to predict out-of-distribution performance. arXiv preprint arXiv:2201.04234, 2022.

Geifman, Y. and El-Yaniv, R. Selective classification for deep neural networks. Advances in Neural Information Processing Systems, 30, 2017.

Geifman, Y. and El-Yaniv, R. SelectiveNet: A deep neural network with an integrated reject option. In International Conference on Machine Learning, pp. 2151–2159. PMLR, 2019.

Geifman, Y., Uziel, G., and El-Yaniv, R. Bias-reduced uncertainty estimation for deep neural classifiers. In International Conference on Learning Representations, 2019.

Ginsberg, T., Liang, Z., and Krishnan, R. G. A learning based hypothesis test for harmful covariate shift. arXiv preprint arXiv:2212.02742, 2022.

Ginsberg, T., Liang, Z., and Krishnan, R. G. A learning based hypothesis test for harmful covariate shift. In The Eleventh International Conference on Learning Representations, 2023.

Guan, L. and Yuan, X. Instance segmentation model evaluation and rapid deployment for autonomous driving using domain differences. IEEE Transactions on Intelligent Transportation Systems, 24(4):4050–4059, 2023.

Guillory, D., Shankar, V., Ebrahimi, S., Darrell, T., and Schmidt, L. Predicting with confidence on unseen distributions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1134–1144, 2021.

Gulrajani, I.
and Lopez-Paz, D. In search of lost domain generalization. In International Conference on Learning Representations, 2021.

Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. On calibration of modern neural networks. In International Conference on Machine Learning, pp. 1321–1330. PMLR, 2017.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Hébert-Johnson, U., Kim, M., Reingold, O., and Rothblum, G. Multicalibration: Calibration for the (computationally-identifiable) masses. In International Conference on Machine Learning, pp. 1939–1948. PMLR, 2018.

Hendrycks, D. and Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, 2018.

Hendrycks, D. and Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016.

Hu, Q., Guo, Y., Xie, X., Cordy, M., Papadakis, M., Ma, L., and Le Traon, Y. Aries: Efficient testing of deep neural networks via labeling-free accuracy estimation. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 1776–1787. IEEE, 2023.

Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708, 2017.

Huang,
L., Zhang, C., and Zhang, H. Self-adaptive training: beyond empirical risk minimization. Advances in Neural Information Processing Systems, 33:19365–19376, 2020.

Jaffe, A., Nadler, B., and Kluger, Y. Estimating the accuracies of multiple classifiers without labeled data. In Artificial Intelligence and Statistics, pp. 407–415. PMLR, 2015.

Jiang, Y., Nagarajan, V., Baek, C., and Kolter, J. Z. Assessing generalization of SGD via disagreement. arXiv preprint arXiv:2106.13799, 2021.

Kingma, D. P. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Kivimäki, J., Białek, J., Nurminen, J. K., and Kuberski, W. Confidence-based estimators for predictive performance in model monitoring. arXiv preprint arXiv:2407.08649, 2024.

Koh, P. W., Sagawa, S., Marklund, H., Xie, S. M., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R. L., Gao, I., et al. WILDS: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning, pp. 5637–5664. PMLR, 2021.

Kozodoi, N., Jacob, J., and Lessmann, S. Fairness in credit scoring: Assessment, implementation and profit implications. European Journal of Operational Research, 297(3):1083–1094, 2022.

Kulinski, S. and Inouye, D. I. Towards explaining distribution shifts. In International Conference on Machine Learning, pp. 17931–17952. PMLR, 2023.

Lakshminarayanan, B., Pritzel, A., and Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in Neural Information Processing Systems, 30, 2017.

Lehmann, E. L., Romano, J. P., and Casella, G. Testing Statistical Hypotheses, volume 3. Springer, 1986.

Li, L., Wang, H., Zha, L., Huang, Q., Wu, S., Chen, G., and Zhao, J. Learning a data-driven policy network for pre-training automated feature engineering. In The Eleventh International Conference on Learning Representations, 2023.

Liu, Z., Wang, Z., Liang, P. P., Salakhutdinov, R. R., Morency, L.-P., and Ueda, M.
Deep gamblers: Learning to abstain with portfolio theory. Advances in Neural Information Processing Systems, 32, 2019.

Loshchilov, I., Hutter, F., et al. Fixing weight decay regularization in Adam. arXiv preprint arXiv:1711.05101, 5, 2017.

Lu, Y., Wang, Z., Zhai, R., Kolouri, S., Campbell, J., and Sycara, K. Predicting out-of-distribution error with confidence optimal transport. arXiv preprint arXiv:2302.05018, 2023.

Madani, O., Pennock, D., and Flake, G. Co-validation: Using model disagreement on unlabeled data to validate classification algorithms. Advances in Neural Information Processing Systems, 17, 2004.

Maggio, S., Bouvier, V., and Dreyfus-Schmidt, L. Performance prediction under dataset shift. In 2022 26th International Conference on Pattern Recognition (ICPR), pp. 2466–2474. IEEE, 2022.

Maini, P., Yaghini, M., and Papernot, N. Dataset inference: Ownership resolution in machine learning. arXiv preprint arXiv:2104.10706, 2021.

Miao, S., Zheng, L., Liu, J., and Jin, H. K-means clustering based feature consistency alignment for label-free model evaluation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3299–3307, 2023.

O'Brien, P. C. and Fleming, T. R. A multiple testing procedure for clinical trials. Biometrics, pp. 549–556, 1979.

Ovadia, Y., Fertig, E., Ren, J., Nado, Z., Sculley,
D., Nowozin, S., Dillon, J., Lakshminarayanan, B., and Snoek, J. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. Advances in Neural Information Processing Systems, 32, 2019.

Peng, R., Duan, Q., Wang, H., Ma, J., Jiang, Y., Tu, Y., Jiang, X., and Zhao, J. CAME: Contrastive automated model evaluation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 20121–20132, 2023.

Peng, R., Zou, H., Wang, H., Zeng, Y., Huang, Z., and Zhao, J. Energy-based automated model evaluation. arXiv preprint arXiv:2401.12689, 2024.

Platanios, E., Poon, H., Mitchell, T. M., and Horvitz, E. J. Estimating accuracy from unlabeled data: A probabilistic logic approach. Advances in Neural Information Processing Systems, 30, 2017.

Platanios, E. A., Dubey, A., and Mitchell, T. Estimating accuracy from unlabeled data: A Bayesian approach. In International Conference on Machine Learning, pp. 1416–1425. PMLR, 2016.

Platt, J. et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74, 1999.

Pocock, S. J. Clinical Trials: A Practical Approach. John Wiley & Sons, 2013.

Podkopaev, A. and Ramdas, A. Tracking the risk of a deployed model and detecting harmful distribution shifts. arXiv preprint arXiv:2110.06177, 2021.

Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A., and Lawrence, N. D. Dataset Shift in Machine Learning. MIT Press, 2022.

Rabanser, S., Günnemann, S., and Lipton, Z. Failing loudly: An empirical study of methods for detecting dataset shift. Advances in Neural Information Processing Systems, 32, 2019.

Rabanser, S., Thudi, A., Hamidieh, K., Dziedzic, A., and Papernot, N. Selective classification via neural network training dynamics. arXiv preprint arXiv:2205.13532, 2022.

Redyuk, S., Schelter, S., Rukat, T., Markl, V., and Biessmann, F.
Learning to validate the predictions of black box machine learning models on unseen data. In Proceedings of the Workshop on Human-In-the-Loop Data Analytics, pp. 1–4, 2019.

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115:211–252, 2015.

Sanh, V. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.

Schelter, S., Rukat, T., and Bießmann, F. Learning to validate the predictions of black box classifiers on unseen data. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pp. 1289–1299, 2020.

Sun, X., Hou, Y., Li, H., and Zheng, L. Label-free model evaluation with semi-structured dataset representations. arXiv preprint arXiv:2112.00694, 2021.

Tal, E. Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pp. 312–321, 2023.

Taylor, J., Earnshaw, B., Mabey, B., Victors, M., and Yosinski, J. RxRx1: An image set for cellular morphological variation across many experimental batches. In International Conference on Learning Representations (ICLR), volume 22, pp. 23, 2019.

Travaini, G. V., Pacchioni,
F., Bellumore, S., Bosia, M., and De Micco, F. Machine learning and criminal justice: A systematic review of advanced methodology for recidivism risk prediction. International Journal of Environmental Research and Public Health, 19(17):10594, 2022.

Tu, W., Deng, W., Gedeon, T., and Zheng, L. A bag-of-prototypes representation for dataset-level applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2881–2892, 2023.

Unterthiner, T., Keysers, D., Gelly, S., Bousquet, O., and Tolstikhin, I. Predicting neural network accuracy from weights. arXiv preprint arXiv:2002.11448, 2020.

Walker, E. and Nowacki, A. S. Understanding equivalence and noninferiority testing. Journal of General Internal Medicine, 26:192–196, 2011.

Wang, J., Lan, C., Liu, C., Ouyang, Y., Qin, T., Lu, W., Chen, Y., Zeng, W., and Philip, S. Y. Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering, 35(8):8052–8072, 2022.

Wang, J., Chen, J., and Su, B. Toward auto-evaluation with confidence-based category relation-aware regression. In ICASSP 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5. IEEE, 2023.

Welch, B. L. The generalization of 'Student's' problem when several different population variances are involved. Biometrika, 34(1-2):28–35, 1947.

Wellek, S. Testing Statistical Hypotheses of Equivalence. Chapman and Hall/CRC, 2002.

Wiles, O., Gowal, S., Stimberg, F., Alvise-Rebuffi, S., Ktena, I., Dvijotham, K., and Cemgil, T. A fine-grained analysis on distribution shift. arXiv preprint arXiv:2110.11328, 2021.

Xie, R., Wei, H., Feng, L., Cao, Y., and An, B. On the importance of feature separability in predicting out-of-distribution error. Advances in Neural Information Processing Systems, 36, 2024.
Yu, Y., Yang, Z., Wei, A., Ma, Y., and Steinhardt, J. Predicting out-of-distribution error with the projection norm. In International Conference on Machine Learning, pp. 25721–25746. PMLR, 2022.

Zhou, K., Yang, Y., Qiao, Y., and Xiang, T. Domain generalization with MixStyle. In International Conference on Learning Representations, 2021.

Zhou, K., Liu, Z., Qiao, Y., Xiang, T., and Loy, C. C. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4396–4415, 2022.

A. Appendix

A.1. Statistical Hypothesis Testing

A.1.1. Welch's t-test

Welch's t-test is a modification of the standard Student's t-test that adjusts for unequal variances and unequal sample sizes between the two groups (Welch, 1947). The test statistic for the suitability filter Welch's t-test is given by:

t = \frac{\hat{\mu}_{D_{\mathrm{test}}} - \hat{\mu}_{D_u}}{\sqrt{\hat{\sigma}^2_{\mathrm{test}}/|D_{\mathrm{test}}| + \hat{\sigma}^2_u/|D_u|}}    (13)

In the above, \hat{\mu}_{D_u} = \frac{1}{|D_u|}\sum_{x \in D_u} p_c(x) + m is the margin-adjusted sample mean of p_c[D_u] and \hat{\mu}_{D_{\mathrm{test}}} = \frac{1}{|D_{\mathrm{test}}|}\sum_{x \in D_{\mathrm{test}}} p_c(x) is the sample mean of p_c[D_test]. \hat{\sigma}^2_u and \hat{\sigma}^2_{\mathrm{test}} are the sample variances of p_c[D_u] and p_c[D_test], respectively. The test statistic follows a t-distribution with degrees of freedom (df) given by:

df = \frac{\left(\hat{\sigma}^2_{\mathrm{test}}/|D_{\mathrm{test}}| + \hat{\sigma}^2_u/|D_u|\right)^2}{\frac{\left(\hat{\sigma}^2_{\mathrm{test}}/|D_{\mathrm{test}}|\right)^2}{|D_{\mathrm{test}}|-1} + \frac{\left(\hat{\sigma}^2_u/|D_u|\right)^2}{|D_u|-1}}    (14)

The degrees of freedom depend on the size of the provided user and test samples and are used to determine
the appropriate critical value for the t-distribution. Since non-inferiority testing is inherently one-sided, after calculating the two-sample t-test statistic, the p-value is divided by 2 to reflect the one-sided nature of the non-inferiority test. This adjusted p-value is then compared to the chosen significance level α to determine whether the null hypothesis H_0 can be rejected.

A.1.2. Proof of Lemma 4.3

Proof. In the following, all expectations and probabilities are over samples (x, y) ∼ X × Y unless specified otherwise. We begin by noting that the predicted correctness probability for a given sample x is denoted as p_c(x). We assume that p_c(x) has a well-defined probability density function f_c(ν) over the interval [0, 1]. This means that the predicted correctness probabilities p_c(x) for samples x are distributed according to f_c(ν), where ν ∈ [0, 1] represents the possible values of prediction correctness. We can hence represent the expected value of the correctness probability p_c(x) under the distribution defined by f_c(ν) as:

E[p_c(x)] = \int_0^1 \nu f_c(\nu)\, d\nu.    (15)

The true probability of prediction correctness for a model M on a sample x with predicted label M(x) and true label y is denoted as P[M(x) = y]. This can be expressed as the integral over all possible predicted correctness probabilities ν, weighted by the conditional probability of correctness given p_c(x) = ν and the probability density f_c(ν). This decomposition follows from the law of total probability, where the predicted correctness probability p_c(x) serves as an intermediate variable. We can hence write:

P[M(x) = y] = \int_0^1 P[M(x) = y \mid p_c(x) = \nu]\, f_c(\nu)\, d\nu.    (16)

Due to the inherent uncertainty and error in the calibration process, the predicted probability p_c(x) is not necessarily equal to the true probability P_x[M(x) = y]. We model this calibration error as ε(ν), which represents the deviation of the predicted correctness from the true correctness for each possible correctness value ν.
Following Equation 8, and since ε(ν) is the deviation of the predicted correctness ν from the true correctness, we can decompose the true probability of prediction correctness as:

P_x[M(x) = y] = \int_0^1 \nu f_c(\nu)\, d\nu - \int_0^1 \epsilon(\nu) f_c(\nu)\, d\nu = E_x[p_c(x)] - \int_0^1 \epsilon(\nu) f_c(\nu)\, d\nu.    (17)

The first term represents the expected value of the predicted correctness, while the second term represents the expected error introduced by the calibration process. Now, we make the assumption that the calibration error is equal to some small value δ with 0 ≤ |δ| ≪ 1. This assumption is referred to as δ-calibration as defined in Definition 4.1. Under this assumption, we have that:

\int_0^1 \epsilon(\nu) f_c(\nu)\, d\nu = \delta.    (18)

Combining this result with Equation 17, we find for the difference between the true probability and the expected correctness that:

E[p_c(x)] - P[M(x) = y] = \delta.    (19)

This completes the proof of Lemma 4.3.

A.1.3. Proof of Corollary 4.4

Proof. For simplicity, let us use the following notation:

Acc_source := P_{x∼D_source}[M(x) = O(x)],    (20)
Acc_target := P_{x∼D_target}[M(x) = O(x)],    (21)
µ_source := E_{x∼D_source}[p_c(x)],    (22)
µ_target := E_{x∼D_target}[p_c(x)].    (23)

Here, O(x) represents an oracle that provides the true label y for any input x. Let M be a model with correctness estimator C that is δ-calibrated on both the source and target distributions. It hence follows from Lemma 4.3 that the predictions output by the correctness estimator C satisfy:

µ_source − Acc_source = δ_source   and   µ_target − Acc_target = δ_target.    (24)

We are interested in upper-bounding the false positive rate of the end-to-end suitability filter, i.e.,
the probability of rejecting the null hypothesis H_0 at significance level α and returning SUITABLE when, in reality:

Acc_target < Acc_source − m.    (25)

We can define m' := m + δ_source − δ_target and leverage Equation 24 to write:

Acc_target < Acc_source − m  ⟺  µ_target − δ_target < µ_source − δ_source − m  ⟺  µ_target < µ_source − m'.    (26)

With this margin m', the corresponding null hypothesis for the non-inferiority test H_0 is:

H_0: µ_target < µ_source − m'.    (27)

Assuming normality and independence and applying Theorem 4.2 at significance level α, it is guaranteed that for the true mean prediction correctness µ_source and µ_target of the source and target distributions, the probability of rejecting the null hypothesis H_0: µ_target < µ_source − m' (i.e., concluding µ_target ≥ µ_source − m') when H_0 is true is controlled at α:

P(Reject H_0 | H_0 is true) ≤ α.    (28)

This ensures that the difference in mean prediction correctness between the source and target distributions is bounded by the margin m' with high probability. Let us now derive the implication for the end-to-end suitability filter. By definition:

P(Reject H_0 | Acc_target < Acc_source − m)
  = P(Reject H_0 ∩ Acc_target < Acc_source − m) / P(Acc_target < Acc_source − m)
  = P(Reject H_0 ∩ µ_target < µ_source − m') / P(µ_target < µ_source − m')
  = P(Reject H_0 | H_0 is true)
  ≤ α.    (29)

The inequality follows from the guarantees of the non-inferiority test as outlined in Equation 28. All other transformations are applications of Bayes' theorem and Equation 26.

Under perfect calibration, δ_target = δ_source = 0 and thus no adjustments to the performance deviation margin m are needed to achieve a bounded false positive rate. Hence, under perfect calibration, the probability of rejecting the null hypothesis for a non-inferiority test with margin m, given that the model accuracy on D_target is lower than on D_source by more than m, is upper bounded by the chosen significance level α.
If we do incur miscalibration and observe |δ_target| > 0 or |δ_source| > 0, we have to adjust the performance deviation margin m accordingly to reflect this. As shown, when choosing m' := m + δ_source − δ_target, the end-to-end suitability filter false positive rate remains bounded. This concludes the proof of Corollary 4.4.

A.2. Additional Experiment Details

A.2.1. Suitability Signals

General Suitability Signals. Let M ∈ M be a classifier mapping inputs x ∈ X to probabilities over k classes Y = {1, ..., k}. Denote the logits of M(x) as z ∈ R^k and the softmax outputs as p = softmax(z), where p_i = e^{z_i} / \sum_{j=1}^k e^{z_j}. The following sample-level signals are derived from z and p:

- Maximum confidence (conf_max):

  conf_max = \max_{i \in \{1,...,k\}} p_i    (30)

  The maximum predicted probability.

- Confidence standard deviation (conf_std):

  conf_std = \sqrt{\frac{1}{k}\sum_{i=1}^k (p_i - \bar{p})^2},   \bar{p} = \frac{1}{k}\sum_{i=1}^k p_i    (31)

  The standard deviation of the softmax probabilities.

- Confidence entropy (conf_entropy):

  conf_entropy = -\sum_{i=1}^k p_i \log(p_i + \epsilon)    (32)

  The Shannon entropy of the predicted probabilities, measuring uncertainty. We add ε = 10^{-10} for numerical stability.

- Confidence ratio (conf_ratio):

  conf_ratio = \frac{p_{(1)}}{p_{(2)} + \epsilon}    (33)

  The ratio of the highest to the second-highest predicted probabilities, where p_{(1)} and p_{(2)} are the largest and second-largest p_i, respectively. We add ε = 10^{-10} for numerical stability.

- Sum of top 10% confidences (topk_conf_sum):

  topk_conf_sum = \sum_{i \in K} p_i,   K = indices of the top-⌈0.1k⌉ probabilities    (34)

  The sum of the largest 10% of all predicted probabilities.

- Mean logit (logit_mean):

  logit_mean = \frac{1}{k}\sum_{i=1}^k z_i    (35)

  The mean of the logits.

- Maximum logit (logit_max):

  logit_max = \max_{i \in \{1,...,k\}} z_i    (36)

  The maximum logit value.

- Logit standard deviation (logit_std):

  logit_std = \sqrt{\frac{1}{k}\sum_{i=1}^k (z_i - \bar{z})^2},   \bar{z} = \frac{1}{k}\sum_{i=1}^k z_i    (37)

  The standard deviation of the logits.

- Difference between two largest logits (logit_diff_top2):

  logit_diff_top2 = z_{(1)} - z_{(2)}    (38)

  The difference between the two largest logits, where z_{(1)} and z_{(2)} are the largest and second-largest z_i, respectively.

- Loss with respect to predicted label (loss):

  loss = -\log(p_{(1)} + \epsilon)    (39)

  The cross-entropy loss with respect to the predicted label, where p_{(1)} is the largest p_i.

- Difference in loss between top two classes (margin_loss):

  margin_loss = -\log(p_{(1)} + \epsilon) + \log(p_{(2)} + \epsilon)    (40)

  The difference in cross-entropy loss between the top two predicted probabilities. We add ε = 10^{-10} for numerical stability.

- Energy (energy):

  energy = -\log \sum_{i=1}^k e^{z_i}    (41)

  The energy function derived from the logits, measuring prediction certainty.

Alternative Suitability Signals. In our work, we deliberately rely on suitability signals that avoid assumptions about architecture, training, or data domains and are applicable to any classifier. However, many other signals shown to be indicative of model performance have been proposed in the literature.
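For concreteness, the general signals listed above can all be computed from a single logit vector in a few lines of standard-library Python. The sketch below is illustrative only (the function name and exact numerics are ours, not taken from any released code) and assumes k ≥ 2 classes:

```python
import math

def suitability_signals(z, eps=1e-10):
    """Per-sample suitability signals from a logit vector z (length k >= 2)."""
    k = len(z)
    zmax = max(z)
    exps = [math.exp(v - zmax) for v in z]   # numerically stable softmax
    total = sum(exps)
    p = [e / total for e in exps]
    p_sorted = sorted(p, reverse=True)       # p_(1) >= p_(2) >= ...
    z_sorted = sorted(z, reverse=True)
    p_bar = sum(p) / k
    z_bar = sum(z) / k
    top_k = max(1, math.ceil(0.1 * k))       # top 10% of classes, Eq. (34)
    return {
        "conf_max": p_sorted[0],                                          # Eq. (30)
        "conf_std": math.sqrt(sum((pi - p_bar) ** 2 for pi in p) / k),    # Eq. (31)
        "conf_entropy": -sum(pi * math.log(pi + eps) for pi in p),        # Eq. (32)
        "conf_ratio": p_sorted[0] / (p_sorted[1] + eps),                  # Eq. (33)
        "topk_conf_sum": sum(p_sorted[:top_k]),                           # Eq. (34)
        "logit_mean": z_bar,                                              # Eq. (35)
        "logit_max": z_sorted[0],                                         # Eq. (36)
        "logit_std": math.sqrt(sum((zi - z_bar) ** 2 for zi in z) / k),   # Eq. (37)
        "logit_diff_top2": z_sorted[0] - z_sorted[1],                     # Eq. (38)
        "loss": -math.log(p_sorted[0] + eps),                             # Eq. (39)
        "margin_loss": -math.log(p_sorted[0] + eps)
                       + math.log(p_sorted[1] + eps),                     # Eq. (40)
        "energy": -math.log(sum(math.exp(v) for v in z)),                 # Eq. (41)
    }
```

In a real pipeline these per-sample values would be computed for every x and fed to the suitability classifier as features; subtracting the maximum logit before exponentiating keeps the softmax stable for large logits.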
In unsupervised accuracy estimation, more recent approaches measure disagreement between predictions by different models (Madani et al., 2004; Donmez et al., 2010; Platanios et al., 2016; 2017; Chen et al., 2021a; Baek et al., 2022; Jiang et al., 2021; Jaffe et al., 2015; Fan & Davidson, 2006; Yu et al., 2022; Chuang et al., 2020; Ginsberg et al., 2023), rely on manual provision of information about, or make assumptions on, the nature of the distribution shift between training and deployment (Redyuk et al., 2019; Chen et al., 2021b; Elsahar & Gallé, 2019; Guillory et al., 2021; Schelter et al., 2020; Peng et al., 2024; 2023; Deng & Zheng, 2021), focus on specific input data types (Maggio et al., 2022; Deng et al., 2021; Bialek et al., 2024; Sun et al., 2021; Deng & Zheng, 2021; Unterthiner et al., 2020; Guan & Yuan, 2023; Li et al., 2023), or analyze classification decision boundaries and feature separability (Hu et al., 2023; Xie et al., 2024; Tu et al., 2023; Miao et al., 2023). To ensure generality and broad applicability of the suitability filter across diverse settings, these signals are not included in our experimental evaluation. However, signals shown to predict accuracy in these studies could serve as additional suitability signals in scenarios where their specific constraints are met. Similarly, in selective classification, recent methods enhance the underlying model by augmenting its architecture (Geifman & El-Yaniv, 2019; Lakshminarayanan et al., 2017), employing adapted loss functions during training (Gangrade et al., 2021; Huang et al., 2020; Liu et al., 2019), or utilizing more advanced prediction correctness signals, albeit often with increased inference costs (Geifman et al., 2019; Gal & Ghahramani, 2016; Rabanser et al., 2022; Feng et al., 2023). These approaches require modifications to model architecture, training processes, or inference, and are thus not generally applicable. Having said that, while these approaches are not
incorporated into our work, they can serve as additional suitability signals in scenarios where these modifications are feasible.

A.2.2. Datasets and Models

FMoW-WILDS. The FMoW-WILDS dataset contains satellite images taken in different geographical regions and in different years (Christie et al., 2018; Koh et al., 2021), thus covering both temporal and geographical shift. The input x is an RGB satellite image (resized to 224×224 pixels), the label y is one of 62 building or land use categories, and the domain represents the year the image was taken and its geographical region. We train a DenseNet-121 model (Huang et al., 2017) pretrained on ImageNet (Russakovsky et al., 2015), without L2 regularization, using empirical risk minimization. We use the Adam optimizer (Kingma, 2014) with an initial learning rate of 10^{-4} that decays by 0.96 per epoch, and train for 50 epochs with early stopping and a batch size of 64. All reported results are averaged over 3 random seeds. Following the standard WILDS training setup, we use 76,863 images from the years 2002–2013 as training data. We split the remaining ID and OOD splits into 16 different ID folds and 30 different OOD data folds as detailed in Table 2. These folds were chosen with the aim of being as representative as possible of shifts likely to occur in practice while still ensuring a sufficient number of samples per fold for statistical testing (at least 666).

RxRx1-WILDS. The RxRx1-WILDS dataset reflects the distribution shifts induced by batch effects in the context of genetic perturbation classification (Taylor et al., 2019; Koh et al., 2021).
The input x is a 3-channel image of human cells obtained by fluorescent microscopy (nuclei, endoplasmic reticuli, and actin), the label y indicates which of the 1,139 genetic treatments (including no treatment) the cells received, and the domain specifies the batch in which the imaging experiment was run. The images in RxRx1-WILDS are the result of executing the same experiment 51 times, each in a different batch of experiments. Each experiment was run in a single cell type, one of: HUVEC (24 experiments), RPE (11 experiments), HepG2 (11 experiments), and U2OS (5 experiments), across 2 sites. The dataset is split by experimental batches into training, validation, and test sets. For all experiments, we fine-tune a ResNet-50 model (He et al., 2016) pretrained on ImageNet (Russakovsky et al., 2015), using a learning rate of 10^{-4} and an L2 regularization strength of 10^{-5}. The models are trained with the Adam optimizer (Kingma, 2014) and a batch size of 75 for 90 epochs, linearly increasing the learning rate for 10 epochs and then decreasing it following a cosine learning rate schedule. Results are reported averaged over 3 random seeds. Following the standard WILDS training setup, we use 40,612 images from 33 experiments in site 1 as training data. We then split the remaining data by cell type into 4 ID data folds (the same experiments as the training data but different images, from site 2) and 8 OOD data folds (different experiments, combining sites 1 and 2) as detailed in Table 3.

CivilComments-WILDS. The CivilComments-WILDS dataset focuses on text toxicity classification across demographic identities, aiming to address biases
in toxicity classifiers that can spuriously associate toxicity with certain demographic mentions (Borkan et al., 2019; Koh et al., 2021). The input x is a text comment on an online article, and the label y is whether the comment was rated as toxic or not. The domain is represented as an 8-dimensional binary vector, where each component corresponds to the mention of one of the 8 demographic identities: male, female, LGBTQ, Christian, Muslim, other religions, Black, and White. The dataset consists of 450,000 comments, annotated for toxicity and demographic mentions by multiple crowdworkers and randomly split into train, validation, and test splits. We hence have no additional OOD data splits (and correspondingly, no OOD data folds) for this dataset. We train a DistilBERT-base-uncased model (Sanh, 2019) with the AdamW optimizer (Loshchilov et al., 2017), using a learning rate of 10^{-5}, a batch size of 16, and an L2 regularization strength of 10^{-2} for 5 epochs with early stopping. All reported results are averaged over 3 random seeds. Following the standard WILDS training setup, we use 269,038 comments as training data. We split the remaining data into 16 different ID folds as detailed in Table 4. Since the data is generally from the same distribution as our training data but we divide it into folds depending on the sensitive attributes mentioned in each comment, this is an example of the target distribution consisting of subpopulations of the source distribution.

A.3. Possible Extensions

A.3.1. Equivalence Testing

In equivalence testing, the goal is to assess whether the performance on the target dataset D_target is statistically similar to the performance on the source dataset D_source within a specified margin m, i.e., we want to test whether the difference between the two means is sufficiently small (Wellek, 2002).
This is formalized as the following hypothesis setup:

H_0: |µ_target − µ_source| > m    (42)
H_1: |µ_target − µ_source| ≤ m    (43)

The null hypothesis H_0 asserts that the difference in means between the target and source distributions is greater than the margin m. The alternative hypothesis H_1 posits that the means are equivalent, with their difference being smaller than or equal to the margin m. In practice, this is achieved by conducting two one-sided tests (TOST). This involves testing both lower and upper bounds of the margin to confirm that the performance difference is not meaningfully large in either direction.

Table 2. Summary of different ID and OOD data folds for FMoW-WILDS. For accuracy, we report the mean and the 95% confidence interval based on three models trained with different random seeds.

SPLIT    YEAR       REGION     NUM SAMPLES  ACCURACY
ID folds:
ID VAL   2002-2006  ALL        1420         53.99±1.32%
ID VAL   2007-2009  ALL        1430         55.78±0.70%
ID VAL   2010       ALL        2459         62.60±1.64%
ID VAL   2011       ALL        2874         65.98±1.25%
ID VAL   2012       ALL        3300         64.03±0.13%
ID VAL   ALL        ASIA       2693         62.42±1.03%
ID VAL   ALL        EUROPE     5268         59.88±0.56%
ID VAL   ALL        AMERICAS   3076         63.85±2.10%
ID TEST  2002-2006  ALL        1473         51.39±4.54%
ID TEST  2007-2009  ALL        1423         57.25±0.61%
ID TEST  2010       ALL        2456         61.01±0.41%
ID TEST  2011       ALL        2837         65.03±1.15%
ID TEST  2012       ALL        3138         62.42±1.36%
ID TEST  ALL        ASIA       2615         59.39±1.94%
ID TEST  ALL        EUROPE     5150         58.99±0.51%
ID TEST  ALL        AMERICAS   3130         63.05±1.39%
OOD folds:
VAL      2013       ALL        3850         60.29±1.81%
VAL      2014       ALL        6192         62.44±1.48%
VAL      2015       ALL        9873         57.77±1.37%
VAL      ALL        ASIA       4121         56.30±0.73%
VAL      ALL        EUROPE     7732         63.28±1.07%
VAL      ALL        AFRICA     803          50.73±1.25%
VAL      ALL        AMERICAS   6562         58.04±2.05%
VAL      ALL        OCEANIA    693          66.38±2.51%
VAL      2013       EUROPE     1620         61.30±1.11%
VAL      2014       EUROPE     2523         68.05±1.94%
VAL      2015       EUROPE     3589         60.82±0.73%
VAL      2013       ASIA       813          57.40±3.89%
VAL      2014       ASIA       1311         56.90±2.31%
VAL      2015       ASIA       1997         55.45±0.26%
VAL      2013       AMERICAS   1168         61.13±1.66%
VAL      2014       AMERICAS   1967         60.85±1.26%
VAL      2015       AMERICAS   3427         55.36±3.32%
TEST     2016       ALL        15959        55.48±1.14%
TEST     2017       ALL        6149         48.64±2.13%
TEST     ALL        ASIA       4963         55.67±0.72%
TEST     ALL        EUROPE     5858         56.38±1.96%
TEST     ALL        AFRICA     2593         33.50±3.87%
TEST     ALL        AMERICAS   8024         56.20±1.17%
TEST     ALL        OCEANIA    666          59.56±0.43%
TEST     2016       EUROPE     4845         58.42±2.68%
TEST     2017       EUROPE     1013         46.63±1.48%
TEST     2016       ASIA       3216         53.58±0.80%
TEST     2017       ASIA       1747         59.53±1.42%
TEST     2016       AMERICAS   6165         57.21±1.42%
TEST     2017       AMERICAS   1859         52.86±1.49%

Table 3. Summary of different ID and OOD data folds for RxRx1-WILDS. For accuracy, we report the mean and the 95% confidence interval based on three models trained with different random seeds.

SPLIT    CELL TYPE  NUM SAMPLES  ACCURACY
ID folds:
ID TEST  HEPG2      8622         25.39±1.43%
ID TEST  HUVEC      19671        50.30±1.24%
ID TEST  RPE        8623         23.86±1.17%
ID TEST  U2OS       3696         17.00±0.98%
OOD folds:
VAL      HEPG2      2462         21.01±1.77%
VAL      HUVEC      2464         36.85±0.10%
VAL      RPE        2464         16.44±0.96%
VAL      U2OS       2464         2.27±0.30%
TEST     HEPG2      7388         22.63±1.28%
TEST     HUVEC      17244        39.99±1.13%
TEST     RPE        7360         21.32±0.41%
TEST     U2OS       2440         8.96±1.78%

Table 4. Summary of different ID data folds for CivilComments-WILDS.
For accuracy, we report the mean and the 95% confidence interval based on three models trained with different random seeds.

SPLIT  SENSITIVE ATTRIBUTE  NUM SAMPLES  ACCURACY
VAL    MALE                 4765         89.31±0.22%
VAL    FEMALE               5891         90.09±0.68%
VAL    LGBTQ                1457         80.00±0.97%
VAL    CHRISTIAN            4550         92.72±0.17%
VAL    MUSLIM               2110         81.52±1.39%
VAL    OTHER RELIGIONS      986          85.87±0.77%
VAL    BLACK                1652         77.85±1.14%
VAL    WHITE                2867         77.26±0.76%
TEST   MALE                 14295        88.84±0.16%
TEST   FEMALE               16449        90.02±0.18%
TEST   LGBTQ                4426         79.78±0.68%
TEST   CHRISTIAN            13361        92.22±0.25%
TEST   MUSLIM               6982         82.65±0.58%
TEST   OTHER RELIGIONS      3500         88.18±0.58%
TEST   BLACK                4872         78.39±0.68%
TEST   WHITE                7969         79.88±0.47%

A.3.2. Continuous Monitoring

In continuous monitoring, the aim is to regularly re-evaluate if a model is still suitable for a given deployment context based on new incoming data samples. When performance is evaluated over time with changing data, the Benjamini-Hochberg (BH) procedure is used to control the false discovery rate (FDR) across multiple tests (Benjamini & Hochberg, 1995). The BH procedure adjusts p-values by considering the number of tests performed up to the current point, ensuring that the proportion of false positives remains controlled. This is formalized as follows: for each p-value p_i, the null hypothesis is rejected if p_i ≤ (i/m) · α, where m is the total number of tests and α is the desired FDR threshold. The rolling window approach further refines this by
https://arxiv.org/abs/2505.22356v1
evaluating significance across a fixed window of recent data, smoothing out short-term fluctuations and focusing on long-term trends in performance. This approach helps identify true changes in model performance while accounting for variations in individual datasets over time.

A.3.3. SEQUENTIAL TESTING

When testing the same hypothesis sequentially with accumulating data, the O'Brien-Fleming (O'Brien & Fleming, 1979) and Pocock (Pocock, 2013) methods are used to control the overall false positive rate (Type I error rate) across multiple tests. These methods are designed for sequential testing, where a decision is made at each stage based on the data collected so far, and more data can be added if no conclusion is reached. The O'Brien-Fleming method is more conservative early on, requiring stronger evidence to reject the null hypothesis at earlier stages and relaxing this criterion as more data becomes available. Specifically, the significance threshold at stage k is adjusted as:

    α_k = 1 − (1 − α)^{1/(n−k+1)}    (44)

where n is the total number of stages and α is the desired overall Type I error rate. In contrast, the Pocock method applies a constant critical value across all stages of testing. For each stage k, the significance level remains:

    α_k = α/n    (45)

Both methods adjust the significance threshold at each stage to control the family-wise error rate (FWER), ensuring that the probability of making at least one Type I error remains below a specified threshold α.

A.4. Additional Results

A.4.1. CALIBRATION

Impact of Calibration on D_target. We visualize the impact of calibration on classifier C and the performance of the end-to-end suitability filter in Figure 5. To this end, we select the ID validation and test splits from Europe as D_sf and D_test, respectively.
We then plot the actual accuracy versus the mean of the prediction correctness estimated by a classifier C trained on D_sf, with or without additional calibration on D_u. It should be noted that this is mainly a theoretical experiment, as in practice calibration on D_u is not possible since we do not have access to ground truth information for user data. We observe that the false positive rate of the end-to-end suitability filter is elevated due to miscalibration on the different distribution D_target. Although the relationship between actual accuracy and mean estimated prediction correctness is weaker without calibration, these metrics remain highly correlated. Therefore, the increased risk from miscalibration can be mitigated by selecting an appropriate non-inferiority margin m.

Figure 5. Suitability filtering on different OOD folds of FMoW-WILDS with and without additional calibration on D_u. We choose a non-inferiority margin of m = 0.05 for this experiment.

Accuracy Estimation Error. In Section 4.4, we propose using the empirical accuracy estimation error Δ to adjust the margin and mitigate the effects of miscalibration in C. To illustrate this in practice, Figure 6 presents the distribution of Δ for both test and user data across 6300 experiments on FMoW-WILDS.
As expected, Δ_test is centered around zero, indicating that the estimated accuracy closely matches the ground truth accuracy and there is no clear directional bias. However, Δ_u is frequently positive, indicating that accuracy is often overestimated. This miscalibration can lead to incorrect suitability decisions. While adjusting the performance deterioration margin m, as proposed in Section 4.4, would mitigate this issue, no such adjustment was applied here to highlight the impact of miscalibration on suitability decisions.

Figure 6. Distribution of the empirical accuracy estimation error Δ for both the user and the test data across 6300 experiments on FMoW-WILDS. The suitability decisions depicted here have been made for a choice of m = 0 without margin adjustment due to miscalibration and at a significance level of α = 0.05.

Notably, suitability decision errors do not occur for examples with large accuracy estimation errors. To better understand this phenomenon, Figure 7 explores the relationship between accuracy estimation error Δ_u and the actual performance degradation from test to user data. As performance deteriorates, the accuracy estimation error tends to increase. However, performance degradation grows at a faster rate than Δ_u, meaning that the overall impact of Δ_u on suitability decisions remains limited for m = 0. This explains why incorrect suitability decisions are primarily concentrated near the decision boundary rather than in cases with extreme accuracy estimation errors (and, correspondingly, larger performance degradation).
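The margin-adjusted decision discussed here can be sketched as a one-sided non-inferiority test whose margin is shrunk by an estimate of Δ. This is a minimal pure-Python illustration using a large-sample normal approximation in place of the exact t-test; the helper names are ours, not the paper's implementation:

```python
from statistics import NormalDist, mean, variance

def noninferiority_p(user_scores, test_scores, margin):
    """p-value for H0: mean(user) <= mean(test) - margin (the performance
    drop exceeds the margin), via a large-sample normal approximation."""
    diff = mean(user_scores) - mean(test_scores) + margin
    se = (variance(user_scores) / len(user_scores)
          + variance(test_scores) / len(test_scores)) ** 0.5
    return 1.0 - NormalDist().cdf(diff / se)

def suitable(user_scores, test_scores, margin=0.05, delta=0.0, alpha=0.05):
    # Shrink the margin by an estimate of the accuracy estimation error
    # (Section 4.4), then declare suitability when H0 is rejected at level alpha.
    return noninferiority_p(user_scores, test_scores, margin - delta) <= alpha
```

With `delta` set to an estimate of Δ_u, an overestimated user accuracy shrinks the effective margin and makes the decision more conservative, mirroring the adjustment described above.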
Figure 7. Relationship between performance deterioration for model M and the empirical accuracy estimation error Δ for the user data across 6300 experiments on FMoW-WILDS. The suitability decisions depicted here have been made for a choice of m = 0 without margin adjustment due to miscalibration and at a significance level of α = 0.05. As can be seen, incorrect suitability decisions are centered around the suitability decision boundary and are not, as might be expected, in areas of large empirical accuracy estimation error Δ.

By incorporating margin adjustments based on the empirical accuracy estimation error, suitability decisions can be made more robust against calibration errors, ultimately improving the reliability of the suitability filter in deployment.

A.4.2. USING DIFFERENT SIGNAL SUBSETS FOR PREDICTION CORRECTNESS ESTIMATOR

In Table 5, we compare our proposed suitability filter, which trains the prediction correctness estimator using various suitability signals, to alternatives that rely on only a single signal. As can be seen, the suitability filter leveraging all signals generally outperforms single-signal alternatives, demonstrating the benefit of integrating diverse signals for robust suitability decisions. However, we find that certain signals, such as energy or logit max, perform nearly as well on their own.
Unsurprisingly, these signals are also identified as the most predictive of per-sample prediction correctness for classifier C (see Appendix A.4.3). Noteworthy outliers in Table 5 include logit mean and logit std, which have relatively high accuracy but higher FPR and lower ROC and PR AUC than comparable signals. Upon closer examination, we find that prediction correctness classifiers C trained on only these signals generally have a higher expected calibration error, even when tested on in-distribution data, as can be seen in Table 6. As demonstrated in Corollary 4.5, proper calibration is theoretically crucial for reliable suitability decisions, and this importance is evident in practice here. Signals that yield the best-performing prediction correctness estimators C (high accuracy and low maximum calibration error) also demonstrate superior performance when applied in the end-to-end suitability filter.

Table 5. Comparing performance of the proposed suitability filter against individual signal-based suitability decisions on FMoW-WILDS for m = 0 with both ID and OOD user data. We report the area under the curve for ROC and PR (capturing the tradeoffs at various significance thresholds), as well as accuracy and the true false positive rate at α = 0.05. We also report 95% confidence intervals based on 3 models M trained on the same D_train with different random seeds.
METHOD              ACC         FPR          ROC          PR

ID USER DATA
SUITABILITY FILTER  81.8±3.1%   0.027±0.033  0.969±0.023  0.967±0.029
ENERGY              80.6±4.3%   0.024±0.040  0.965±0.020  0.962±0.033
LOGIT MAX           80.2±4.8%   0.025±0.041  0.965±0.018  0.963±0.030
LOGIT MEAN          80.1±10.4%  0.112±0.194  0.918±0.113  0.896±0.196
LOGIT DIFF TOP2     73.5±4.5%   0.008±0.001  0.963±0.017  0.963±0.017
MARGIN LOSS         73.5±4.5%   0.008±0.001  0.963±0.017  0.963±0.017
LOGIT STD           72.3±13.3%  0.170±0.134  0.855±0.144  0.779±0.300
CONF ENTROPY        71.1±2.4%   0.003±0.012  0.969±0.014  0.967±0.012
CONF STD            68.8±3.1%   0.008±0.019  0.963±0.020  0.960±0.017
TOP K CONF SUM      68.2±6.9%   0.005±0.015  0.947±0.027  0.944±0.029
CONF MAX            67.9±4.4%   0.008±0.021  0.957±0.025  0.954±0.025
LOSS                67.0±2.8%   0.008±0.021  0.952±0.031  0.948±0.035
CONF RATIO          62.3±4.5%   0.046±0.053  0.846±0.056  0.826±0.016

OOD USER DATA
SUITABILITY FILTER  91.9±2.5%   0.018±0.017  0.965±0.016  0.891±0.035
ENERGY              91.9±4.7%   0.008±0.007  0.971±0.005  0.910±0.028
LOGIT MAX           91.9±4.7%   0.008±0.007  0.971±0.005  0.910±0.030
CONF ENTROPY        89.1±3.1%   0.011±0.020  0.957±0.007  0.872±0.078
CONF STD            88.9±4.0%   0.010±0.019  0.952±0.013  0.854±0.108
LOGIT DIFF TOP2     88.9±2.7%   0.005±0.011  0.976±0.014  0.917±0.074
MARGIN LOSS         88.9±2.7%   0.005±0.011  0.976±0.014  0.917±0.074
CONF MAX            88.4±4.2%   0.011±0.020  0.948±0.015  0.842±0.121
LOSS                88.3±3.4%   0.012±0.023  0.944±0.014  0.831±0.118
TOP K CONF SUM      86.7±4.4%   0.026±0.057  0.916±0.005  0.773±0.097
CONF RATIO          83.6±6.4%   0.001±0.005  0.905±0.050  0.711±0.102
LOGIT MEAN          61.7±20.5%  0.446±0.268  0.845±0.193  0.698±0.175
LOGIT STD           28.3±16.3%  0.812±0.166  0.324±0.256  0.137±0.019

A.4.3. SIGNAL IMPORTANCE FOR THE PREDICTION CORRECTNESS ESTIMATOR

To analyze the importance of individual signals used to estimate prediction correctness, we present the ANOVA results on FMoW-WILDS in Table 7. All signals, except for class prob ratio, show extremely high F-values with corresponding p-values essentially zero, indicating their strong statistical significance in explaining the variance in prediction correctness. The most valuable signals, as indicated by the highest F-values in Table 7, are logit max, energy, margin loss, and logit diff top2. For certain signals, the sign of the logistic regression coefficient matches our expectations, with a higher logit max value, an increase in the logit difference between the predicted class and the runner-up (logit diff top2), or low energy indicating a correct prediction. Interestingly, however, we also observe that for features such as conf max, the sign is negative, indicating that lower confidence is indicative of a higher likelihood of correct prediction. While this seems counterintuitive at first, it should be noted that for a large majority of samples this signal is 1 and is hence heavily

Table 6. Table showcasing the mean accuracy and calibration metrics (ECE, MCE, RMSCE) for prediction correctness estimators trained on different signals, with 95% confidence intervals. Suitability Filter refers to the classifier C trained using all available signals.
The metrics evaluate the classifiers' prediction quality and their calibration over 3 random splits of the FMoW-WILDS ID train and validation data splits.

SIGNALS             ACCURACY   ECE          MCE          RMSCE
SUITABILITY FILTER  77.5±0.3%  0.021±0.007  0.055±0.021  0.027±0.006
LOGIT MAX           76.6±0.9%  0.027±0.017  0.068±0.050  0.033±0.020
ENERGY              76.3±0.9%  0.029±0.012  0.075±0.054  0.035±0.017
MARGIN LOSS         75.5±0.6%  0.021±0.003  0.060±0.021  0.028±0.002
LOGIT DIFF TOP2     75.5±0.6%  0.021±0.003  0.060±0.021  0.028±0.002
CONF ENTROPY        74.4±0.9%  0.089±0.007  0.208±0.055  0.113±0.009
CONF STD            73.2±0.6%  0.104±0.020  0.263±0.068  0.134±0.013
CONF MAX            72.7±0.5%  0.116±0.014  0.260±0.056  0.143±0.015
LOSS                72.0±0.4%  0.121±0.012  0.265±0.036  0.148±0.022
TOP K CONF SUM      67.7±1.8%  0.146±0.026  0.301±0.066  0.177±0.030
LOGIT MEAN          67.1±2.3%  0.081±0.033  0.268±0.081  0.119±0.046
CONF RATIO          61.1±2.9%  0.144±0.008  0.320±0.008  0.187±0.001
LOGIT STD           59.8±2.4%  0.132±0.070  0.294±0.209  0.167±0.108

Table 7. ANOVA results showing the significance of individual signals in predicting model correctness. Signals are ordered by decreasing F-value, which measures the variance explained by each signal relative to the residual variance. We also include the sign of the model's coefficients for each signal, indicating whether a given feature positively or negatively influences the prediction correctness estimate.

SIGNAL           F-VALUE  P-VALUE     REL.
LOGIT MAX        2090.61  0           +
ENERGY           2051.45  0           −
MARGIN LOSS      1982.44  0           −
LOGIT DIFF TOP2  1978.44  0           +
CONF ENTROPY     1390.30  0           −
CONF STD         1232.76  0           −
CONF MAX         1108.32  0           −
LOSS             934.26   0           −
LOGIT MEAN       755.29   0           −
TOP K CONF SUM   281.76   0           +
LOGIT STD        116.16   9.18·10^−27 −
CONF RATIO       12.05    5.21·10^−4  +

concentrated around 0 after normalization. The contribution from this signal is thus mostly relevant in cases where the maximum confidence is below 1 anyway, in which case it seems that a higher confidence can be indicative of incorrect predictions.

SHAP (SHapley Additive exPlanations) is a model-agnostic method for interpreting machine learning models by assigning each feature a contribution value to the model's prediction. It calculates Shapley values based on cooperative game theory, ensuring that the contribution of each feature is fairly distributed by considering all possible feature combinations and their impact on the prediction. As can be seen in Figure 8, the signals deemed most predictive of prediction correctness are the same ones as identified by the ANOVA analysis in Table 7.

A.4.4. CHOICE OF MODEL ARCHITECTURE FOR PREDICTION CORRECTNESS ESTIMATOR

In Table 8, we compare the accuracy of different prediction correctness estimators on the FMoW-WILDS dataset.
We evaluate a range of classifiers, including simple models like Logistic Regression and more complex architectures such as Single-Layer and Two-Layer Neural Networks. We observe that logistic regression performs as well as more complex models, delivering high accuracy and low expected calibration error. While this may seem surprising, it is important to note that the suitability signals are already non-linear transformations of the model's output logits. Since these transformations capture the key relationships needed for our task, using more complex models capable of learning additional non-linear patterns, such as neural networks, does not provide any further benefit.

Figure 8. SHAP analysis for the prediction correctness estimator on FMoW-WILDS.

Table 8. Table showcasing the mean accuracy and calibration metrics (ECE, MCE, RMSCE) for various classifiers, with 95% confidence intervals. The metrics evaluate the classifiers' prediction quality and their calibration over 3 random splits of the FMoW-WILDS ID train and validation splits.

CLASSIFIER                    ACCURACY   ECE          MCE          RMSCE
LOGISTIC REGRESSION           77.2±1.4%  0.022±0.004  0.055±0.040  0.027±0.007
GRADIENT BOOSTING CLASSIFIER  77.1±1.3%  0.020±0.018  0.062±0.102  0.027±0.030
SINGLE-LAYER NEURAL NETWORK   77.1±1.2%  0.037±0.006  0.086±0.017  0.046±0.006
SUPPORT VECTOR MACHINE        77.1±0.9%  0.077±0.034  0.232±0.147  0.107±0.050
RANDOM FOREST                 76.5±0.9%  0.031±0.007  0.067±0.068  0.037±0.016
TWO-LAYER NEURAL NETWORK      75.7±2.3%  0.040±0.006  0.087±0.012  0.048±0.002
DECISION TREE                 70.7±0.9%  0.111±0.021  0.161±0.050  0.123±0.029
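As a concrete illustration of the pipeline behind Table 8, the sketch below derives a small subset of the suitability signals discussed above (logit max, logit diff top2, energy) from raw logits and fits a plain gradient-descent logistic regression as the prediction correctness estimator C. This is a minimal pure-Python sketch on assumed toy data, not the paper's implementation:

```python
import math

def suitability_signals(logits):
    # A subset of the per-sample signals discussed in Appendix A.4.3 (illustrative).
    top = sorted(logits, reverse=True)
    logit_max = top[0]
    logit_diff_top2 = top[0] - top[1]
    energy = -math.log(sum(math.exp(z) for z in logits))
    return [logit_max, logit_diff_top2, energy]

def fit_logistic(X, y, lr=0.1, steps=500):
    # Plain gradient-descent logistic regression, standing in for the
    # estimator C; y holds 1 for correct and 0 for incorrect predictions.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(steps):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            g = p - yi  # gradient of the log loss w.r.t. the pre-sigmoid score
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict_correctness(w, b, logits):
    x = suitability_signals(logits)
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))
```

Averaging `predict_correctness` over a batch of unlabeled user samples yields the estimated user accuracy that the suitability filter compares against test performance.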
Budget-Adaptive Adapter Tuning in Orthogonal Subspaces for Continual Learning in LLMs

Zhiyi Wan1, Wanrou Du1, Liang Li2, Miao Pan3, Xiaoqi Qin1
1Beijing University of Posts and Telecommunications  2Pengcheng Laboratory  3University of Houston
{wzy10, wanroudu, xiaoqiqin}@bupt.edu.cn, lil03@pcl.ac.cn, mpan2@uh.edu

Abstract

Large language models (LLMs) often suffer from catastrophic forgetting in continual learning (CL) scenarios, where performance on previously learned tasks degrades severely while training on sequentially arriving tasks. Although pioneering CL approaches using orthogonal subspaces can mitigate task interference, they typically employ fixed budget allocation, neglecting the varying complexity across tasks and layers. Besides, recent budget-adaptive tuning methods for LLMs often adopt multi-stage paradigms that decouple optimization and budget allocation. Such decoupling results in potential misalignment, which hinders those approaches' practical application in CL scenarios. To address these limitations, we propose OA-Adapter, a novel parameter-efficient approach for continual learning in LLMs that unifies dynamic budget adaptation with orthogonal subspace learning in a single end-to-end training stage. Specifically, OA-Adapter introduces a dynamic bottleneck dimension adaptation mechanism that simultaneously allocates an efficient parameter budget and optimizes task objectives without misalignment. To effectively preserve previously acquired knowledge while coordinating with the dynamic budget allocation, orthogonal constraints are applied specifically between the parameter subspace of the current task and the dynamically allocated parameter subspaces of historical tasks. Experimental results on continual learning benchmarks demonstrate that OA-Adapter outperforms state-of-the-art methods in both accuracy and parameter efficiency, achieving higher average accuracy while using 58.5% fewer parameters on the standard CL benchmark.
1 Introduction

Recent advances in large language models (LLMs) have transformed artificial intelligence by demonstrating remarkable capabilities across diverse domains, from text generation to logical reasoning [1, 2]. However, real-world deployment demands that LLMs continually adapt to evolving user needs and emerging tasks while retaining previously acquired knowledge, a prerequisite for sustainable lifelong learning [3, 4]. Parameter-efficient fine-tuning (PEFT) methods, such as adapter modules [5] and low-rank adaptation (LoRA) [6], enable task-specific adaptation by updating only 0.01%−4% of model weights [7]. While originally designed to reduce computational costs for single-task tuning [8, 9], these methods struggle in continual learning with sequentially arriving tasks. Sequentially tuning to arriving tasks induces catastrophic forgetting: severe degradation of performance on prior tasks [10]. One intuitive solution is to store task-specific adapters for each new task. However, this approach consumes substantial storage resources and leads to considerable inflexibility in multi-task deployments. Alternatively, retraining models with archived historical and additional new data necessitates frequent model updates and large data repositories [11]. Both strategies are prohibitively costly and impractical in resource-constrained environments.

Preprint. Under review. arXiv:2505.22358v1 [cs.LG] 28 May 2025

To efficiently adapt LLMs to downstream tasks while preserving previously acquired knowledge, researchers have proposed continual learning (CL) methods, as discussed in Appendix A.1. However, most CL approaches operate within shared parameter spaces across tasks, which inherently induces cross-task interference [12, 11, 13–19].
Furthermore, unlike conventional CL benchmarks that typically handle tasks with limited distribution shifts (e.g., incremental image classification), continual learning for LLMs often deals with substantially divergent task distributions, thus significantly amplifying interference when tuning in shared parameter
spaces. Some methods further dynamically construct task-specific parameters for new knowledge integration to resolve this problem, but they heavily rely on explicit task identifiers during inference [20–24]. Recent research efforts in orthogonal subspace learning [25–27] offer a promising alternative by restricting task-specific updates to mutually orthogonal parameter subspaces, decoupling optimization directions across tasks and thereby eliminating interference and task-ID dependency. However, existing methods typically rely on a fixed budget allocation, assigning the same subspace dimensionality to every task and layer. This rigid strategy overlooks the heterogeneity of task complexity and layer-specific adaptation needs, leading to inefficient parameter utilization: allocating excessive resources to simple tasks while under-allocating resources to more complex ones. Such inflexible allocation hinders LLMs' continuous adaptation capabilities in practice.

To achieve dynamic budget allocation, emerging budget-adaptive PEFT methods like AdaLoRA [28], ElaLoRA [29], and DiffoRA [30] propose multi-stage paradigms with sequential optimization and budget adjustment phases, as detailed in Appendix A.1. Such decoupled optimization may create misalignment between fine-tuning objectives and budget allocation. Moreover, the inherent complexity of multi-stage designs introduces substantial computational overhead and engineering challenges, limiting their practicality for continual learning systems.

To address these issues, we propose OA-Adapter, a novel parameter-efficient approach for continual learning in LLMs. Instead of manually assigning a fixed budget, OA-Adapter automatically adjusts the parameter budget for each task and layer based on the task difficulty and model capacity. To the best of our knowledge, this is the first work to integrate budget adaptation into parameter-efficient fine-tuning for continual learning in LLMs.
Our key contributions are as follows:
•We propose OA-Adapter, a novel parameter-efficient approach for continual learning in LLMs that unifies dynamic budget adaptation with orthogonal subspace learning in a single end-to-end training stage.
•We design a dynamic bottleneck dimension adaptation mechanism that simultaneously allocates an efficient parameter budget and optimizes task objectives without misalignment.
•We establish orthogonal constraints between the parameter subspace of the current task and the dynamically allocated parameter subspaces of historical tasks, effectively preserving previously acquired knowledge while coordinating with the dynamic dimension adaptation.
•Experimental results demonstrate that OA-Adapter outperforms state-of-the-art methods in both accuracy and parameter efficiency. OA-Adapter achieves higher average accuracy with 58.5% fewer parameters on the standard CL benchmark and maintains its advantages on a larger benchmark comprising 15 tasks.

2 Methodology

In this section, we introduce OA-Adapter, a novel framework for continual learning in LLMs that simultaneously improves parameter efficiency and mitigates catastrophic forgetting in a single end-to-end training stage, as illustrated in Fig. 1. We first describe its architectural design, including core components and computation flow. Then, we analyze the mathematical foundations of the dynamic bottleneck dimension adaptation, demonstrating how trainable thresholds enable bidirectional activation and deactivation of dimensions during a single training phase. Finally, we formalize the orthogonal parameter subspace constraints mechanism and explain how it works in concert with the dynamic bottleneck dimension adaptation to achieve parameter-efficient continual learning.

Figure 1: The OA-Adapter framework for LLM continual learning. Each task-specific
OA-Adapter module (task t) comprises three core components: (1) a down-projection layer W_1^{(t)}, (2) a trainable diagonal mask Γ^{(t)} with trainable threshold τ^{(t)}, and (3) an up-projection layer W_2^{(t)}. The dynamic masking mechanism enables bidirectional dimension adaptation through activation/deactivation of latent dimensions. Orthogonal subspace constraints are enforced between the column space of the t-th task parameters Col(W_2^{(t)}) and the dynamically allocated parameter subspaces of historical tasks Col(W̃_2^{(s)}) (for s < t). Here, W̃_2^{(s)} incorporates only the activated dimensions from the s-th task.

2.1 Module Structure

Standard Adapter. When adapting pre-trained language models (PLMs) to downstream tasks, traditional full-parameter fine-tuning proves both parameter-inefficient and computationally expensive. To enable efficient adaptation, adapter modules inject lightweight trainable parameters into PLMs while keeping the original weights frozen. These modules employ a bottleneck architecture to minimize trainable parameters, consisting of three layers: 1) a down-projection layer that reduces the original d-dimensional representation x to a lower dimension r, 2) a nonlinear activation function f(·), and 3) an up-projection layer that restores the features to dimension d. The architecture ensures near-zero initialization of the projection layers while maintaining a skip connection to preserve the original features during initial training stages. For an input representation x ∈ R^d, the adapter's output y ∈ R^d can be formalized as:

    y = x + W_2 · f(W_1 · x + b_1) + b_2,    (1)

where W_1 ∈ R^{r×d} and W_2 ∈ R^{d×r} denote the down-projection and up-projection matrices, and b_1 ∈ R^r and b_2 ∈ R^d denote the corresponding bias terms, with bottleneck dimension r ≪ d.

OA-Adapter.
Building upon the standard Adapter's bottleneck architecture, OA-Adapter introduces two structural modifications: 1) the removal of bias terms in projection layers to create a bias-free parameter space containing only linear transformations, and 2) the replacement of static non-linear activations with a trainable diagonal masking matrix Γ that dynamically adjusts the bottleneck dimension, as detailed in Section 2.2. These modifications enable the enforcement of orthogonal parameter subspace constraints, as detailed in Section 2.3, to mitigate cross-task interference, and allow co-optimization of budget adaptation with continual learning in a single training phase. Specifically, the forward computation of OA-Adapter operates as follows:

    y = x + W_2 · Γ · W_1 · x,    (2)

where W_1 ∈ R^{r_max×d} and W_2 ∈ R^{d×r_max} denote the down-projection and up-projection matrices, respectively, and Γ ∈ R^{r_max×r_max} is a trainable diagonal masking matrix. Here, r_max ≪ d represents the pre-defined maximum bottleneck dimension.

2.2 Dynamic Bottleneck Dimension Adaptation

Adaptation Mechanism. To dynamically allocate the parameter budget, we adjust the effective bottleneck dimensions of OA-Adapter using a trainable diagonal masking matrix Γ ∈ R^{r_max×r_max}: Γ = diag(γ). The sparsity of the vector γ ∈ R^{r_max} is controlled via a soft thresholding mechanism applied to a trainable vector g ∈ R^{r_max}. Specifically, each diagonal entry γ_i is computed as:

    γ_i = soft(g_i; τ) = sign(g_i) · max(|g_i| − τ, 0),    (3)

where τ > 0 is a trainable threshold that dynamically modulates the sparsity level of Γ throughout the training process. The projection path of OA-Adapter can then be equivalently reformulated as:

    W_2 · Γ · W_1 = Σ_{i=1}^{r_max} γ_i · W_2[:, i] ⊗ W_1[i, :],    (4)

where ⊗ denotes the outer product. This decomposition clearly demonstrates how each γ_i dynamically adjusts the contribution of the i-th latent dimension pair: when |g_i| ≤ τ, we have γ_i = 0, causing both the i-th column of W_2 and the i-th row of W_1 to be disabled, effectively deactivating the corresponding dimension.
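Equations (2)–(3) can be sketched in a few lines of pure Python. This is a minimal illustration with plain lists in place of tensors, not the authors' implementation:

```python
import math

def soft_threshold(g, tau):
    # Eq. (3): gamma_i = sign(g_i) * max(|g_i| - tau, 0)
    return [math.copysign(max(abs(gi) - tau, 0.0), gi) for gi in g]

def oa_adapter_forward(x, W1, W2, g, tau):
    # Eq. (2): y = x + W2 @ diag(gamma) @ W1 @ x  (bias-free, no nonlinearity)
    gamma = soft_threshold(g, tau)
    h = [sum(W1[i][j] * x[j] for j in range(len(x))) for i in range(len(g))]  # down-projection
    h = [gi * hi for gi, hi in zip(gamma, h)]                                 # dynamic mask
    delta = [sum(W2[d][i] * h[i] for i in range(len(g))) for d in range(len(x))]  # up-projection
    return [xd + dd for xd, dd in zip(x, delta)]
```

When τ exceeds every |g_i|, all entries of γ are zero and the module reduces to the identity skip connection; the effective bottleneck dimension r_eff is simply the count of non-zero entries of γ.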
This mechanism adaptively controls the bottleneck dimension through r_eff = ‖γ‖_0, where ‖γ‖_0 represents the count of non-zero entries in γ.

Gradient Analysis. Our method's
bidirectional dimension adaptation capability, enabled by the trainable threshold τ, offers critical advantages. When |g_i| ≤ τ, the corresponding dimension pair is deactivated by setting γ_i = 0. This zeros the i-th diagonal entry of the masking matrix Γ, effectively removing that dimension's contribution in forward propagation. However, this operation also blocks gradient flow from the downstream loss L to g_i when γ_i = 0. To clarify why g_i becomes non-trainable in such cases, consider the gradient calculation via the chain rule:

    ∂L/∂g_i = (∂L/∂γ_i) · (∂γ_i/∂g_i).    (5)

The derivative ∂γ_i/∂g_i of the soft thresholding function equals sign(g_i) when |g_i| > τ, but critically becomes 0 when |g_i| ≤ τ. This implies that deactivated dimensions (where |g_i| ≤ τ) produce zero gradients in Equation (5) due to ∂γ_i/∂g_i = 0, blocking gradient updates through g_i. With a fixed threshold τ, such dimensions would remain permanently disabled throughout training. Crucially, our method implements τ as a learnable parameter shared across all dimensions within each OA-Adapter module. Thus, the gradient of τ with respect to the total loss L is derived through the chain rule as:

    ∂L/∂τ = Σ_{i=1}^{r_max} (∂L/∂γ_i) · (∂γ_i/∂τ),    (6)

where ∂γ_i/∂τ = −sign(g_i) when |g_i| > τ and 0 otherwise. This derivative relationship ensures that threshold updates are primarily governed by dimensions exceeding the current τ. As τ evolves during training, dimensions previously deactivated with |g_i| ≤ τ may become reactivated when they satisfy |g_i| > τ under the updated threshold, thereby reactivating their corresponding projection paths. This bidirectional adaptation mechanism automatically suppresses dimensions while maintaining their potential for reactivation in later training iterations. The bidirectional nature of this dynamic parameter budget adaptation approach ensures optimal parameter allocation that continuously adapts to the evolving requirements of sequential tasks in continual learning.
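The two derivatives above can be written out directly. In this pure-Python sketch, `dL_dgamma` stands for the upstream gradients ∂L/∂γ_i; the helper names are ours:

```python
import math

def g_gradients(dL_dgamma, g, tau):
    # Eq. (5): dL/dg_i = (dL/dgamma_i) * sign(g_i) if |g_i| > tau, else 0
    return [di * math.copysign(1.0, gi) if abs(gi) > tau else 0.0
            for di, gi in zip(dL_dgamma, g)]

def tau_gradient(dL_dgamma, g, tau):
    # Eq. (6): dL/dtau = sum_i (dL/dgamma_i) * (-sign(g_i)) over active dims
    return sum(di * -math.copysign(1.0, gi)
               for di, gi in zip(dL_dgamma, g) if abs(gi) > tau)
```

Note that deactivated entries contribute nothing to either gradient, while the shared τ still receives updates from all active entries, which is exactly what allows previously pruned dimensions to be reactivated once τ decreases.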
2.3 Orthogonal Parameter Subspace Constraints for Continual Learning

Continual Learning Setup. Continual learning focuses on incrementally acquiring knowledge from evolving data distributions of sequential tasks while mitigating catastrophic forgetting of previously acquired knowledge. Formally, models are trained on a sequential stream of tasks denoted as {D_1, D_2, ..., D_t}. Each task D_t = {(x_t^i, y_t^i)}_{i=1}^{n_t} consists of input instances x_t^i ∈ X_t paired with corresponding labels y_t^i ∈ Y_t, where X_t and Y_t represent the task-specific input and label spaces. During the training phase for task D_t, model parameters Φ are updated exclusively using data from D_t. The objective of continual learning can be formalized as optimizing:

    max_Φ Σ_{t=1}^{T} Σ_{(x_t^i, y_t^i) ∈ D_t} log P_Φ(y_t^i | x_t^i).    (7)

Orthogonal Parameter Subspace Constraints. Catastrophic forgetting arises when task-specific adaptations overwrite parameters critical for previous tasks. To mitigate this, we introduce orthogonality constraints that enforce parameter updates across tasks to occupy mutually independent subspaces. Let ΔΦ_k = W_2^{(k)} Γ^{(k)} W_1^{(k)} represent the OA-Adapter's parameter update for the k-th task. We have:

    ΔΦ_k = W̃_2^{(k)} · W_1^{(k)} = (W_2^{(k)} · Γ^{(k)}) · W_1^{(k)}    (8)

Here, the columns of W̃_2^{(k)} serve as orthogonal basis vectors spanning the parameter update subspace for the k-th task, while W_1^{(k)} determines how these basis vectors are combined. We therefore formally define the task-specific parameter subspace as the column space of W̃_2^{(k)}, which intrinsically aligns with the activated dimensions for the k-th task through the dimension-selective masking operation of Γ^{(k)}. Thus, we enforce strict orthogonality on new OA-Adapter parameters across sequential tasks, ensuring new task adaptations occupy parameter subspaces orthogonal to previous
https://arxiv.org/abs/2505.22358v1
tasks’ frozen parameter subspaces. Formally, the constraints for the t-th task is defined as: ⟨W(t) 2[:, i],fW(s) 2[:, j]⟩= 0,∀i, j, s < t (9) The columns of fW(t) 2inherit directional properties from W(t) 2, ensuring orthogonal relationships persist regardless of dynamic dimension activation patterns. These asymmetric orthogonality con- straints enable simultaneous optimization of dynamic bottleneck dimension adaptation and historical knowledge preservation. To formalize this approach, we incorporate an orthogonality regularization term into the optimization objective. Specifically, the pairwise orthogonality loss between current tasktand each historical task s < t is quantified as: L(s,t) orth=X i,jD W(t) 2[:, i],fW(s) 2[:, j]E2 (10) Minimizing the loss term L(s,t) orthdrives the inner product ⟨W(t) 2[:, i],fW(s) 2[:, j]⟩toward zero, enforcing parameter subspace orthogonality. The complete training objective, integrating both task-specific performance and orthogonality constraints, is formulated as: Ltotal=L(t) task+λorth·X s<tL(s,t) orth(11) where L(t) taskrepresents the primary loss for task t, and λorthis a hyperparameter controlling the strength of orthogonal regularization. 3 Experiments 3.1 Experimental Settings. Datasets. We evaluate our approach using two CL benchmarks for LLMs. The first is the standard CL benchmark [ 31], which comprises 5 text classification datasets: AG News, Amazon Reviews, Yelp Reviews, DBpedia, and Yahoo Answers. The second is a continual learning benchmark consisting of a larger number of tasks with 15 datasets [ 20]. This benchmark includes five tasks from the standard CL benchmark, four from GLUE benchmark (MNLI, QQP, RTE, SST2) [ 32], five from SuperGLUE benchmark (WiC, CB, COPA, MultiRC, BoolQA) [ 33] and the IMDB movie reviews dataset [ 34]. Following the methodology of [ 20], we randomly select 1000 samples per class for training. 
The task details and the training sequences of tasks used in our experiments are provided in Appendix A.5.

Metrics. Let a_{i,j} denote the test accuracy on the i-th task after training on the j-th task. To evaluate performance, we use the mean accuracy over all tasks after completing training on the final task, defined as \frac{1}{T}\sum_{i=1}^{T} a_{i,T}, where T is the total number of tasks.

Table 1: Testing performance on two standard CL benchmarks with T5-large. Orders 1-3 belong to the standard CL benchmark; Orders 4-6 belong to the large-number-of-tasks benchmark.

| Method | Order-1 | Order-2 | Order-3 | Average | Order-4 | Order-5 | Order-6 | Average |
|---|---|---|---|---|---|---|---|---|
| SeqFT | 18.9 | 24.9 | 41.7 | 28.5 | 7.4 | 7.4 | 7.5 | 7.4 |
| EWC | 48.7 | 47.7 | 54.5 | 50.3 | 45.3 | 44.5 | 45.6 | 45.1 |
| LwF | 54.4 | 53.1 | 49.6 | 52.3 | 50.1 | 43.1 | 47.4 | 46.9 |
| Inc-Adapter | 57.5 | 47.8 | 66.1 | 57.1 | 54.3 | 46.1 | 58.1 | 52.8 |
| Replay | 55.2 | 56.9 | 61.3 | 57.8 | 55.0 | 54.6 | 53.1 | 54.2 |
| L2P | 60.3 | 61.7 | 61.1 | 60.7 | 57.5 | 53.8 | 56.9 | 56.1 |
| LFPT5 | 68.6 | 72.4 | 76.9 | 72.6 | 69.4 | 67.8 | 68.6 | 68.6 |
| O-Adapter | 73.3 | 73.4 | 73.1 | 73.3 | 69.3 | 61.8 | 65.7 | 65.6 |
| O-LoRA | 75.0 | 75.4 | 75.6 | 75.3 | 71.2 | 63.8 | 70.6 | 68.7 |
| OA-Adapter | 75.7 | 76.2 | 76.1 | 76.0 | 70.9 | 65.2 | 71.4 | 69.2 |
| ProgPrompt | 75.1 | 75.0 | 75.2 | 75.1 | 78.0 | 77.7 | 77.9 | 77.9 |
| PerTaskFT | 70.0 | 70.0 | 70.0 | 70.0 | 78.1 | 78.1 | 78.1 | 78.1 |
| MTL | 80.0 | 80.0 | 80.0 | 80.0 | 76.5 | 76.5 | 76.5 | 76.5 |

Baselines. We compare our method against various CL baseline approaches. SeqFT [35] trains all model parameters on a sequence of tasks without any regularization or replaying of samples from the previous
tasks. EWC [36] fine-tunes the whole model with a regularization loss that prevents updates to parameters that could interfere with previously learned tasks. LwF [37] constrains the shared representation layer to remain similar to its state before learning the new task. Inc-Adapter trains new Adapter parameters on a sequential series of tasks without any constraints or additional mechanisms. Replay fine-tunes the whole model with a memory buffer, replaying samples from old tasks when learning new tasks to avoid forgetting. L2P [38] uses the input to dynamically select and update prompts from a prompt pool in an instance-wise fashion. LFPT5 [39] continuously trains a soft prompt that simultaneously learns to solve the tasks and to generate training samples, which are subsequently used in experience replay. O-Adapter trains new Adapter parameters on a sequential series of tasks with orthogonal parameter subspace constraints. O-LoRA [27] trains new LoRA parameters on a sequential series of tasks in orthogonal subspaces while fixing the LoRA matrices of previous tasks. ProgPrompt [20] adopts a task-specific soft prompt for each distinct task, sequentially appending it to previously learned prompts. In essence, it trains an individual model per task, leveraging the task ID to select the appropriate model during inference. PerTaskFT trains a separate model for each task. MTL trains a single model on all tasks simultaneously, as in multi-task learning. This approach represents the theoretical upper bound for continual learning performance with a single model, as it maintains access to the entire task distribution throughout training, thus eliminating the fundamental forgetting challenge in continual learning scenarios. In selecting continual learning baselines, we specifically focused on methods that could be reliably reproduced to ensure fair comparison.
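The orthogonal parameter subspace constraint that separates O-Adapter and O-LoRA from their unconstrained counterparts (Equation (10) in Section 2.3) reduces to penalizing cross-task column inner products. A minimal NumPy sketch with illustrative shapes, not the actual T5 dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, r = 64, 8  # illustrative shapes only

# Frozen, masked projection basis of a previous task s, and the
# trainable projection matrix of the current task t.
W2_prev_masked = rng.normal(size=(d_model, r))  # ~ \tilde{W}_2^{(s)}, frozen
W2_curr = rng.normal(size=(d_model, r))         # ~ W_2^{(t)}, trainable

def orth_loss(W_curr, W_prev):
    # Eq. (10): sum of squared inner products over all column pairs,
    # i.e. the squared Frobenius norm of the cross-Gram matrix.
    return float(np.sum((W_curr.T @ W_prev) ** 2))

loss = orth_loss(W2_curr, W2_prev_masked)  # > 0 for random matrices

# Projecting the current columns onto the orthogonal complement of the
# previous task's column space drives the penalty to (numerically) zero.
Q, _ = np.linalg.qr(W2_prev_masked)
W2_proj = W2_curr - Q @ (Q.T @ W2_curr)
print(loss, orth_loss(W2_proj, W2_prev_masked))
```

In training, this penalty is added to the task loss with weight λ_orth (Equation (11)) rather than enforced by explicit projection; the projection here only demonstrates that the minimum of the penalty is an orthogonal subspace.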
Furthermore, to guarantee the authenticity and consistency of our experimental results, we reproduced and reran all baseline methods on our infrastructure.

3.2 Main Results

Our experiments employ the encoder-decoder T5 model [40], consistent with the baselines in CL for NLP. Following previous works [39, 27], we report results for three independent runs with different task orders on each CL benchmark in Table 1. All experimental results are reported as the average of three runs. For more detailed settings, refer to Appendix A.4.

Results on Standard Continual Learning Benchmarks. Across all task orders of the standard CL benchmark, OA-Adapter consistently surpasses previous methods by a significant margin, as illustrated in Table 1. Notably, compared to the prior state-of-the-art method O-LoRA, OA-Adapter achieves performance closer to MTL, the upper bound of continual learning with a single model. Additionally, performance shows a clear descending trend from OA-Adapter to O-Adapter to Inc-Adapter, providing intuitive evidence that orthogonal parameter subspace constraints effectively prevent catastrophic forgetting, while dynamic bottleneck dimension adaptation further enhances performance through efficient budget allocation. Furthermore, our approach significantly outperforms PerTaskFT, which indicates that OA-Adapter not only avoids catastrophic forgetting but also leverages knowledge from prior tasks to enhance the learning of new tasks.

Performance with a Large Number of Tasks. On the more challenging benchmark comprising 15 tasks, OA-Adapter consistently outperforms O-LoRA, as illustrated in Table 1. While ProgPrompt shows higher accuracy, it requires task identifiers during inference and
maintains separate parameters per task, fundamentally limiting its generalization to unseen tasks and its practical deployment in real-world LLM applications. Notably, all continual learning methods still trail behind PerTaskFT and MTL, highlighting that continual learning over a large number of tasks remains a significant challenge.

Parameter Efficiency Analysis. As established in Section 2.2, OA-Adapter leverages dynamic parameter budget allocation to achieve enhanced parameter efficiency. To quantify this advantage, we compare parameter utilization between OA-Adapter and O-LoRA across various initial budget conditions, as illustrated in Table 2. For OA-Adapter, the budget allocation determines the bottleneck dimension distribution, while for O-LoRA it determines the module's intrinsic rank. Remarkably, OA-Adapter achieves superior performance while using 46.6% to 58.5% fewer parameters than O-LoRA's fixed-budget approach. Moreover, OA-Adapter maintains consistently strong performance across all tested initial budget settings, demonstrating robust dynamic allocation capabilities. These results highlight our method's ability to adapt parameter budget allocation to task-specific requirements, providing substantial efficiency benefits over static parameter budget allocation approaches.

Table 2: Comparisons of parameter efficiency between OA-Adapter and O-LoRA.

| Initial Budget | Method | Avg Final Budget | Params | Avg Performance |
|---|---|---|---|---|
| 16 | O-LoRA | 16 | 4.72M | 75.3 |
| 16 | OA-Adapter | 9.95 | 1.96M (−58.5%) | 76.0 (+0.7) |
| 8 | O-LoRA | 8 | 2.36M | 74.5 |
| 8 | OA-Adapter | 6.05 | 1.18M (−50.0%) | 74.7 (+0.2) |
| 4 | O-LoRA | 4 | 1.18M | 73.8 |
| 4 | OA-Adapter | 3.18 | 0.63M (−46.6%) | 74.1 (+0.3) |

3.3 Discussions

Occurrence and Mitigation of Catastrophic Forgetting. We first demonstrate the occurrence and mitigation of catastrophic forgetting on Order-1 of the standard CL benchmark. As shown in Figure 2, without orthogonal parameter subspace constraints we observe a significant decline on each task after its training phase.
Performance on Task 2 deteriorates to nearly zero by the end of the subsequent two tasks, while performance on Task 1 and Task 3 decreases significantly from the levels reached at the end of their respective training phases. In contrast, performance with orthogonal parameter subspace constraints remains largely preserved by the end of Task 4, with the most severe degradation limited to a 14% performance loss on Task 2. These results demonstrate that severe forgetting occurs during multi-task training and confirm that orthogonal parameter subspace constraints effectively and consistently mitigate such forgetting. Similar trends are observed under the other task orders, as detailed in Appendix A.2. Additionally, as expected, each task reaches high performance more rapidly during its corresponding training phase without orthogonal subspace constraints, though the final performance is not substantially higher than with the constraints. This occurs because the model has greater flexibility in its parameter update directions when it is not constrained to preserve previous knowledge, and thus finds good solutions more easily. At the same time, this demonstrates that while orthogonal subspace constraints limit the model's choice of update directions, they still leave sufficient capacity for subsequent tasks. Interestingly, tasks exhibit brief performance recovery at the beginning of subsequent training phases before experiencing extended forgetting. Performance on Task 1 recovers to nearly 100% at the start of Task 3 and to approximately 60% during Task 4. Despite performance on Task 2 declining to near zero by the end of Task 3, it improves rapidly during early Task 4 training. The
phenomenon resembles human memory: knowledge seemingly forgotten through disuse often requires minimal effort to reactivate.

Figure 2: Occurrence and mitigation of catastrophic forgetting during sequential training following Order-1 across multiple tasks. Solid lines represent models with orthogonal parameter subspace constraints, while dashed lines indicate models without. Each color corresponds to a specific task's test accuracy. Colored background shading denotes the training phase for each respective task, with the X-axis scaled proportionally to accommodate varying training step durations.

This suggests that apparent catastrophic forgetting merely masks a latent recovery potential within the model, indicating that knowledge representations remain partially preserved despite significant performance degradation.

Heterogeneous Budget Requirements Across Tasks and Layers. Intuitively, adapting to individual downstream datasets requires varying parameter budgets across different tasks and layers. To validate this, we analyze budget allocation patterns in CL scenarios, as shown in Figure 3. Our results reveal heterogeneous budget requirements across tasks and layer positions, confirming that optimal parameter allocation cannot follow uniform rules but demands task-specific consideration. Notably, in CL scenarios, the parameter matrices for the initial task exhibit significantly higher sparsity compared to subsequent tasks. This pattern supports our hypothesis that initial tasks primarily leverage capabilities inherent in the pretrained model, while later tasks must additionally preserve knowledge from preceding tasks, necessitating more complex parameter spaces. A comprehensive analysis is provided in Appendix A.3. These findings validate the necessity of adaptive budget allocation for CL based on the characteristics of layer, task, and training sequence.
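The parameter savings quoted for OA-Adapter (46.6% to 58.5%) follow directly from the Params column of Table 2; a quick arithmetic check:

```python
# (O-LoRA params, OA-Adapter params) in millions per initial budget,
# taken directly from Table 2.
rows = {16: (4.72, 1.96), 8: (2.36, 1.18), 4: (1.18, 0.63)}

for budget, (olora_m, oa_m) in sorted(rows.items(), reverse=True):
    saving = 1.0 - oa_m / olora_m
    print(f"initial budget {budget}: {saving:.1%} fewer parameters")
    # prints 58.5%, 50.0%, 46.6% for budgets 16, 8, 4
```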
Budget Adaptation Mechanism Analysis. To assess the impact of our threshold strategy, we compare two policies: (a) a fixed, non-learnable threshold, and (b) our proposed learnable threshold, which adapts across layers and tasks during training, as described in Section 2.2. Both strategies are evaluated with the T5-large model across three task orders on the standard continual learning benchmark. The results in Table 3 show that the dynamic threshold consistently outperforms the fixed threshold. This confirms our analysis in Section 2.2: the dynamic threshold mechanism enables bidirectional adjustment of budget allocation without introducing complex mechanisms or additional computational overhead, thereby enhancing flexibility in the optimization process.

Pre-trained Model Analysis. We investigate performance across varying model scales (T5-base, T5-large, T5-XL) on the standard continual learning benchmark, evaluating both our method and O-LoRA across three task orders. The results in Table 4 show that OA-Adapter's average accuracy consistently improves as the parameter size increases, which suggests that our approach effectively leverages the increased representational capacity of larger models.
The consistent superiority across different scales indicates that OA-Adapter's mechanism provides effective protection against catastrophic forgetting while enabling precise task-specific optimization, regardless of the underlying model architecture's complexity. Moreover, OA-Adapter consistently outperforms O-LoRA across all model scales.

Figure 3: Final dimensions after sequential training following Order-1 with OA-Adapter across four text classification datasets (i.e., DBpedia, Amazon, Yahoo, AG News). The X-axis is the index of T5-large layers, and the Y-axis indicates the different layers OA-Adapter applies to.

Table 3: Comparisons of threshold strategies.

| Initial Value | Strategy | Order 1 | Order 2 | Order 3 | Avg |
|---|---|---|---|---|---|
| 1e-3 | fixed | 71.5 | 71.1 | 71.4 | 71.3 |
| 1e-3 | dynamic | 73.0 | 76.4 | 74.6 | 74.7 |
| 1e-4 | fixed | 73.7 | 73.1 | 71.4 | 72.7 |
| 1e-4 | dynamic | 75.7 | 75.6 | 75.1 | 75.5 |
| 1e-5 | fixed | 72.8 | 73.5 | 72.3 | 72.9 |
| 1e-5 | dynamic | 74.8 | 74.9 | 73.9 | 74.5 |

Table 4: Comparisons of model scales.

| Model | Method | Order 1 | Order 2 | Order 3 | Avg |
|---|---|---|---|---|---|
| T5-base | O-LoRA | 73.9 | 74.8 | 74.1 | 74.3 |
| T5-base | OA-Adapter | 74.7 | 74.8 | 74.5 | 74.7 |
| T5-large | O-LoRA | 75.0 | 75.4 | 75.6 | 75.3 |
| T5-large | OA-Adapter | 75.7 | 76.2 | 76.1 | 76.0 |
| T5-XL | O-LoRA | 77.9 | 78.5 | 77.4 | 77.9 |
| T5-XL | OA-Adapter | 78.0 | 78.2 | 77.9 | 78.0 |

4 Conclusion

In this paper, we introduce OA-Adapter, a novel parameter-efficient approach for continual learning in LLMs that combines dynamic budget adaptation and orthogonal subspace learning in a single end-to-end training stage. Our comprehensive experiments demonstrate OA-Adapter's consistent superiority over existing methods across multiple benchmarks while using significantly fewer parameters. The observed heterogeneity in optimal parameter allocation across tasks and layers validates the necessity of our budget-adaptive approach. As the first work to integrate budget adaptation into parameter-efficient fine-tuning for continual learning in LLMs, OA-Adapter establishes a new paradigm that jointly optimizes parameter budget allocation and knowledge preservation. This advancement paves the way for more scalable, efficient, and effective adaptation of LLMs to evolving real-world applications.

References

[1] D. Zhou, H. Sun, J. Ning, H. Ye, and D.
Zhan, “Continual learning with pre-trained models: A survey,” in Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI 2024, Jeju, South Korea, August 3-9, 2024. ijcai.org, 2024, pp. 8363–8371. [Online]. Available: https://www.ijcai.org/proceedings/2024/924

[2] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample, “Llama: Open and efficient foundation language models,” CoRR, vol. abs/2302.13971, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2302.13971

[3] Z. Xi, W. Chen, X. Guo, W. He, Y. Ding, B. Hong, M. Zhang, J. Wang, S. Jin, E. Zhou, R. Zheng, X. Fan, X. Wang, L. Xiong, Y. Zhou, W. Wang, C. Jiang, Y. Zou, X. Liu, Z. Yin, S. Dou, R. Weng, W. Qin, Y. Zheng, X. Qiu, X. Huang, Q. Zhang, and T. Gui, “The rise and potential of large language model based agents: a survey,” Sci. China Inf. Sci., vol. 68, no. 2, 2025. [Online]. Available: https://doi.org/10.1007/s11432-024-4222-0

[4] X. Wang, Y. Zhang, T. Chen, S. Gao, S. Jin, X. Yang, Z. Xi, R. Zheng, Y. Zou, T. Gui, Q. Zhang, and X. Huang, “TRACE: A comprehensive benchmark for continual learning in large language models,” CoRR, vol. abs/2310.06762, 2023. [Online]. Available: https://doi.org/10.48550/arXiv.2310.06762

[5] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. de Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly, “Parameter-efficient transfer learning for NLP,” in Proceedings of the 36th International Conference on Machine Learning, ICML
2019, 9-15 June 2019, Long Beach, California, USA, ser. Proceedings of Machine Learning Research, K. Chaudhuri and R. Salakhutdinov, Eds., vol. 97. PMLR, 2019, pp. 2790–2799. [Online]. Available: http://proceedings.mlr.press/v97/houlsby19a.html

[6] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, “Lora: Low-rank adaptation of large language models,” in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [Online]. Available: https://openreview.net/forum?id=nZeVKeeFYf9

[7] C. C. S. Balne, S. Bhaduri, T. Roy, V. Jain, and A. Chadha, “Parameter efficient fine tuning: A comprehensive analysis across applications,” CoRR, vol. abs/2404.13506, 2024. [Online]. Available: https://doi.org/10.48550/arXiv.2404.13506

[8] X. Zhou, J. He, Y. Ke, G. Zhu, V. Gutiérrez-Basulto, and J. Z. Pan, “An empirical study on parameter-efficient fine-tuning for multimodal large language models,” in Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, L. Ku, A. Martins, and V. Srikumar, Eds. Association for Computational Linguistics, 2024, pp. 10057–10084. [Online]. Available: https://doi.org/10.18653/v1/2024.findings-acl.598

[9] Z. Chen, Z. Liu, K. Wang, and S. Lian, “Reparameterization-based parameter-efficient fine-tuning methods for large language models: A systematic survey,” in Natural Language Processing and Chinese Computing - 13th National CCF Conference, NLPCC 2024, Hangzhou, China, November 1-3, 2024, Proceedings, Part III, ser. Lecture Notes in Computer Science, D. F. Wong, Z. Wei, and M. Yang, Eds., vol. 15361. Springer, 2024, pp. 107–118. [Online]. Available: https://doi.org/10.1007/978-981-97-9437-9_9

[10] M. McCloskey and N. J. Cohen, “Catastrophic interference in connectionist networks: The sequential learning problem,” in Psychology of Learning and Motivation. Elsevier, 1989, vol.
24, pp. 109–165.

[11] H. Shi, Z. Xu, H. Wang, W. Qin, W. Wang, Y. Wang, and H. Wang, “Continual learning of large language models: A comprehensive survey,” CoRR, vol. abs/2404.16789, 2024. [Online]. Available: https://doi.org/10.48550/arXiv.2404.16789

[12] L. Wang, X. Zhang, H. Su, and J. Zhu, “A comprehensive survey of continual learning: Theory, method and application,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 46, no. 8, pp. 5362–5383, 2024. [Online]. Available: https://doi.org/10.1109/TPAMI.2024.3367329

[13] A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ranzato, “On tiny episodic memories in continual learning,” arXiv preprint arXiv:1902.10486, 2019.

[14] H. Shi and H. Wang, “A unified approach to domain incremental learning with memory: Theory and algorithm,” in Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., 2023. [Online]. Available: http://papers.nips.cc/paper_files/paper/2023/hash/30d046e94d7b8037d6ef27c4357a8dd4-Abstract-Conference.html

[15] S. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, “icarl: Incremental classifier and representation learning,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017, pp. 5533–5542. [Online]. Available: https://doi.org/10.1109/CVPR.2017.587

[16] R. Aljundi, F. Babiloni, M. Elhoseiny, M. Rohrbach, and T. Tuytelaars, “Memory aware synapses: Learning what (not) to forget,” in Computer Vision - ECCV 2018 - 15th
European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part III, ser. Lecture Notes in Computer Science, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds., vol. 11207. Springer, 2018, pp. 144–161. [Online]. Available: https://doi.org/10.1007/978-3-030-01219-9_9

[17] J. Schwarz, W. Czarnecki, J. Luketina, A. Grabska-Barwinska, Y. W. Teh, R. Pascanu, and R. Hadsell, “Progress & compress: A scalable framework for continual learning,” in Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, ser. Proceedings of Machine Learning Research, J. G. Dy and A. Krause, Eds., vol. 80. PMLR, 2018, pp. 4535–4544. [Online]. Available: http://proceedings.mlr.press/v80/schwarz18a.html

[18] S. Rongali, A. Jagannatha, B. P. S. Rawat, and H. Yu, “Continual domain-tuning for pretrained language models,” arXiv preprint arXiv:2004.02288, 2020.

[19] G. Lin, H. Chu, and H. Lai, “Towards better plasticity-stability trade-off in incremental learning: A simple linear connector,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022. IEEE, 2022, pp. 89–98. [Online]. Available: https://doi.org/10.1109/CVPR52688.2022.00019

[20] A. Razdaibiedina, Y. Mao, R. Hou, M. Khabsa, M. Lewis, and A. Almahairi, “Progressive prompts: Continual learning for language models,” in The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. [Online]. Available: https://openreview.net/forum?id=UJTgQBc91_

[21] J. Jang, S. Ye, S. Yang, J. Shin, J. Han, G. Kim, S. J. Choi, and M. Seo, “Towards continual knowledge learning of language models,” in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [Online]. Available: https://openreview.net/forum?id=vfsRB5MImo9

[22] X. Jin, D. Zhang, H. Zhu, W.
Xiao, S. Li, X. Wei, A. O. Arnold, and X. Ren, “Lifelong pretraining: Continually adapting language models to emerging corpora,” in Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10-15, 2022, M. Carpuat, M. de Marneffe, and I. V. M. Ruíz, Eds. Association for Computational Linguistics, 2022, pp. 4764–4780. [Online]. Available: https://doi.org/10.18653/v1/2022.naacl-main.351

[23] C. Li and H. Lee, “Examining forgetting in continual pre-training of aligned large language models,” CoRR, vol. abs/2401.03129, 2024. [Online]. Available: https://doi.org/10.48550/arXiv.2401.03129

[24] Y. Yan, K. Xue, X. Shi, Q. Ye, J. Liu, and T. Ruan, “AF adapter: Continual pretraining for building chinese biomedical language model,” in IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2023, Istanbul, Turkiye, December 5-8, 2023, X. Jiang, H. Wang, R. Alhajj, X. Hu, F. Engel, M. Mahmud, N. Pisanti, X. Cui, and H. Song, Eds. IEEE, 2023, pp. 953–957. [Online]. Available: https://doi.org/10.1109/BIBM58861.2023.10385733

[25] M. Farajtabar, N. Azizan, A. Mott, and A. Li, “Orthogonal gradient descent for continual learning,” in The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy], ser. Proceedings of Machine Learning Research, S. Chiappa and R. Calandra, Eds., vol. 108. PMLR, 2020, pp. 3762–3773. [Online]. Available: http://proceedings.mlr.press/v108/farajtabar20a.html

[26] Y. Guo, W. Hu, D. Zhao, and B. Liu, “Adaptive orthogonal projection for batch and online continual learning,”
in Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 - March 1, 2022. AAAI Press, 2022, pp. 6783–6791. [Online]. Available: https://doi.org/10.1609/aaai.v36i6.20634

[27] X. Wang, T. Chen, Q. Ge, H. Xia, R. Bao, R. Zheng, Q. Zhang, T. Gui, and X. Huang, “Orthogonal subspace learning for language model continual learning,” in Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, H. Bouamor, J. Pino, and K. Bali, Eds. Association for Computational Linguistics, 2023, pp. 10658–10671. [Online]. Available: https://doi.org/10.18653/v1/2023.findings-emnlp.715

[28] Q. Zhang, M. Chen, A. Bukharin, N. Karampatziakis, P. He, Y. Cheng, W. Chen, and T. Zhao, “Adalora: Adaptive budget allocation for parameter-efficient fine-tuning,” 2023. [Online]. Available: https://arxiv.org/abs/2303.10512

[29] H. Chang, Z. Ma, M. Ma, Z. Qi, A. Sabot, H. Jiang, and H. Kung, “Elalora: Elastic & learnable low-rank adaptation for efficient model fine-tuning,” arXiv preprint arXiv:2504.00254, 2025.

[30] T. Jiang, H. Wang, and C. Yuan, “Diffora: Enabling parameter-efficient LLM fine-tuning via differential low-rank matrix adaptation,” CoRR, vol. abs/2502.08905, 2025. [Online]. Available: https://doi.org/10.48550/arXiv.2502.08905

[31] X. Zhang, J. J. Zhao, and Y. LeCun, “Character-level convolutional networks for text classification,” in Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Eds., 2015, pp. 649–657. [Online]. Available: https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html

[32] A. Wang, A.
Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman, “GLUE: A multi-task benchmark and analysis platform for natural language understanding,” in 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. [Online]. Available: https://openreview.net/forum?id=rJ4km2R5t7

[33] A. Wang, Y. Pruksachatkun, N. Nangia, A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman, “Superglue: A stickier benchmark for general-purpose language understanding systems,” Advances in Neural Information Processing Systems, vol. 32, 2019.

[34] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, “Learning word vectors for sentiment analysis,” in The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, D. Lin, Y. Matsumoto, and R. Mihalcea, Eds. The Association for Computer Linguistics, 2011, pp. 142–150. [Online]. Available: https://aclanthology.org/P11-1015/

[35] C. de Masson d’Autume, S. Ruder, L. Kong, and D. Yogatama, “Episodic memory in lifelong language learning,” in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. B. Fox, and R. Garnett, Eds., 2019, pp. 13122–13131. [Online]. Available: https://proceedings.neurips.cc/paper/2019/hash/f8d2e80c1458ea2501f98a2cafadb397-Abstract.html

[36] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., “Overcoming catastrophic forgetting in
neural networks,” Proceedings of the National Academy of Sciences, vol. 114, no. 13, pp. 3521–3526, 2017.

[37] Z. Li and D. Hoiem, “Learning without forgetting,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 12, pp. 2935–2947, 2018. [Online]. Available: https://doi.org/10.1109/TPAMI.2017.2773081

[38] Z. Wang, Z. Zhang, C. Lee, H. Zhang, R. Sun, X. Ren, G. Su, V. Perot, J. G. Dy, and T. Pfister, “Learning to prompt for continual learning,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022. IEEE, 2022, pp. 139–149. [Online]. Available: https://doi.org/10.1109/CVPR52688.2022.00024

[39] C. Qin and S. Joty, “Lfpt5: A unified framework for lifelong few-shot language learning based on prompt tuning of t5,” arXiv preprint arXiv:2110.07298, 2021.

[40] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a unified text-to-text transformer,” J. Mach. Learn. Res., vol. 21, pp. 140:1–140:67, 2020. [Online]. Available: https://jmlr.org/papers/v21/20-074.html

[41] Z. Liu, J. Lyn, W. Zhu, X. Tian, and Y. Graham, “ALoRA: Allocating low-rank adaptation for fine-tuning large language models,” in Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), K. Duh, H. Gomez, and S. Bethard, Eds. Mexico City, Mexico: Association for Computational Linguistics, Jun. 2024, pp. 622–641. [Online]. Available: https://aclanthology.org/2024.naacl-long.35/

[42] N. Ding, X. Lv, Q. Wang, Y. Chen, B. Zhou, Z. Liu, and M. Sun, “Sparse low-rank adaptation of pre-trained language models,” in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, H. Bouamor, J. Pino, and K. Bali, Eds. Singapore: Association for Computational Linguistics, Dec. 2023, pp. 4133–4145. [Online].
Available: https://aclanthology.org/2023.emnlp-main.252/ 12 [43] D. Rao, F. Visin, A. A. Rusu, R. Pascanu, Y . W. Teh, and R. Hadsell, “Continual unsupervised representation learning,” in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada , H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. B. Fox, and R. Garnett, Eds., 2019, pp. 7645–7655. [Online]. Available: https://proceedings.neurips.cc/paper/2019/hash/861578d797aeb0634f77aff3f488cca2-Abstract.html 13 A Appendix A.1 Related Work. Continual Learning for LLMs. Existing continual learning (CL) techniques can be broadly classified into several categories, including replay-based, regularization-based, architecture-based, optimization-based, and representation-based methods [ 12,11]. Among these, replay-based, regularization-based, and architecture-based methods have been most extensively applied in the context of continual learning for LLMs. Replay-based methods [ 13–15] maintain a memory buffer containing data from previous tasks, which is used to retrain the model alongside new data. While these methods are valued for their simplicity, stability, and strong performance, their core opera- tion—storing and replaying data—introduces significant privacy concerns and leads to substantial storage overheads, especially given that language datasets often contain personal or sensitive informa- tion. Regularization-based methods [ 16–19] introduce a regularization term that penalizes significant weight deviations across tasks, thereby attempting to balance performance between old and new tasks. However, the complexity of these
methods increases rapidly as the number of tasks grows, which often results in a sharp decline in performance retention on older tasks. These approaches primarily focus on learning all incremental tasks within a shared parameter space, which is a major contributor to task interference. In contrast, many architecture-based methods [20–24] address this challenge by incorporating task-specific components, isolated parameters, or dedicated pathways within the model. These strategies essentially learn task-specific expert modules for different tasks. While they can effectively mitigate task interference, they often rely heavily on explicit task IDs during the inference stage, which limits their ability to generalize across different tasks. Building on a similar principle, recent advancements have explored a promising direction that retains generalization capacity while reducing task interference without requiring explicit task IDs. These methods, such as OGD [25], AOP [26], and O-LoRA [27], constrain weight updates for each task to lie within mutually orthogonal subspaces of the high-dimensional parameter space. By doing so, they effectively decouple the optimization directions of different tasks, thereby mitigating interference and preserving performance on previously learned tasks. However, a common limitation of many existing orthogonal subspace methods is that they typically adopt a fixed budget for all tasks and layers. This uniform treatment overlooks the inherent variability in complexity and importance across different tasks and network layers, which is particularly pronounced in large models. To fully exploit the potential of methods based on orthogonal subspace learning, it is essential to develop mechanisms that can dynamically adapt budget allocation based on task-specific and layer-specific requirements.

Adaptive Budget Tuning.
Although existing budget-adaptive tuning methods for LLMs differ in implementation details, they commonly adopt a multi-stage design in which budget adaptation is handled in a separate stage from model optimization. This multi-stage paradigm is often designed to identify optimal configurations for different tasks or layers in large-scale models. For example, DiffoRA [30] trains a LoRA module for each layer in the first stage, and then selects a subset of these modules based on a learned Difference-aware Adaptive Matrix in a subsequent stage. ElaLoRA [29] involves a “Warm-up” fine-tuning stage, followed by a “Dynamic Rank Adjustment” stage to adapt LoRA module ranks, and finally a “Stabilization” fine-tuning stage. AdaLoRA [28] injects rank pruning operations after training iterations by applying singular value decomposition (SVD) to selectively retain low-rank components. While these methods achieve strong performance on single-task static datasets, their multi-stage design introduces substantial computational and engineering overhead that compromises practicality and scalability, and it creates a misalignment between the optimization objective and the budget-adaptation criterion that ultimately hinders truly optimal solutions. Moreover, their design permits only unidirectional rank reduction, which precludes bidirectional budget adaptation. These characteristics make it particularly challenging to extend such methods to the CL setting, where the training process must be efficient, unified, and adaptive to sequentially arriving tasks. Additionally, recent methods such as ALoRA [41] and ElaLoRA [29] eliminate parameters based on specific importance metrics. While these approaches offer strong interpretability, their elimination ratios rely
on empirical settings rather than adapting to the intrinsic differences between sequentially arriving tasks. Imitating the formulation of AdaLoRA, SoRA [42] introduces a proximal gradient-based update rule with strong theoretical grounding. However, it relies on a fixed, hyperparameter-determined sparsity threshold, which makes optimization challenging. Furthermore, SoRA’s scheduling algorithm for sparsity indicators is better suited to exploring the balance between performance and parameter efficiency after convergence in single-task scenarios than to capturing task-sequence characteristics in multi-task environments.

[Figure 4: Occurrence and mitigation of catastrophic forgetting during sequential training following Order-2 across multiple tasks. Plot of Tasks 1–4 over training steps; graphical content omitted.]

[Figure 5: Occurrence and mitigation of catastrophic forgetting during sequential training following Order-3 across multiple tasks. Plot of Tasks 1–4 over training steps; graphical content omitted.]

A.2 Occurrence and Mitigation of Catastrophic Forgetting.

To further validate the effectiveness and consistency of orthogonal parameter subspace constraints, we conduct sequential training following Order-2 and Order-3, with results illustrated in Figures 4 and 5, respectively. Consistent with the results reported in Section 3.3, we observe severe catastrophic forgetting in the absence of orthogonal constraints, especially for earlier tasks. In contrast, models trained with orthogonal parameter subspace constraints are able to preserve performance across all tasks to a much greater extent. Notably, although the specific tasks affected most by forgetting vary depending on the task order, the general trend holds: orthogonal constraints provide consistent mitigation of forgetting regardless of task permutation.
These results reinforce our earlier findings and highlight the robustness of orthogonal subspace regularization as a general mechanism for alleviating forgetting in continual learning scenarios.

A.3 Heterogeneous Budget Requirements Across Tasks and Layers.

As discussed in Section 3.3, we extend our analysis to investigate budget allocation in the context of continual learning. Here, we further present results under other task orders, as illustrated in Figures 6 and 7. These findings corroborate our analysis in Section 3.3: (a) the relationship between performance and parameter budget does not follow constant rules but rather necessitates case-specific consideration; (b) within continual learning scenarios, the first task primarily focuses on acquiring capabilities built upon the pretrained model, whereas subsequent tasks must additionally preserve knowledge from preceding tasks, thus requiring more nuanced fine-tuning. This further substantiates our analysis that different tasks in CL scenarios require varying budgets, and that allocating budgets according to training sequence and task characteristics is both necessary and justified.

A.4 Implementation Details.

All our experiments involving T5 models were performed on a server outfitted with four NVIDIA GeForce RTX 3090 GPUs, utilizing the DeepSpeed repository for implementation. Following previous studies [35, 43], for CL experiments, for each dataset we use the available validation set as a test set (since test data is not available) and hold out 500 samples from the
train set to construct the validation set. For every sequence of tasks across different orders, we trained the models for one epoch using a batch size of 32 (8 per GPU), a dropout rate of 0.1, and no weight decay. Across all experiments, we primarily used Adapter modules with a bottleneck dimension of 16, and applied a sparsification threshold chosen from {1e-3, 1e-4, 1e-5}. The learning rate was selected from {5e-3, 3e-3, 1e-3, 5e-4} depending on task characteristics. We applied an orthogonality regularization on the Adapter’s upsampling matrix with a coefficient λorth ∈ {0.5, 1, 5}, and used an additional coefficient λ2 ∈ {0, 0.1, 0.5} to scale the associated L2 loss term. This flexible configuration allowed us to balance knowledge retention and model sparsity across tasks, especially in long sequences with substantial distribution shifts.

To ensure experimental comparability and fair result comparison, we maintain consistency with O-LoRA [27] by adopting instruction tuning as the training paradigm across all experiments for both our method and other baselines, as shown in Table 5. This approach offers dual advantages: it incorporates human expertise for efficient learning while enabling models to better capture underlying principles through explicit guidance, thereby enhancing generalization capabilities. The consistent instruction-based framework allows for direct performance comparisons while leveraging the benefits of natural language supervision.

Table 5: Instructions for different tasks.

Task    | Prompt
NLI     | What is the logical relationship between the "sentence 1" and the "sentence 2"? Choose one from the option.
QQP     | Whether the "first sentence" and the "second sentence" have the same meaning? Choose one from the option.
SC      | What is the sentiment of the following paragraph? Choose one from the option.
TC      | What is the topic of the following paragraph? Choose one from the option.
BoolQA  | According to the following passage, is the question true or false? Choose one from the option.
MultiRC | According to the following passage and question, is the candidate answer true or false? Choose one from the option.
WiC     | Given a word and two sentences, whether the word is used with the same sense in both sentence? Choose one from the option.

[Figure 6: Final dimensions after sequential training following Order-2 with OA-Adapter. Heatmaps of final per-layer dimensions (enc.att, enc.ffn, dec.att, dec.ffn; layers 0–23) for dbpedia, amazon, agnews, and yahoo; numeric grids omitted.]
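The orthogonality regularization mentioned in A.4 (coefficient λorth on the Adapter’s upsampling matrix) can be sketched as a Frobenius-norm penalty; this is a generic illustration under assumed form lam * ||WᵀW − I||²_F, not the paper’s exact loss:

```python
# Generic sketch (not the paper's exact loss): an orthogonality penalty on a
# weight matrix W, lam * ||W^T W - I||_F^2, implemented in pure Python.

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def orth_penalty(w, lam=1.0):
    """Squared Frobenius norm of (W^T W - I), scaled by lam."""
    wtw = matmul(transpose(w), w)
    n = len(wtw)
    return lam * sum(
        (wtw[i][j] - (1.0 if i == j else 0.0)) ** 2
        for i in range(n)
        for j in range(n)
    )

# An orthonormal matrix incurs no penalty; a non-orthogonal one does.
print(orth_penalty([[1.0, 0.0], [0.0, 1.0]]))  # 0.0
print(orth_penalty([[1.0, 1.0], [0.0, 1.0]]))  # 3.0
```

In training, a term of this shape would simply be added to the task loss, scaled by λorth.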
[Figure 7: Final dimensions after sequential training following Order-3 with OA-Adapter. Heatmaps of final per-layer dimensions (enc.att, enc.ffn, dec.att, dec.ffn; layers 0–23) for yahoo, amazon, agnews, and dbpedia; numeric grids omitted.]

A.5 Datasets and Task Orders.

Table 6 shows details of the 15 datasets we used for our CL experiments, along with their evaluation metrics. Overall, we used datasets from the CL benchmark [31] and the GLUE [32] and SuperGLUE [33] benchmarks, and added the IMDB movie reviews dataset [34], following [20]. Table 7 shows details of the task orders used for our CL experiments.

Table 6: The details of the 15 datasets used in our CL experiments. NLI denotes natural language inference; QA denotes question answering. The first five tasks correspond to the standard CL benchmark; all
other tasks are used in long-sequence experiments.

Dataset | Category     | Task                      | Domain              | Metric
Yelp    | CL Benchmark | Sentiment Analysis        | Yelp Reviews        | Accuracy
Amazon  | CL Benchmark | Sentiment Analysis        | Amazon Reviews      | Accuracy
DBPedia | CL Benchmark | Topic Classification      | Wikipedia           | Accuracy
Yahoo   | CL Benchmark | Topic Classification      | Yahoo Q&A           | Accuracy
AG News | CL Benchmark | Topic Classification      | News                | Accuracy
MNLI    | GLUE         | NLI                       | Various             | Accuracy
QQP     | GLUE         | Paraphrase Detection      | Quora               | Accuracy
RTE     | GLUE         | NLI                       | News, Wikipedia     | Accuracy
SST-2   | GLUE         | Sentiment Analysis        | Movie Reviews       | Accuracy
WIC     | SuperGLUE    | Word Sense Disambiguation | Lexical Databases   | Accuracy
CB      | SuperGLUE    | NLI                       | Various             | Accuracy
COPA    | SuperGLUE    | QA                        | Blogs, Encyclopedia | Accuracy
BoolQA  | SuperGLUE    | Boolean QA                | Wikipedia           | Accuracy
MultiRC | SuperGLUE    | QA                        | Various             | Accuracy
IMDB    | SuperGLUE    | Sentiment Analysis        | Movie Reviews       | Accuracy

Table 7: Six different task sequence orders utilized in continual learning experiments. Orders 1–3 follow the standard continual learning benchmark as established by previous research, focusing on a more traditional task sequence. Orders 4–6, customized for long-sequence experimentation, encompass 15 tasks each and are structured according to the methodologies outlined in [20].

Order | Task Sequence
1 | dbpedia → amazon → yahoo → ag
2 | dbpedia → amazon → ag → yahoo
3 | yahoo → amazon → ag → dbpedia
4 | mnli → cb → wic → copa → qqp → boolqa → rte → imdb → yelp → amazon → sst-2 → dbpedia → ag → multirc → yahoo
5 | multirc → boolqa → wic → mnli → cb → copa → qqp → rte → imdb → sst-2 → yelp → amazon → ag → dbpedia → yahoo
6 | yelp → amazon → mnli → cb → copa → qqp → rte → imdb → sst-2 → dbpedia → ag → yahoo → multirc → boolqa → wic

A.6 Limitations and Further Research Directions.

While our method has demonstrated effectiveness in empirical evaluations, there are a few limitations and potential directions of research to consider. Firstly, although our method does not rely on task identification during inference, it still requires task identification during training.
Exploring methods for task-agnostic training would be a valuable future direction. Additionally, while our findings in Figure 2 reveal that models do not completely forget knowledge from previous tasks due to cross-task interference, the underlying mechanisms and potential ways to leverage this phenomenon remain unexplored in our current approach. This represents a promising direction for future continual learning research that warrants further investigation.
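The forgetting behavior analyzed in A.2 is commonly quantified with a metric like the following (a standard formulation from the CL literature, not necessarily this paper’s exact evaluation protocol):

```python
# A common way to quantify the forgetting discussed in A.2 (a standard CL
# metric, not necessarily the paper's exact protocol): for each task,
# forgetting = best accuracy ever reached during sequential training
# minus accuracy after the final training stage.

def forgetting(acc_history):
    """acc_history[t][k] = accuracy on task k after finishing stage t."""
    final = acc_history[-1]
    return [
        max(stage[k] for stage in acc_history) - final[k]
        for k in range(len(final))
    ]

# Toy example: task 0 peaks at 0.90, then drops to 0.60 after task 1 is
# learned, so it "forgot" about 0.30; task 1 keeps its best accuracy.
history = [
    [0.90, 0.00],  # accuracies after training on task 0
    [0.60, 0.85],  # accuracies after training on task 1
]
print(forgetting(history))
```

Under orthogonal subspace constraints, the per-task values of such a metric would be expected to stay close to zero, matching the trends in Figures 4 and 5.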
arXiv:2505.22368v1 [cs.AI] 28 May 2025

AgentDNS: A Root Domain Naming System for LLM Agents

Enfang Cui1, Yujun Cheng2, Rui She1, Dan Liu1, Zhiyuan Liang1, Minxin Guo1, Tianzheng Li1, Qian Wei1, Wenjuan Xing1, Zhijie Zhong3,4
1China Telecom Research Institute, Beijing, China
2School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
3School of Future Technology, South China University of Technology, Guangzhou, Guangdong, China
4Pengcheng Laboratory, Shenzhen, Guangdong, China
cuief@chinatelecom.cn, yjcheng@tsinghua.edu.cn, {sher, liud11, liangzy17, guomx3, litz4, weiq12, xingwj}@chinatelecom.cn, csemor@mail.scut.edu.cn

Abstract

The rapid evolution of Large Language Model (LLM) agents has highlighted critical challenges in cross-vendor service discovery, interoperability, and communication. Existing protocols like the model context protocol and the agent-to-agent protocol have made significant strides in standardizing interoperability between agents and tools, as well as communication among multiple agents. However, there remains a lack of standardized protocols and solutions for service discovery across different agent and tool vendors. In this paper, we propose AgentDNS, a root domain naming and service discovery system designed to enable LLM agents to autonomously discover, resolve, and securely invoke third-party agent and tool services across organizational and technological boundaries. Inspired by the principles of the traditional DNS, AgentDNS introduces a structured mechanism for service registration, semantic service discovery, secure invocation, and unified billing. We detail the architecture, core functionalities, and use cases of AgentDNS, demonstrating its potential to streamline multi-agent collaboration in real-world scenarios. The source code will be published on https://github.com/agentdns.

Introduction

In recent years, LLM agent (Luo et al.
2025) technology has been reshaping industries at an unprecedented pace. Leveraging natural language understanding, multi-modal interaction capabilities, and complex task orchestration, LLM agents have penetrated core sectors such as education (Chu et al. 2025), finance (Ding et al. 2024), and academia (Alzubi et al. 2025; Chen et al. 2024), driving intelligent transformation across domain-specific workflows. Market research indicates that the global LLM agent market is projected to exceed $50 billion by 2030 (MarketsandMarkets 2025). This growth stems from the flexibility of “natural language as instructions”: users can describe requirements in everyday language, enabling agents to automatically invoke toolchains, parse heterogeneous data, and complete end-to-end tasks. The dual drive of technological advancements and commercial adoption has positioned LLM agents as a foundational infrastructure for enterprise digital transformation.

Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: AgentDNS system and its relationship with the A2A and MCP protocols.

The current focus of technological evolution is shifting from enhancing single-agent capabilities to building multi-agent collaborative systems (Guo et al. 2024; Han et al. 2024), transcending the limitations of isolated intelligence. In industry practice, multiple agent interoperability and communication protocols (Yang et al. 2025) have been proposed, such as Anthropic’s model context protocol (MCP) (Hou et al. 2025), which standardizes tool invocation interfaces of diverse toolchains. Meanwhile, Google’s Agent-to-Agent (A2A) protocol (Google 2025) establishes task delegation mechanisms between agents, supporting cross-organizational workflow orchestration. The maturation of technical standards and open-source
frameworks is accelerating the development of an open, scalable multi-agent collaboration ecosystem.

In the future, we envision a world where agents can autonomously discover, communicate, and collaborate with one another without human intervention. Although protocols like MCP and A2A have effectively facilitated communication and collaboration between agents and external tools, as well as between agents themselves, there is still a lack of standardized protocols and systems for cross-vendor service naming, service discovery, authentication, and billing. As a result, agent-to-agent collaboration still demands significant manual effort, preventing the realization of true autonomous cooperation. The specific challenges are as follows:

• The Service Discovery Challenge: LLM agents typically generate an action plan, where each action may require calling external tools or agent services. However, services from different vendors are currently not standardized in naming or management, which forces developers to manually maintain service information for each tool or agent. This lack of standardization makes it impossible for LLM agents to autonomously discover other agents or tool services, hindering seamless integration and collaboration between agents across different platforms.

• The Interoperability Challenge: Currently, different vendors’ agents or tool services support various interoperability or communication protocols, with typical examples including MCP, A2A, and ANP (Agent Network Protocol Project 2025). We anticipate that more interoperability protocols will emerge in the future. However, LLM agents are unable to autonomously recognize and adapt to these differences, meaning they still require manual configuration and management. This lack of flexibility in handling diverse protocols limits seamless agent-to-agent and agent-to-tool communication across platforms.
• The Authentication and Billing Challenge: Cross-vendor collaboration is further complicated by security and authentication challenges. Each service provider typically requires proprietary API keys, necessitating manual configuration of multiple authentication credentials for agents. This adds significant overhead and disrupts seamless integration. In addition, billing systems are fragmented across vendors, requiring manual intervention for payment setup. As a result, agents are unable to autonomously discover and invoke new third-party paid agents or tool services without manual configuration.

To address these challenges, we propose AgentDNS, the first root domain naming and resolution system designed for LLM agents. Inspired by the Internet’s Domain Name System (DNS), AgentDNS introduces a unified namespace (agentdns://), natural language-based service discovery, and unified authentication and billing. As shown in Fig. 1, AgentDNS is compatible with protocols such as MCP and A2A, allowing them to coexist seamlessly. With AgentDNS, agents can autonomously discover services, authenticate, and interoperate seamlessly across organizational boundaries. AgentDNS redefines the multi-agent collaboration ecosystem through four key functions:

• Unified Namespace with Semantic Information: AgentDNS introduces a semantically rich naming scheme (e.g., agentdns://org/category/name) for agents and tool services, decoupling service identifier names from physical addresses such as URLs. This also enables the semantic embedding of agent capabilities into their identifier names, facilitating more efficient classification and retrieval of agent and tool services.

• Natural Language-Driven Service Discovery: Agents can interact with the AgentDNS root service using natural language queries to discover third-party agents or tool services.
They can obtain the corresponding service identifier names and related metadata, including physical addresses, capabilities, and communication protocols. Agents can also dynamically request the AgentDNS root service to resolve an identifier name and retrieve the latest metadata as needed.

• Protocol-Aware Interoperability: AgentDNS enables agents to dynamically discover the supported interoperability or communication protocols of third-party agents and tool services by resolving their identifier names into metadata. This metadata includes not only network addresses and capabilities, but also the specific protocols (e.g., MCP, A2A, ANP) each service supports. Based on this information, agents can autonomously select and adapt to the appropriate protocol for communication, eliminating the need for manual configuration.

• Unified Authentication and Billing: AgentDNS replaces fragmented API keys with a single-sign-on mechanism. Agents authenticate once with the AgentDNS root server to obtain time-bound access tokens, valid across all registered services. For billing, AgentDNS serves as a unified billing platform: users pre-fund accounts, usage costs are tracked and deducted in real time, and payments are automatically settled across vendors. This enables transparent billing and autonomous access to paid services by agents.

Related Work

LLM Agents

LLM agents have rapidly emerged as a pivotal research frontier in artificial intelligence, driven by their transformative potential to bridge human-AI collaboration and autonomous problem-solving. In industry, several LLM agents have been launched, such as Deep Research (OpenAI 2025), Manus (Monica 2025), and Cursor (Anysphere 2025). Unlike traditional AI systems constrained by predefined rules, LLM agents leverage the general-purpose reasoning, contextual understanding, and multi-task capabilities of LLMs to dynamically adapt to complex environments.
LLM agents have demonstrated broad application prospects across various fields. The future of LLM agents is expected to trend towards multi-agent collaboration. Researchers are increasingly interested in how to design efficient communication protocols and coordination mechanisms (Hou et al. 2025; Google 2025; Li et al. 2024; Marro et al. 2024) that enable seamless cooperation among agents. This collaborative approach is seen as a key direction for advancing the capabilities and applications of LLM agents in the coming years.

Agent Interaction Protocols

Model Context Protocol. The Model Context Protocol (MCP) (Hou et al. 2025) is an open standard developed by Anthropic, designed to facilitate seamless interactions between LLM models and external tools, data sources, and services. Inspired by the concept of a universal adapter, MCP aims to simplify the integration process, much like how a USB-C port allows various devices to connect effortlessly.

Figure 2: AgentDNS system architecture.

MCP operates on a client-server architecture. The AI application (such as a chatbot or an integrated development environment) acts as the host and runs an MCP client, while each external integration runs as an MCP server. The server exposes capabilities such as functions, data resources, or predefined prompts, and the client connects to it to utilize these capabilities. This design allows AI models to interact with external systems without directly accessing APIs, thereby enhancing security and reducing the complexity
of custom integrations.

Agent-to-Agent Protocol. The Agent-to-Agent (A2A) protocol (Google 2025) is introduced by Google, aimed at enabling seamless communication and collaboration between LLM agents, regardless of their underlying frameworks or vendors. A2A was developed in collaboration with over 50 technology partners, including major companies like Atlassian, Salesforce, SAP, and MongoDB. The protocol uses HTTP-based APIs and the JSON data format, ensuring compatibility and ease of integration with existing enterprise IT systems. It supports various communication patterns, including request-response, event-based communication, and streaming data exchange. A2A complements protocols like MCP, which focuses on providing tools and context for agents; A2A focuses on agent-to-agent communication, allowing agents to work together more effectively.

Domain Name System

The Domain Name System (DNS) (Danzig, Obraczka, and Kumar 1992; Cheshire and Krochmal 2013) serves as the critical naming and discovery infrastructure for the human internet, translating memorable domain names (e.g., example.com) into physical addresses (IP addresses) through its hierarchical, decentralized architecture. While DNS effectively decouples human-readable names from machine-level addressing, its design proves inadequate for the emerging agent Internet, where LLM agents require autonomous service discovery and interoperability. Traditional DNS lacks three critical capabilities essential for agent ecosystems: service discovery through natural language, querying of service metadata beyond physical addresses (including capabilities, protocols, etc.), and unified authentication and billing. These limitations necessitate AgentDNS, a purpose-built system that preserves DNS’s core benefits of naming and resolution while introducing agent-specific innovations.
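The service metadata discussed above (network addresses, capabilities, supported protocols, pricing) is what distinguishes AgentDNS resolution from DNS’s bare name-to-address mapping. A rough sketch of such a record, with field names that are assumptions based on this description rather than a published schema:

```python
# Hypothetical sketch of the metadata an AgentDNS resolution might return;
# field names are illustrative assumptions, not a published schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceRecord:
    identifier: str                 # e.g. agentdns://org/category/name
    endpoint: str                   # physical network address (URL)
    protocols: List[str] = field(default_factory=list)  # e.g. ["MCP", "A2A"]
    capabilities: str = ""          # natural-language capability description
    pricing: str = ""               # pricing information

# A client agent could pick the first candidate whose protocol list it
# supports, then call the endpoint with its AgentDNS-issued token.
record = ServiceRecord(
    identifier="agentdns://example-org/academic/search/papersearch",
    endpoint="https://api.example-org.com/papersearch",
    protocols=["MCP"],
    capabilities="Searches academic papers from a natural-language query.",
)
print("MCP" in record.protocols)  # True
```

This is only a data-shape illustration; the actual wire format used by AgentDNS is not specified in this excerpt.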
AgentDNS Method

AgentDNS System Overview

AgentDNS is a root service system for agent service naming, discovery, and resolution, enabling seamless service discovery, cross-vendor interoperability, and unified authentication and billing. As shown in Fig. 2, the AgentDNS root server is the central hub of the entire system, which connects agent users (e.g., Agent A) with service providers (e.g., Agent Service B, Tool Service D). The AgentDNS root server comprises components for service registration, a service proxy pool, service management, service search, service resolution, billing, and authentication. The core components are as follows:

• Service Registration Component: Agent or tool service vendors register their services through this component. The process begins with organization registration, where developers first create an organization account. Under the organization’s domain, they then set up a service category and name to generate a globally unique service identifier name (e.g., agentdns://org/category/name). Concurrently, developers must submit service metadata to bind with the identifier, including network addresses (e.g., endpoints, URLs), supported interoperability protocols (e.g., MCP, A2A), detailed service capabilities, etc. This metadata is securely stored in the AgentDNS database. During subsequent service discovery or resolution phases, the system performs semantic matching by analyzing the identifier’s category and the metadata. This ensures precise retrieval of services aligned with an agent’s operational requirements.

• Service Proxy Pool: After a vendor registers a service, AgentDNS creates a corresponding service proxy within the Service Proxy Pool. This proxy acts as an intermediary, forwarding service invocation requests from user agents to the vendor’s actual service endpoint. In other words, the user agent sends a service request to the AgentDNS
root server, which then routes the request to the appropriate vendor for execution. This design allows user agents to authenticate only once with AgentDNS, eliminating the need for separate registration and authentication with each individual vendor.

• Service Search Component: User agents can send natural language queries to the AgentDNS root server to discover relevant third-party agents or tool services. This component interprets the query and performs intelligent retrieval using a combination of keyword matching and retrieval-augmented generation (RAG) (Gao et al. 2023) techniques. Based on the search results, it returns a list of top-k candidate services that match the agent’s intent. Each result includes the service identifier name, physical endpoint, supported communication protocols, capability descriptions, and pricing information. The user agent can then evaluate these candidates and choose the most appropriate one for execution. Once selected, the agent can directly invoke the service by using the appropriate protocol and accessing the physical endpoint with an authentication token issued by AgentDNS.

Figure 3: AgentDNS service naming.

• Service Resolution Component: User agents can cache service identifier names and, during subsequent invocations, dynamically request the AgentDNS root server to resolve these identifiers and get the latest metadata as needed.

• Service Management Component: This component handles the lifecycle of these service proxies, including their creation, updates, and deletion, ensuring that the proxy infrastructure remains up-to-date and synchronized with the underlying services.

• Service Billing Component: This component is responsible for tracking and processing service invocation costs. Users only need to settle payments directly with AgentDNS, which then handles the backend settlement with individual vendors.
This design significantly simplifies the billing process for users by eliminating the need to manage multiple vendor-specific payment systems, enabling a streamlined and unified billing experience.

• Authentication Component: This component handles identity verification and access control for both user agents and service providers. Instead of requiring agents to authenticate separately with each vendor, AgentDNS offers a unified authentication mechanism. User agents authenticate once with the AgentDNS root server and receive a time-bound access token. This token can be used to securely access any registered third-party service without additional logins. By centralizing authentication, this component ensures secure, efficient, and scalable access across a heterogeneous agent ecosystem, while also reducing the operational burden on both users and service vendors.

Together, these components form the backbone of AgentDNS, providing a unified framework that supports natural language-driven discovery, protocol-aware interoperability, trustless authentication, and unified billing, paving the way for truly autonomous multi-agent ecosystems. Next, we provide a detailed introduction to AgentDNS's service naming, service discovery, service resolution, and unified authentication and billing mechanisms.

(a) AgentDNS service discovery. (b) AgentDNS service resolution.
Figure 4: AgentDNS service discovery and resolution.
Figure 5: AgentDNS unified authentication and billing.

Service Naming

The AgentDNS service naming system provides a structured and globally unique service identifier name for each registered agent or tool service. The identifier name follows the format shown in Fig. 3. The organization represents the name
of the registering entity, such as a company, university, or research lab. Each organization must go through a registration and verification process to ensure uniqueness and authenticity. The category denotes the functional domain or classification of the agent service. This can be chosen manually by the developer or automatically generated by AgentDNS, and it supports hierarchical structures, allowing for multi-level categories using slashes (e.g., academic/nlp/summarization). Finally, the name is the unique identifier for the specific agent within the organization and category. This name must be explicitly defined by the developer. Together, this structured naming convention ensures precise identification, facilitates organized discovery, and supports scalable service management within the AgentDNS ecosystem.

Service Discovery

The service discovery process is illustrated in Fig. 4a. In step 1, Agent A initiates a natural language query to the AgentDNS root server, describing the desired service. In the example, Agent A is looking for an intelligent agent capable of analyzing academic papers. In step 2, upon receiving the request, AgentDNS searches through its registry of available services to identify those with the required capabilities. It returns a list of service identifiers along with corresponding metadata, such as the proxy's physical address, supported protocols, pricing information, and more. This discovery process employs a hybrid retrieval mechanism that combines keyword matching and RAG. Specifically, we construct a knowledge base using the capability descriptions of registered services.

Figure 6: AgentDNS case study (an LLM-generated JSON action plan with per-step "step", "tool_required", and "tool_function" fields; service discovery returning agentdns://example/search/searchagent and agentdns://example/standard/standardagent together with metadata such as physical address, protocol, cost, and capability; and step-by-step action execution).
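The hybrid keyword-plus-embedding retrieval just described can be sketched as follows. This is a minimal illustration: a toy bag-of-words vector stands in for a real RAG embedding model, and the registry entries are hypothetical, modeled on the examples in the case-study figure.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a neural encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_search(query, services, top_k=2, alpha=0.5):
    """Rank registered services by a weighted mix of keyword overlap
    and embedding similarity over their capability descriptions."""
    scored = []
    for s in services:
        score = (alpha * keyword_score(query, s["capability"])
                 + (1 - alpha) * cosine(embed(query), embed(s["capability"])))
        scored.append((score, s))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [s for _, s in scored[:top_k]]

# Hypothetical registry entries modeled on the paper's examples.
registry = [
    {"name": "agentdns://example/search/searchagent",
     "capability": "keyword search on Google and Bing search engines"},
    {"name": "agentdns://example/standard/standardagent",
     "capability": "querying IEEE ITU-T and other standard organizations standards"},
]
top = hybrid_search("search Google and Bing for literature", registry, top_k=1)
```

A production knowledge base would replace the bag-of-words scorer with dense embeddings, but the weighted combination of the two signals is the essential idea.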
During service discovery, hybrid retrieval is performed over these capability descriptions to identify candidates that best match the user agent's intent. In step 3, after receiving the service information, Agent A uses the appropriate protocol and an authentication token issued by AgentDNS to directly access the physical proxy address and initiate a service call. Finally, in step 4, the AgentDNS proxy forwards the request to the actual service endpoint
hosted by the vendor, ensuring seamless interaction between Agent A and the service provider.

Service Resolution

As previously mentioned, user agents can cache service identifier names and request the AgentDNS root server for updated metadata when needed. This functionality helps reduce the frequency of accessing AgentDNS, improving response times and lowering operational costs. The service resolution process is illustrated in Fig. 4b. In step 1, agent vendors update the metadata associated with their agent services. In step 2, Agent A sends a resolution request to the AgentDNS root server, providing the cached service identifier name to retrieve the latest information. In step 3, AgentDNS locates the most recent metadata based on the identifier and returns it to Agent A, ensuring that the service invocation uses up-to-date information.

Unified Authentication and Billing

AgentDNS introduces a unified authentication and billing mechanism by acting as a proxy layer between user agents and third-party services. As shown in Fig. 5, when a user agent (e.g., Agent A) authenticates once with the AgentDNS root server using its own access key (Key A), it gains the ability to seamlessly invoke multiple external agent or tool services without needing to manage individual credentials for each provider. Internally, the AgentDNS root server maintains a service proxy pool that forwards user requests to the corresponding third-party services. For each third-party service, the proxy uses the appropriate authentication key (e.g., Key B, C, or D), which corresponds to the access control requirements of the service provider. This abstraction decouples the user agent from vendor-specific authentication logic. Moreover, billing is centralized: user agents are charged by AgentDNS based on their usage, while AgentDNS handles settlements with the respective third-party services.
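The proxy flow just described (one agent-side token, vendor keys held by AgentDNS, centralized metering) can be sketched as a small in-memory service. The class name, key strings, prices, and the stubbed forwarding call are illustrative assumptions, not the paper's implementation.

```python
import secrets
import time

class AgentDNSProxy:
    """Minimal sketch of a unified auth-and-billing proxy layer."""

    def __init__(self):
        self.vendor_keys = {}   # service id -> vendor-specific auth key
        self.prices = {}        # service id -> cost per call
        self.tokens = {}        # token -> (agent id, expiry timestamp)
        self.usage = {}         # agent id -> accumulated charges

    def register_service(self, service_id, vendor_key, price):
        self.vendor_keys[service_id] = vendor_key
        self.prices[service_id] = price

    def authenticate(self, agent_id, ttl=3600):
        # Issue a time-bound access token; the agent never sees vendor keys.
        token = secrets.token_hex(16)
        self.tokens[token] = (agent_id, time.time() + ttl)
        return token

    def invoke(self, token, service_id, request):
        agent_id, expiry = self.tokens.get(token, (None, 0))
        if agent_id is None or time.time() > expiry:
            raise PermissionError("invalid or expired token")
        # Forward to the vendor with the vendor's own key (stubbed here),
        # then bill the agent centrally.
        vendor_key = self.vendor_keys[service_id]
        self.usage[agent_id] = self.usage.get(agent_id, 0) + self.prices[service_id]
        return {"service": service_id, "auth": vendor_key[:4] + "...", "echo": request}

proxy = AgentDNSProxy()
proxy.register_service("agentdns://example/search/searchagent", "KEY-B-secret", price=1)
tok = proxy.authenticate("agent-a")
result = proxy.invoke(tok, "agentdns://example/search/searchagent", "query")
```

The single `usage` ledger is what makes settlement one-sided for the user: the proxy pool, not the agent, holds Key B/C/D.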
This model simplifies cross-vendor interoperability, enforces secure access, and enables consistent billing across a heterogeneous service ecosystem.

AgentDNS Case Study

In this section, we present a case study illustrating the interaction between an agent and the AgentDNS root server. The case demonstrates the complete agent workflow: from generating an action plan (Huang et al. 2024), to querying the AgentDNS root server for service discovery, and finally to executing the planned actions. The full process is illustrated in Fig. 6. After receiving a user request, such as "Help me research agent communication protocols and write a survey report", the agent first invokes an LLM to generate an action plan. As shown in Fig. 6, the generated plan in this case is structured in JSON format and consists of multiple steps. Each step includes a description of its purpose, whether it requires a service, and a natural language description of the desired service functionality. These services correspond to third-party agent or tool services. For example, Step 1 requires a search service to retrieve relevant literature by keyword, while Step 3 calls for a standards retrieval service to query documents from organizations like IEEE (IEEE Standards Association 2025) or ITU-T (International Telecommunication Union 2025). After generating the action plan, the agent submits a natural language query to the AgentDNS root server to discover suitable third-party services. For
instance, in Step 1, the agent sends the tool function description directly to AgentDNS, which uses intelligent retrieval methods to identify matching services. Suppose AgentDNS returns a service named agentdns://example/search/searchagent; it also provides metadata such as the physical endpoint, supported protocols, service cost, capabilities, and available APIs. The agent uses this information to invoke the selected third-party service. Following service discovery, the agent enters the action execution phase. During this stage, it executes the steps of the action plan in sequence. When a step requires a service, the agent uses the corresponding protocol to access the third-party service obtained from AgentDNS and passes the result to the next step. For steps that do not involve external services, the agent inputs the step's purpose description, along with previous outputs and prompt instructions, into the LLM for generation. This process continues until all steps in the action plan are completed. This case study presents a simplified example; in practice, the structure and format of an action plan can be adapted to suit different needs. Importantly, the third-party service descriptions within the action plan are expressed in natural language, which means they are not tightly coupled with specific service identifiers, tool names, or endpoint URLs. AgentDNS plays a critical role in decoupling the foundational agent model from vendor-specific details such as service names, tool identifiers, and physical addresses, enabling more flexible and scalable agent architectures.

Future opportunities

While AgentDNS addresses fundamental challenges in service discovery, interoperability, and billing in the agent ecosystem, numerous directions remain open for future exploration.
These include decentralized and federated architectures, AgentDNS-compatible agent planning LLMs, privacy-preserving and trusted discovery, and AgentDNS service discovery optimization. First, while the current design of AgentDNS adopts a centralized architecture, future iterations may benefit from decentralized or federated (Huang and Pierre 2024) architectures, such as blockchain-based designs (Li et al. 2021; Karaarslan and Adiguzel 2018). This would improve robustness, reduce the risk of single points of failure, and enhance trust in cross-organizational collaborations. Second, training and fine-tuning agent planning LLMs (Wang et al. 2023; Hu et al. 2024) specifically compatible with AgentDNS is also an important direction. This can involve constructing agent planning datasets and fine-tuning LLMs to enhance their compatibility with AgentDNS. Alternatively, reinforcement learning techniques (Wen et al. 2024; Jin et al. 2025; Qi et al. 2024; Peiyuan et al. 2024) can be used to train agents to autonomously explore and optimize action sequences, dynamically selecting and combining the various services registered in AgentDNS to maximize task success rates and efficiency. Third, security and privacy will remain central in cross-vendor agent collaboration. Future directions may involve privacy-preserving search and resolution using technologies such as homomorphic encryption (Buban et al. 2025), differential privacy, and secure multi-party computation. AgentDNS could also integrate trust and reputation systems to allow agents to evaluate service quality and security risks before invocation. Finally, the optimization of AgentDNS service discovery and retrieval remains a critical area for improving system performance and user
experience.

Conclusion

The rapid advancement of LLM agents has exposed critical gaps in cross-vendor service discovery, interoperability, and authentication, hindering the vision of autonomous multi-agent collaboration. This paper introduces AgentDNS, a unified root domain naming system designed to bridge these gaps by providing a semantically rich namespace, natural language-driven service discovery, protocol-aware interoperability, and trustless authentication and billing. By decoupling agent identifiers from physical addresses and embedding dynamic metadata resolution, AgentDNS enables agents to autonomously discover, resolve, and securely invoke services across organizational and technological boundaries. Our architecture and case studies demonstrate its potential to streamline multi-agent workflows, reduce manual overhead, and foster an open ecosystem for agent collaboration. Future work includes decentralized and federated architectures, AgentDNS-compatible agent planning LLMs, privacy-preserving and trusted discovery, and AgentDNS service discovery optimization.

Acknowledgments

This work was supported by the National Key R&D Program of China under Grant No. 2023YFB2904100.

References

Agent Network Protocol Project. 2025. Agent Network Protocol (ANP): Complete Guide to Agent Network Protocol. https://agentnetworkprotocol.com/en/docs/. Accessed: 2025-05-11.
Alzubi, S.; Brooks, C.; Chiniya, P.; Contente, E.; von Gerlach, C.; Irwin, L.; Jiang, Y.; Kaz, A.; Nguyen, W.; Oh, S.; et al. 2025. Open deep search: Democratizing search with open-source reasoning agents. arXiv preprint arXiv:2503.20201.
Anysphere. 2025. Cursor: The AI Code Editor. https://www.cursor.com/. Accessed: 2025-05-11.
Buban, J.; Zhang, H.; Angione, C.; Yang, H.; Farhan, A.; Sultanov, S.; Du, M.; Ma, X.; Wang, Z.; Zhao, Y.; et al. 2025. Encrypted Large Model Inference: The Equivariant Encryption Paradigm. arXiv preprint arXiv:2502.01013.
Chen, Z.; Liu, K.; Wang, Q.; Liu, J.; Zhang, W.; Chen, K.; and Zhao, F. 2024. Mindsearch: Mimicking human minds elicits deep ai searcher. arXiv preprint arXiv:2407.20183.
Cheshire, S.; and Krochmal, M. 2013. RFC 6763: DNS-Based Service Discovery. https://www.rfc-editor.org/rfc/rfc6763.html. Accessed: 2025-05-11.
Chu, Z.; Wang, S.; Xie, J.; Zhu, T.; Yan, Y.; Ye, J.; Zhong, A.; Hu, X.; Liang, J.; Yu, P. S.; et al. 2025. Llm agents for education: Advances and applications. arXiv preprint arXiv:2503.11733.
Danzig, P. B.; Obraczka, K.; and Kumar, A. 1992. An analysis of wide-area name server traffic: A study of the internet domain name system. In Conference proceedings on Communications architectures & protocols, 281–292.
Ding, H.; Li, Y.; Wang, J.; and Chen, H. 2024. Large language model agent in financial trading: A survey. arXiv preprint arXiv:2408.06361.
Gao, Y.; Xiong, Y.; Gao, X.; Jia, K.; Pan, J.; Bi, Y.; Dai, Y.; Sun, J.; Wang, H.; and Wang, H. 2023. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997, 2: 1.
Google. 2025. An open protocol enabling communication and interoperability between opaque agentic applications. https://github.com/google/A2A. Accessed: 2025-05-11.
Guo, T.; Chen, X.; Wang, Y.; Chang, R.; Pei, S.; Chawla, N. V.; Wiest, O.; and Zhang, X. 2024. Large language model based multi-agents: A survey of progress and challenges. arXiv preprint arXiv:2402.01680.
Han, S.; Zhang, Q.; Yao, Y.; Jin, W.; Xu, Z.; and He, C. 2024.
LLM multi-agent systems: Challenges and open problems. arXiv preprint arXiv:2402.03578.
Hou, X.; Zhao, Y.; Wang, S.; and Wang, H. 2025. Model context protocol (mcp): Landscape, security threats, and future research directions. arXiv preprint arXiv:2503.23278.
Hu, M.; Zhao, P.; Xu, C.; Sun, Q.; Lou, J.; Lin, Q.; Luo, P.; and Rajmohan, S. 2024. Agentgen: Enhancing planning abilities for large language model based agent via environment and task generation. arXiv preprint arXiv:2408.00764.
Huang, C.-K.; and Pierre, G. 2024. Aggregate Monitoring for Geo-Distributed Kubernetes Cluster Federations. IEEE Transactions on Cloud Computing.
Huang, X.; Liu, W.; Chen, X.; Wang, X.; Wang, H.; Lian, D.; Wang, Y.; Tang, R.; and Chen, E. 2024. Understanding the planning of LLM agents: A survey. arXiv preprint arXiv:2402.02716.
IEEE Standards Association. 2025. IEEE Standards Association Official Website. https://standards.ieee.org/. Accessed: 2025-05-11.
International Telecommunication Union. 2025. ITU-T: Telecommunication Standardization Sector. https://www.itu.int/en/ITU-T/Pages/Default.aspx. Accessed: 2025-05-11.
Jin, B.; Zeng, H.; Yue, Z.; Yoon, J.; Arik, S.; Wang, D.; Zamani, H.; and Han, J. 2025. Search-r1: Training llms to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516.
Karaarslan, E.; and Adiguzel, E. 2018. Blockchain based DNS and PKI solutions. IEEE Communications Standards Magazine, 2(3): 52–57.
Li, Y.; Du, Y.; Zhang, J.; Hou, L.; Grabowski, P.; Li, Y.; and Ie, E. 2024. Improving multi-agent debate with sparse communication topology. arXiv preprint arXiv:2406.11776.
Li, Z.; Gao, S.; Peng, Z.; Guo, S.; Yang, Y.; and Xiao, B. 2021. B-DNS: A secure and efficient DNS based on the blockchain technology. IEEE Transactions on Network Science and Engineering, 8(2): 1674–1686.
Luo, J.; Zhang, W.; Yuan, Y.; Zhao, Y.; Yang, J.; Gu, Y.; Wu, B.; Chen, B.; Qiao, Z.; Long, Q.; et al. 2025.
Large Language Model Agent: A Survey on Methodology, Applications and Challenges. arXiv preprint arXiv:2503.21460.
MarketsandMarkets. 2025. AI Agents Market. https://www.marketsandmarkets.com/Market-Reports/ai-agents-market-15761548.html. Accessed: 2025-05-11.
Marro, S.; La Malfa, E.; Wright, J.; Li, G.; Shadbolt, N.; Wooldridge, M.; and Torr, P. 2024. A scalable communication protocol for networks of large language models. arXiv preprint arXiv:2410.11905.
Monica. 2025. Leave it to Manus. https://manus.im. Accessed: 2025-05-11.
OpenAI. 2025. Introducing Deep Research. https://openai.com/index/introducing-deep-research/. Accessed: 2025-05-11.
Peiyuan, F.; He, Y.; Huang, G.; Lin, Y.; Zhang, H.; Zhang, Y.; and Li, H. 2024. AGILE: A Novel Reinforcement Learning Framework of LLM Agents. Advances in Neural Information Processing Systems, 37: 5244–5284.
Qi, Z.; Liu, X.; Iong, I. L.; Lai, H.; Sun, X.; Zhao, W.; Yang, Y.; Yang, X.; Sun, J.; Yao, S.; et al. 2024. WebRL: Training LLM Web Agents via Self-Evolving Online Curriculum Reinforcement Learning. arXiv preprint arXiv:2411.02337.
Wang, Z.; Cai, S.; Chen, G.; Liu, A.; Ma, X. S.; and Liang, Y. 2023. Describe, explain, plan and select: interactive planning with llms enables open-world multi-task agents. Advances in Neural Information Processing Systems, 36: 34153–34189.
Wen, M.; Wan, Z.; Wang, J.; Zhang, W.; and Wen, Y. 2024. Reinforcing
arXiv:2505.22370v1 [cs.LG] 28 May 2025

SplitLoRA: Balancing Stability and Plasticity in Continual Learning Through Gradient Space Splitting

Haomiao Qiu 1,2, Miao Zhang 1∗, Ziyue Qiao 2, Weili Guan 1, Min Zhang 1, Liqiang Nie 1
1 Harbin Institute of Technology (Shenzhen), 2 Great Bay University
24B951058@stu.hit.edu.cn, zhangmiao@hit.edu.cn, zyqiao@gbu.edu.cn, honeyguan@gmail.com, zhangmin2021@hit.edu.cn, nieliqiang@gmail.com

Abstract

Continual Learning (CL) requires a model to learn multiple tasks in sequence while maintaining both stability (preserving knowledge from previously learned tasks) and plasticity (effectively learning new tasks). Gradient projection has emerged as an effective and popular paradigm in CL: it partitions the gradient space of previously learned tasks into two orthogonal subspaces, a primary subspace and a minor subspace. New tasks are learned effectively within the minor subspace, thereby reducing interference with previously acquired knowledge. However, existing gradient projection methods struggle to achieve an optimal balance between plasticity and stability, as it is hard to partition the gradient space appropriately. In this work, we consider a continual learning paradigm based on Low-Rank Adaptation (LoRA), which has gained considerable attention due to its efficiency and wide applicability, and propose a novel approach for continual learning called SplitLoRA. We first provide a theoretical analysis of how subspace partitioning affects model stability and plasticity. Informed by this analysis, we then introduce an effective method that derives the optimal partition of the gradient space of previously learned tasks. This approach effectively balances stability and plasticity in continual learning. Experimental results on multiple datasets demonstrate that the proposed method achieves state-of-the-art performance.
1 Introduction

Continual Learning (CL) refers to a model's ability to sequentially learn new tasks while retaining knowledge from previously learned tasks [1]. This contrasts with traditional machine learning paradigms, which assume that models are trained on a fixed dataset where all data is available at once. In the CL setting, the challenge lies in maintaining performance on previous tasks while adapting to new ones, necessitating a balance between stability and plasticity.

In recent years, orthogonal projection methods have demonstrated strong performance in continual learning tasks. These methods store the subspace spanned by the gradients of previous tasks in memory. During new task training, the gradient of the current task is projected onto the minor subspace of the previous tasks' gradient space, reducing the interference of new task updates with previously learned knowledge. Parameter-Efficient Fine-Tuning (PEFT) [2, 3, 4] enables efficient fine-tuning for new tasks by keeping the pre-trained model parameters unchanged while introducing a small subset of trainable parameters. Due to its advantages in computational efficiency and performance, PEFT methods have gained increasing popularity in continual learning [5, 6, 7, 8, 9]. Combining orthogonal projection with PEFT can better leverage the knowledge of the pre-trained model, allowing for faster adaptation to new tasks.

∗Corresponding Author
Preprint.

However, existing methods [8, 9] typically determine the subspace dimension for each module within the model by using a predefined threshold on the cumulative sum of squared singular values. This approach enforces a uniform partitioning rule across all modules, ignoring the fact that different modules contribute unequally to knowledge retention [10, 11, 12]. As a result, it fails to achieve an optimal trade-off between stability and
plasticity. In this paper, we theoretically analyze the relationship between the size of the minor gradient subspace of previous tasks and the upper bound of the loss increment across all tasks. Furthermore, we model its impact on both stability and plasticity. In practice, we build upon the LoRA framework and propose a novel method called SplitLoRA for CL tasks. Specifically, to minimize the upper bound of the total task loss growth, we construct an optimization problem to determine the optimal size of the minor subspace and derive an approximate solution that balances stability and plasticity. Our contributions are summarized as follows:

• We theoretically model the impact of the gradient subspace size of previous tasks on stability and plasticity in orthogonal projection based continual learning in Theorem 4.2 and derive an approximately optimal minor subspace in CL.

• We introduce SplitLoRA, a novel PEFT framework. By projecting the minor subspace onto the LoRA dimension-reduction matrix $A_t$ via a random projection and optimizing only $B_t$, SplitLoRA ensures that updates remain confined to the minor subspace, thereby achieving an effective balance between stability and plasticity.

• Our method achieves state-of-the-art performance across multiple datasets, surpassing existing CL methods by 2%–5% on different datasets.

2 Related Work

2.1 Parameter-Efficient Fine-Tuning

Parameter-efficient fine-tuning modifies pre-trained models by introducing a small set of trainable parameters while keeping the original model frozen, significantly reducing computational costs while maintaining strong performance. Adapter [3] fine-tunes small modules added to multiple layers, while Prompt-tuning [13] and Prefix-tuning [14] inject trainable tokens into Transformer layers. LoRA [2] decomposes weight updates into low-rank matrices, tuning only these structures. Despite training fewer parameters, PEFT methods often achieve comparable or superior performance [15, 16, 2, 17].
Initially developed for NLP, PEFT has been extended to vision tasks, with methods such as Visual Prompt Tuning (VPT) [4] and AdapterFormer [18] achieving performance on par with full fine-tuning.

2.2 Continual Learning

Continual learning methods fall into three main categories: regularization-based, memory-based, and expansion-based. Regularization-based approaches [19, 20, 21, 22] constrain significant changes to key parameters to mitigate catastrophic forgetting. Memory-based methods [23, 24, 25, 26] retain prior task information in a buffer, allowing models to revisit past knowledge. Expansion-based techniques [27, 28, 29] dynamically expand the model architecture to accommodate new tasks while preserving learned representations.

Gradient Projection in CL. Gradient projection [30, 31, 32] mitigates task interference by constraining updates to directions orthogonal to previous tasks. Orthogonal Weight Modulation [30] learns a projector matrix to prevent new gradients from overwriting prior knowledge. Orthogonal Gradient Descent [31] projects new gradients onto the orthogonal complement of previous task gradients. Gradient Projection Memory (GPM) [32] stores subspace bases of old task data and projects new gradients onto their orthogonal complement. Trust Region Gradient [33] enhances forward knowledge transfer by leveraging task-related representations.

PEFT in CL. With the rise of pre-trained models [34, 35, 36], continual learning has shifted toward leveraging them rather than training from scratch. While some approaches [37, 38] fine-tune pre-trained models fully, this is often inefficient. To address this, PEFT methods have been explored in continual
learning, with studies [6, 5, 39, 40] integrating prompt-tuning to improve class-incremental learning. A unified framework [7] further combines various PEFT techniques, including prompt-tuning, LoRA, and Adapter, into continual learning.

Figure 1: An overview of our proposed SplitLoRA. During the learning of the t-th task, the gradient space of tasks 1 to t-1 is decomposed into major and minor subspaces. InfLoRA determines $k^*$ solely based on a predefined threshold, whereas SplitLoRA balances stability loss and plasticity loss to determine $k^*$. The minor subspace is then randomly projected onto the low-dimensional matrix $A$ of LoRA and fixed, while only $B$ is trained. Specifically, $W_{t-1} = W_0 + \sum_{i=1}^{t-1} A_i B_i$, where $W_0$ represents the pre-trained model weights, and $k$ denotes the size of the minor subspace.

3 Preliminary

3.1 Continual Learning Formulation

In CL, there are $T$ tasks $\mathcal{T}_1, \ldots, \mathcal{T}_T$, and each task includes data $\mathcal{D}_t = \{(x_i^t, y_i^t)\}_{i=1}^{n_t}$, where $x_i^t \in \mathbb{R}^d$ is the input and $y_i^t \in \mathbb{R}$ is the label. The goal of CL is to achieve overall optimal performance across all tasks. Let $W_T$ represent the model parameters after training on the last task $T$. The loss on task $t$, denoted $\mathcal{L}_t(W_T)$, measures the performance of $W_T$ on task $t$. Let $W_T^*$ represent the optimal parameters. Then, the objective is to minimize the total loss across all tasks:

$$W_T^* = \arg\min_{W_T} \mathcal{L}_{\mathrm{all}}(W_T) = \arg\min_{W_T} \sum_{t=1}^{T} \mathcal{L}_t(W_T). \quad (1)$$

3.2 LoRA-based Continual Learning

LoRA [2] is a low-rank PEFT method that reduces the number of trainable parameters by decomposing the weight update into the product of two low-rank matrices. Specifically, for a linear layer, the extra weight matrix $\Delta W$ is decomposed into two low-rank matrices $A$ and $B$ as $\Delta W = AB$, where $A \in \mathbb{R}^{d_1 \times r}$, $B \in \mathbb{R}^{r \times d_2}$, and $r$ is the low-rank dimension. In this way, the number of trainable parameters is reduced from $d_1 d_2$ to $(d_1 + d_2)r$ (i.e., $2dr$ when $d_1 = d_2 = d$). During training, we learn $A$ and $B$ by minimizing the loss function of the current task. During testing, we recover $W = W_0 + AB$ and use it for forward propagation.
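The LoRA mechanics of this subsection can be sketched in a few lines of numpy. Layer sizes, rank, and initialization below are illustrative choices, not the paper's configuration.

```python
import numpy as np

d1, d2, r = 64, 64, 4                     # illustrative layer sizes and LoRA rank
rng = np.random.default_rng(0)

W0 = rng.standard_normal((d1, d2))         # frozen pre-trained weight
A = rng.standard_normal((d1, r)) * 0.01    # trainable low-rank factor
B = np.zeros((r, d2))                      # zero init: Delta W = A @ B starts at 0

dense_params = d1 * d2                     # parameters of a dense weight update
lora_params = d1 * r + r * d2              # trainable parameters (= 2dr when d1 == d2 == d)

def forward(X):
    # Test-time weight recovery W = W0 + AB, then a standard linear forward pass.
    W = W0 + A @ B
    return W @ X

X = rng.standard_normal((d2, 3))
Y = forward(X)                             # equals W0 @ X while B is still zero
```

With the zero-initialized $B$, the adapted model starts out identical to the pre-trained one; training then moves only the $ (d_1 + d_2) r $ adapter parameters rather than all $d_1 d_2$ dense entries.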
In CL, one generally initializes an additional LoRA module for each new task $t$, while the LoRA modules of old tasks also participate in the forward pass [8, 41]. For the current task $t$, the forward pass of the linear layer is

$$Y = W_t X = (W_{t-1} + A_t B_t) X, \quad (2)$$

and only $A_t$ and $B_t$ are trained.

4 SplitLoRA

In this section, we introduce SplitLoRA, a PEFT method for CL that mitigates catastrophic forgetting by partitioning the gradient space. Unlike existing gradient projection based methods [8, 9, 42], which typically define the minor subspace solely by requiring the sum of squared singular values to fall below a predefined threshold, SplitLoRA determines the optimal subspace size by analyzing the impact of the minor subspace on stability loss and plasticity loss. We first introduce gradient projection, then model the effect of subspace partitioning on learning dynamics and formulate an optimization problem to derive the approximately optimal size of the minor subspace. Finally, we present how to construct the low-rank projection matrix within this subspace to enhance CL. The entire process of SplitLoRA is illustrated in Figure 1.

4.1 Orthogonal Decomposition based Gradient Projection

Empirically, training on new tasks often leads to performance degradation on previously learned tasks due to interference between the gradients of new and
old tasks, resulting in stability loss. To address this issue, GPM [32] orthogonally decomposes the gradient space of previous tasks into a major subspace and a minor subspace, and constrains the update direction of the new task to lie within the minor subspace. Our work is also built upon orthogonal decomposition.

In CL, we aim to maintain an average gradient space $G^{\mathrm{old}}$ for all previous tasks. Specifically, after training task $t-1$, we re-feed the data from this task into the model and compute the average gradient $G_{t-1}^{\mathrm{new}}$ of $W$ throughout this process. We then update the average gradient $G_t^{\mathrm{old}}$ over the previous $t-1$ tasks:

$$G_t^{\mathrm{old}} = \frac{1}{t-1}\left((t-2)\, G_{t-1}^{\mathrm{old}} + G_{t-1}^{\mathrm{new}}\right). \quad (3)$$

For the first task, $G_1^{\mathrm{old}}$ is equal to zero. Next, we receive the data of task $t$ and perform a Singular Value Decomposition (SVD) on the gradient $G_t^{\mathrm{old}}$. Larger singular values correspond to singular vectors that dominate in describing the gradient's important directions. We select the last $k$ left singular vectors of $\hat{U}_t$ as the minor subspace:

$$\hat{U}_t, \hat{\Sigma}_t, \hat{V}_t^{\top} = \mathrm{SVD}(G_t^{\mathrm{old}}), \qquad \hat{U}_t^{k} = \hat{U}_t[:, -k:]. \quad (4)$$

The gradient of previous tasks has a much smaller component in the minor subspace than in the major subspace; therefore, projecting the gradient of the new task onto the minor subspace results in minimal interference. A common projection method is to construct a projection matrix. For simplicity, we consider a linear layer $W$ in a model. As the model update $\Delta W$ is determined by the gradient, projecting the model update onto the minor subspace is equivalent to constraining the gradient direction, which helps mitigate interference. Specifically, we project the model update $\Delta W$ onto the minor subspace:

$$\Delta \hat{W} = \mathrm{proj}_{\mathrm{col}(\hat{U}_t^{k})}(\Delta W) = \hat{U}_t^{k} \hat{U}_t^{k\top} \Delta W, \quad (5)$$

where $\hat{U}_t^{k}$ spans the projection subspace.
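The decomposition and projection of Eqs. (4)–(5) can be sketched in numpy. The dimensions are toy-sized and the old-task gradient is synthetic (constructed to have its energy in a few leading directions), so this is an illustration of the mechanics rather than the paper's training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4

# Synthetic averaged old-task gradient G_old with energy concentrated in a
# few leading directions (stand-in for the running average of Eq. (3)).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
sigma = np.array([10.0, 8.0, 5.0, 1.0, 0.5, 0.2, 0.1, 0.05])
G_old = Q @ np.diag(sigma) @ Q.T

U_hat, S, Vt = np.linalg.svd(G_old)           # Eq. (4): SVD of G_old
U_minor = U_hat[:, -k:]                        # last k left singular vectors

delta_W = rng.standard_normal((d, d))          # candidate update for the new task
delta_W_hat = U_minor @ U_minor.T @ delta_W    # Eq. (5): projection onto the minor subspace

# The old-task gradient has only a small component inside the minor subspace,
# so an update confined there interferes little with previous tasks.
minor_energy = np.linalg.norm(U_minor.T @ G_old)
total_energy = np.linalg.norm(G_old)
```

After the projection, the update has no component along the dominant directions of `G_old`, which is exactly the interference-avoidance property the text describes.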
Since the gradients of previous tasks lie primarily in the major subspace, the projection ensures that updates primarily benefit the new task while minimally affecting the performance of old tasks.

4.2 Minor Space Setting for Old Tasks

Projecting the gradient onto the minor subspace can mitigate interference with previous tasks. The larger the minor subspace size k, the larger the learning space for the new task, leading to better plasticity. However, as the gradient components of previous tasks in the minor subspace increase, stability deteriorates. Previous methods [42,33,8,9] compute the sum of the squared singular values corresponding to the minor subspace and require it to remain below a predefined threshold τ; among all values of k that satisfy this condition, they select the largest one:

k^* = \max\left\{ k \,\middle|\, \frac{\sum_{i=d-k+1}^{d}\sigma_i^2}{\sum_{i=1}^{d}\sigma_i^2} < \tau \right\}.  (6)

The size of the minor subspace, k, is a crucial parameter that affects both model stability and plasticity. However, previous methods determine k from a predefined threshold τ, which is merely a hyperparameter and does not effectively balance stability and plasticity. We therefore analyze how subspace selection impacts the loss across all tasks. Based on the smoothness of the loss function L, we can derive an upper bound on the loss incurred by parameter updates in CL.

Proposition 4.1 (Upper Bound on Loss Increase). Consider a model with
a linear layer updated from W_{t-1} to W_t = W_{t-1} + ΔW_t. Assume the loss function is L-smooth and that the first t−1 tasks were trained with updates constrained to be orthogonal to the gradients of previous tasks. Then, the total loss change over tasks 1, ..., t is bounded by:

\sum_{i=1}^{t}\left(\mathcal{L}_i(W_t) - \mathcal{L}_i(W_{t-1})\right) \le \underbrace{-(t-1)\langle \Delta W_t, G^{old}_t\rangle}_{\text{Stability Loss}} \;\underbrace{-\,\langle \Delta W_t, G_t\rangle}_{\text{Plasticity Loss}} \;+\; \frac{(t-1)L}{2}\|\Delta W_t\|_F^2,  (7)

where G_t = ∇L_t(W_t) is the gradient for task t, G^{old}_t = \frac{1}{t-1}\sum_{i=1}^{t-1} G_i is the average gradient of previous tasks, and ⟨·,·⟩ denotes the Frobenius inner product.

This result shows that parameter updates affect both the current and past tasks. The term ⟨ΔW_t, G^{old}_t⟩ captures interference with past tasks (stability loss), while ⟨ΔW_t, G_t⟩ reflects progress on the current task (plasticity gain). The squared norm ‖ΔW_t‖²_F acts as a regularization term controlled by the smoothness constant L.

Next, we discuss how to choose the minor subspace in gradient projection so as to minimize the combined stability and plasticity losses. Building on Proposition 4.1, we can theoretically analyze how the stability loss and plasticity loss vary as functions of k. Replacing ΔW_t in Eq. (7) with the projected update Δ\hat{W}_t = \hat{U}^k_t \hat{U}^{k\top}_t ΔW_t, we can express the stability loss L^S_t(W_t) and the plasticity loss L^P_t(W_t) as follows:

\mathcal{L}^S_t(W_t) = -(t-1)\left\langle \Delta\hat{W}_t, G^{old}_t \right\rangle,  (8)

\mathcal{L}^P_t(W_t) = -\left\langle \Delta\hat{W}_t, G_t \right\rangle.  (9)

The stability loss is proportional to the alignment between the projected update Δ\hat{W}_t and the gradient of old tasks G^{old}_t, while the plasticity loss depends on the alignment of Δ\hat{W}_t with the gradient of the new task G_t. We then define the error function ε(k), which quantifies the proportion of spectral mass in these minor directions:

\epsilon(k) = \frac{\sum_{i=d-k+1}^{d}\sigma_i}{\sum_{i=1}^{d}\sigma_i}.  (10)

ε(k) measures the interference error that updating the model within the minor subspace causes on old tasks. Based on Proposition 4.1, we can derive the following theorem:

Theorem 4.2.
Let W_{t-1} denote the weight matrix of a linear layer in the model, updated as W_t = W_{t-1} + Δ\hat{W}_t = W_{t-1} + \hat{U}^k_t \hat{U}^{k\top}_t ΔW_t. Since the update direction of the new task is unknown, we assume it is uniformly distributed across all directions; that is, ΔW_t has the same expected projection value along the different feature directions of G_t. The expected stability loss is then

\mathbb{E}[\mathcal{L}^S_t(W_t)] = -(t-1)\,\epsilon_t(k_t)\left\langle \Delta W_t, G^{old}_t \right\rangle,  (11)

and the expected plasticity loss is

\mathbb{E}[\mathcal{L}^P_t(W_t)] = -\frac{k_t}{d}\left\langle \Delta W_t, G_t \right\rangle.  (12)

The proof of this theorem can be found in Appendix A.2. Achieving an optimal balance between these two objectives therefore requires solving the following optimization problem:

k^*_t = \arg\min_k \left( \mathbb{E}[\mathcal{L}^S_t(W_t)] + \mathbb{E}[\mathcal{L}^P_t(W_t)] \right).  (13)

Noting that ΔW_t and G_t gradually change as training progresses, solving this optimization problem directly is challenging. To simplify it, we introduce a ratio parameter α:

\alpha = -\frac{\langle \Delta W_t, G_t\rangle}{\langle \Delta W_t, G^{old}_t\rangle}.  (14)

Algorithm 1 SplitLoRA
1: Input: Datasets D_t = {(x^t_i, y^t_i)}_{i=1}^{n_t} for T tasks T_1, ..., T_T; a pre-trained ViT model f_Θ(·) with l layers.
2: Output: The optimized W^l_T for each layer l.
3: Initialization: G^{old}_1 = 0
4: for t = 1 to T do
5:   if t > 1 then
6:     Compute k_t using Eq. (15) for each LoRA module
7:     Initialize A_t using Eq. (17) for each LoRA module
8:   end if
9:   Train LoRA on task T_t using dataset D_t
10:  Update G^{old}_t using
Eq. (3) for each layer
11: end for

This substitution reformulates the optimization problem into the following form:

k^*_t = \arg\min_k \left( (t-1)\,\epsilon_t(k_t) - \alpha\,\frac{k_t}{d} \right).  (15)

Since ΔW_t benefits new tasks, it often interferes with previous task knowledge, leading to:

\langle \Delta W_t, G_t\rangle > 0, \qquad \langle \Delta W_t, G^{old}_t\rangle < 0.  (16)

From Eq. (15), it is evident that increasing k_t leads to a higher ε_t(k_t), which increases the stability loss L^S_t, while expanding the learning space and thus reducing the plasticity loss L^P_t. However, since both ΔW_t and G_t change during training, α also varies dynamically, whereas the update subspace must be determined before training begins for task t. To resolve this mismatch, we treat α as a fixed hyperparameter throughout the learning process. In our experiments we set α = 20 as a general choice; the analysis in Table 4 indicates that our method is highly robust to α.

In the simplified optimization problem of Eq. (15), the parameter k_t is restricted to integer values in the range [1, d]. The optimal solution can be obtained by evaluating the objective for every possible value of k_t and selecting the minimizer as k^*_t. For example, in ViT-B/16 [35] the embedding dimension is 768, so k can be any integer between 1 and 768. Note that we compute k separately for each LoRA module, as weights at different layers and positions capture substantially different knowledge; a fixed threshold, as used in InfLoRA [8], cannot account for such variation across modules.

4.3 LoRA Updates in the Minor Subspace

To ensure LoRA updates remain within the minor subspace, we fix the projection matrix A_t and only optimize B_t. LoRA parameterizes the weight update as ΔW_t = A_t B_t. When A_t is fixed, the update is confined to its column space [8,41].
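The per-module pipeline described above (evaluate the Eq. (15) objective for every integer k, pick the minimizer, then fix A_t inside the resulting minor subspace as in Eq. (17)) can be sketched as follows. The singular spectrum, dimensions, and the small α are toy choices for illustration only (α = 1 simply gives this random spectrum an interior minimizer; the paper's general choice is α = 20):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, t, alpha = 64, 4, 3, 1.0  # toy embedding dim, LoRA rank, task index, ratio

# Stand-in for the descending singular spectrum of G_old obtained via SVD.
sigma = np.sort(rng.random(d))[::-1]

# Eq. (10): eps(k) = fraction of spectral mass in the k smallest directions;
# tail[k-1] equals eps(k).
tail = np.cumsum(sigma[::-1]) / sigma.sum()

# Eq. (15): brute-force search over all integers k in [1, d].
ks = np.arange(1, d + 1)
objective = (t - 1) * tail - alpha * ks / d
k_star = int(ks[np.argmin(objective)])
assert 1 <= k_star <= d

# Eq. (17): A_t = U_k @ R with Gaussian R, so col(A_t) lies in the minor
# subspace; only B_t would be trained. QR here is a stand-in for the SVD basis.
U = np.linalg.qr(rng.standard_normal((d, d)))[0]
U_k = U[:, -k_star:]
A_t = U_k @ rng.standard_normal((k_star, r))
B_t = np.zeros((r, d))
dW = A_t @ B_t  # any trained B_t keeps the update inside col(U_k)
```

Because A_t's columns are linear combinations of the minor-subspace basis vectors, every update A_t B_t stays in the low-interference directions regardless of how B_t evolves, which is why fixing A_t is essential.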
To restrict this space to the minor subspace of previous tasks, we construct

A_t = \hat{U}^k_t R,  (17)

where \hat{U}^k_t ∈ R^{d×k} is an orthonormal basis of the minor subspace and R ∈ R^{k×r} is a random Gaussian matrix. Importantly, this constraint only holds if A_t remains fixed during training; otherwise, the update direction may drift out of the subspace. Fortunately, prior works [43,8] verify that fixing A_t maintains sufficient model capacity while controlling interference. This design ensures that task updates are constrained to the low-interference directions \hat{U}^k_t, balancing stability and plasticity without additional memory or computational cost. The full procedure of SplitLoRA is summarized in Algorithm 1.

Table 1: We present FAA (%) and CAA (%) on ImageNet-R under three incremental learning settings: “5-task,” “10-task,” and “20-task.” All backbone networks are pre-trained on ImageNet-21K.

| Method | Pub. | 5-task FAA (↑) | 5-task CAA (↑) | 10-task FAA (↑) | 10-task CAA (↑) | 20-task FAA (↑) | 20-task CAA (↑) |
|---|---|---|---|---|---|---|---|
| Upper-bound | – | 84.09 ±0.21 | – | 84.09 ±0.21 | – | 84.09 ±0.21 | – |
| FT | – | 18.74 ±0.44 | 48.39 ±0.58 | 10.12 ±0.51 | 35.23 ±0.92 | 4.75 ±0.40 | 22.8 ±0.37 |
| FT++ | – | 60.42 ±0.87 | 71.59 ±0.50 | 48.93 ±1.15 | 66.79 ±0.92 | 35.98 ±1.38 | 59.68 ±0.95 |
| L2P++ [5] | CVPR22 | 70.83 ±0.58 | 78.34 ±0.47 | 69.29 ±0.73 | 78.30 ±0.69 | 65.89 ±1.30 | 77.15 ±0.65 |
| Deep L2P++ [5] | CVPR22 | 73.93 ±0.37 | 80.14 ±0.54 | 71.66 ±0.64 | 79.63 ±0.90 | 68.42 ±1.20 | 78.68 ±1.03 |
| DualPrompt [44] | ECCV22 | 73.05 ±0.50 | 79.47 ±0.40 | 71.32 ±0.62 | 78.94 ±0.72 | 67.87 ±1.39 | 77.42 ±0.80 |
| CODA-P [6] | CVPR23 | 76.51 ±0.38 | 82.04 ±0.54 | 75.45 ±0.56 | 81.59 ±0.82 | 72.37 ±1.19 | 79.88 ±1.06 |
| HiDe-Prompt [45] | NeurIPS23 | 76.29 ±0.10 | 78.77 ±0.11 | 76.74 ±0.18 | 78.76 ±0.11 | 76.46 ±0.06 | 78.76 ±0.11 |
| EvoPrompt [46] | AAAI24 | 77.16 ±0.18 | 82.22 ±0.54 | 76.83 ±0.08 | 82.09 ±0.68 | 74.41 ±0.23 | 80.96 ±1.42 |
| InfLoRA [8] | CVPR24 | 79.82 ±0.27 | 84.07 ±0.48 | 78.10 ±0.43 | 83.47 ±1.23 | 73.81 ±0.47 | 81.02 ±0.56 |
| VQ-Prompt [47] | NeurIPS24 | 79.23 ±0.29 | 82.96 ±0.50 | 78.71 ±0.22 | 83.24 ±0.68 | 78.10 ±0.22 | 82.70 ±1.16 |
| VPT-NSP² [9] | NeurIPS24 | 79.71 ±0.22 | 84.54 ±0.68 | 79.35 ±0.19 | 84.92 ±0.41 | 76.72 ±0.44 | 82.91 ±0.60 |
| S-LoRA [48] | ICLR25 | 79.15 ±0.20 | 83.01 ±0.42 | 77.34 ±0.35 | 82.04 ±0.24 | 75.26 ±0.37 | 80.22 ±0.72 |
| SplitLoRA | This work | 81.92 ±0.29 | 85.83 ±0.55 | 81.00 ±0.17 | 85.84 ±0.62 | 78.82 ±0.28 | 84.57 ±0.44 |

[Figure 2: three panels plotting FAA (%) against the number of tasks learned on ImageNet-R (5, 10, and 20 tasks), comparing SplitLoRA, InfLoRA, and VPT-NSP².]

Figure 2: Variation of the performance of different methods
during the learning of ImageNet-R.

5 Experiment

5.1 Experimental Settings

Datasets. We conducted experiments on three standard datasets: ImageNet-R [49], CIFAR-100 [50], and DomainNet [51]. ImageNet-R is a variant of ImageNet with 200 classes. CIFAR-100 consists of 100 classes, each containing 600 images. DomainNet contains images from diverse domains, posing a challenge for cross-domain generalization. Following [7,5,8], we divided ImageNet-R into 5, 10, and 20 tasks, with each task comprising 40, 20, and 10 classes, respectively. CIFAR-100 was split into 10 tasks of 10 classes each, while DomainNet was uniformly partitioned into 5 tasks.

Baselines and Evaluation Metrics. We compare our method with several state-of-the-art continual learning approaches, including L2P++ [5], Deep L2P++ [5], DualPrompt [44], CODA-P [6], HiDe-Prompt [45], EvoPrompt [46], VQ-Prompt [47], VPT-NSP² [9], and InfLoRA [8]. The “Upper bound” is the performance achieved by jointly training on all classes at once. Following [47], we report Final Average Accuracy (FAA) and Cumulative Average Accuracy (CAA); for details, please refer to Appendix A.3.

Implementation Details. We follow prior works [5,44,6,45,53,46] and adopt ViT-Base [35] pre-trained on ImageNet-21K [54] as the backbone. The LoRA rank is set to 10, and the embedding dimension D = 768 matches the feature dimension of ViT-Base [35]. Following [8], we insert SplitLoRA modules into the key and value projections of multi-head attention. Our method is optimized with AdamW [55], using an initial learning rate of 1e−3 for LoRA and 1e−2 for the classification head. We use a batch size of 256 across all datasets, and each task is trained for 10 epochs. All experiments are conducted on a single NVIDIA L40S GPU. All results are reported as mean ± standard deviation over three random seeds.
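The formal definitions of FAA and CAA are deferred to the appendix; as a sketch under the commonly used formulation (assumed here, not quoted from the paper), both metrics derive from the average accuracy over the tasks seen so far after each training stage:

```python
import numpy as np

def faa_caa(acc):
    """acc[i, j]: accuracy (%) on task j after training task i (for j <= i).

    Assumed definitions: AA_i = mean accuracy over tasks 0..i after stage i;
    FAA = AA at the final stage; CAA = mean of AA over all stages.
    """
    acc = np.asarray(acc, dtype=float)
    aa = np.array([acc[i, : i + 1].mean() for i in range(len(acc))])
    return aa[-1], aa.mean()

# Toy 3-task run: each row is the evaluation after finishing one task.
acc = [[90.0,  0.0,  0.0],
       [85.0, 88.0,  0.0],
       [80.0, 84.0, 86.0]]
faa, caa = faa_caa(acc)
# FAA = mean(80, 84, 86); CAA = mean(90, 86.5, 83.33...)
```

FAA thus reflects only the final state of the model, while CAA additionally rewards methods that stay accurate throughout the task sequence.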
Table 2: We present FAA (%) and CAA (%) on CIFAR100: 10 tasks and DomainNet: 5 tasks. We report results over 3 trials. All backbone networks are pre-trained on ImageNet-21K.

| Method | Pub. | CIFAR100 FAA (↑) | CIFAR100 CAA (↑) | DomainNet FAA (↑) | DomainNet CAA (↑) |
|---|---|---|---|---|---|
| Upper-bound | – | 91.92 ±0.05 | – | 90.12 ±0.13 | – |
| DualPrompt [44] | ECCV22 | 84.42 ±0.30 | 90.06 ±0.07 | 72.14 ±0.05 | 77.71 ±0.06 |
| CODA-Prompt [6] | CVPR23 | 86.62 ±0.11 | 91.08 ±0.28 | 73.23 ±0.13 | 78.72 ±0.07 |
| LAE [7] | ICCV23 | 84.15 ±0.16 | 89.84 ±0.03 | 66.85 ±0.40 | 75.01 ±0.17 |
| C-LoRA [52] | TMLR24 | 82.97 ±0.47 | 88.81 ±0.34 | 69.34 ±0.16 | 75.25 ±0.11 |
| InfLoRA [8] | CVPR24 | 87.06 ±0.25 | 91.59 ±1.43 | 78.26 ±0.50 | 78.82 ±0.34 |
| VPT-NSP² [9] | NeurIPS24 | 88.04 ±0.11 | 92.25 ±0.80 | 83.83 ±0.19 | 88.63 ±0.10 |
| SplitLoRA | This work | 90.33 ±0.73 | 93.70 ±0.32 | 84.31 ±0.23 | 88.99 ±0.57 |

Table 3: We present FAA (%) and CAA (%) on ImageNet-R: 10-tasks. Backbones are with different self-supervised pre-training paradigms: iBOT-1K and DINO-1K.

| Method | Pub. | iBOT-1K FAA (↑) | iBOT-1K CAA (↑) | DINO-1K FAA (↑) | DINO-1K CAA (↑) |
|---|---|---|---|---|---|
| Upper-bound | – | 84.09 ±0.21 | – | 81.98 ±0.07 | – |
| DualPrompt [44] | ECCV22 | 61.51 ±1.05 | 67.11 ±0.08 | 58.57 ±0.45 | 64.89 ±0.15 |
| CODA-Prompt [6] | CVPR23 | 66.56 ±0.68 | 73.14 ±0.57 | 63.15 ±0.39 | 69.73 ±0.25 |
| HiDe-Prompt [45] | NeurIPS23 | 71.33 ±0.21 | 73.62 ±0.13 | 68.11 ±0.18 | 71.70 ±0.01 |
| InfLoRA [8] | CVPR24 | 71.84 ±0.09 | 78.29 ±0.09 | 68.31 ±0.28 | 76.15 ±0.05 |
| VPT-NSP² [9] | NeurIPS24 | 73.85 ±0.23 | 80.34 ±0.60 | 69.45 ±0.74 | 76.38 ±0.50 |
| VQ-Prompt [47] | NeurIPS24 | 71.68 ±0.72 | 76.66 ±0.40 | 68.42 ±0.28 | 74.43 ±0.58 |
| SplitLoRA | This work | 74.58 ±1.05 | 81.45 ±1.72 | 70.49 ±0.31 | 78.15 ±1.13 |

5.2 Experimental Results

Results on ImageNet-R, CIFAR-100 and DomainNet. Table 1 presents the results of different methods evaluated on ImageNet-R with varying numbers of tasks. It highlights how our proposed method, SplitLoRA, achieves consistently higher accuracy than existing continual learning methods across different task setups. Additionally, Table 2 shows the results of these methods on the CIFAR-100 and DomainNet datasets. Across both tables, SplitLoRA outperforms other methods in FAA and CAA. Figure 2 shows the accuracy trends of various CL methods on ImageNet-R: our method achieves the highest accuracy at the end and outperforms the others throughout the learning curve.

Variant Pre-trained Models.
Table 3 provides a summary of experimental results on the 10-task ImageNet-R dataset using different self-supervised pre-training paradigms. Specifically, we evaluate our method with the iBOT-1K [56] and DINO-1K [57] pre-training frameworks. These results demonstrate that SplitLoRA consistently outperforms state-of-the-art continual learning methods, irrespective of the pre-training paradigm used. This robustness underscores the generalizability and effectiveness of SplitLoRA in leveraging self-supervised pre-training for continual learning.

Table 5: Impact of different A_t initialization strategies on ImageNet-R.

| Init of A_t | 5 tasks | 10 tasks | 20 tasks |
|---|---|---|---|
| Random | 76.57 | 76.13 | 72.30 |
| InfLoRA | 78.92 | 78.10 | 73.81 |
| SplitLoRA | 81.92 | 81.00 | 78.82 |

Table 6: Efficiency of LoRA variants on ImageNet-R (10 tasks).

| Method | Extra Fwd | Mem | Time |
|---|---|---|---|
| LoRA | None | 22.80 GB | 1h 37m |
| InfLoRA | 2/task | 23.06 GB | 1h 48m |
| SplitLoRA | 1/task | 23.03 GB | 1h 43m |

Initialization strategies of A_t. Table 5 compares different initialization strategies for A_t. SplitLoRA achieves consistently better performance across task splits, demonstrating the effectiveness of the projected minor subspace over random or InfLoRA initialization.

Memory and Time Cost. Table 6 shows that SplitLoRA achieves a favorable trade-off between performance and efficiency: it introduces only one extra forward pass per task while keeping memory and runtime overheads similar to those of InfLoRA.

Table 4: Evaluation of model performance under different values of α on ImageNet-R. A higher α may improve plasticity but could impact stability.
| Method | 5-task FAA (↑) | 5-task CAA (↑) | 10-task FAA (↑) | 10-task CAA (↑) | 20-task FAA (↑) | 20-task CAA (↑) |
|---|---|---|---|---|---|---|
| InfLoRA | 79.82 | 84.07 | 78.10 | 83.47 | 73.81 | 81.02 |
| SplitLoRA (α = 30) | 82.15 | 85.60 | 81.03 | 85.56 | 78.73 | 84.06 |
| SplitLoRA (α = 20) | 81.92 | 85.83 | 81.00 | 85.84 | 78.82 | 84.57 |
| SplitLoRA (α = 10) | 82.35 | 85.82 | 81.03 | 85.67 | 77.89 | 83.27 |
| SplitLoRA (α = 5) | 82.52 | 85.89 | 81.38 | 85.89 | 78.15 | 84.19 |
| SplitLoRA (α = 1) | 82.40 | 85.86 | 80.89 | 85.22 | 78.59 | 84.20 |

[Figure 3: panels plotting relative forgetting (%) and relative plasticity (%) against the number of tasks on ImageNet-R (5, 10, and 20 tasks) for α ∈ {30, 20, 10, 5, 1}.]

Figure 3: The impact of α on the stability and plasticity of the model in continual learning. As α increases, stability decreases (higher forgetting) while plasticity improves, illustrating the trade-off between retaining past knowledge and adapting to new tasks.

5.3 Hyperparameter Analysis and Discussion

We study the effect of the hyperparameter α on continual learning performance. As shown in Table 4, changing α has limited impact on final accuracy, and all settings consistently outperform InfLoRA. Model stability is measured by forgetting, defined as the average gap between each task's best historical accuracy and its current accuracy; lower forgetting indicates better knowledge retention. For clarity, we define relative forgetting as the difference from the setting α = 1. Plasticity is evaluated by the model's accuracy on the current task; similarly, relative plasticity is defined as the difference from the plasticity at α = 1. Figure 3 presents results on the 5-, 10-, and 20-task splits of ImageNet-R. As α increases, forgetting grows (lower stability) while plasticity improves. These results show that α effectively controls the trade-off between retaining past knowledge and adapting to new tasks, while consistently maintaining better performance than InfLoRA across all settings.

6 Conclusion

In this paper, we investigate the problem of continual learning based on pre-trained ViT models and propose the SplitLoRA method.
Specifically, we partition the gradient space of previous tasks into a major subspace and a minor subspace, and theoretically model the impact of the minor subspace size on stability and plasticity. After simplifying the resulting optimization problem, we compute the optimal minor subspace size during the continual learning process. Finally, we employ a random projection to map the minor subspace onto the low-dimensional matrix of LoRA. Experiments on multiple benchmark datasets demonstrate that our method achieves state-of-the-art performance.

Limitation. In estimating the optimal subspace size, we assume that the ratio between the gradients of the new and previous tasks remains constant. While experimental results suggest that this assumption is robust and effective in practice, it may not be the most principled or optimal solution.

References

[1] German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Networks, 113:54–71, 2019.