Schema: text (string), source (string)
methods, such as the edit distance method [41] and the TF-IDF method [10]. To overcome the limitations of unsupervised distance-based methods, researchers have proposed supervised learning methods. Ravikumar et al. [50] define ER as a classification problem and use an SVM to solve it. However, these methods are heavily based on...
https://arxiv.org/abs/2505.22349v1
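The two classic unsupervised similarity measures mentioned above can be sketched in a few lines; this is an illustrative implementation of Levenshtein edit distance and TF-IDF cosine similarity over whitespace tokens, not the surveyed systems' code, and the example strings are made up.

```python
import math
from collections import Counter

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def tfidf_cosine(docs, i, j):
    """Cosine similarity of docs[i] and docs[j] under TF-IDF weighting."""
    tfs = [Counter(d.lower().split()) for d in docs]
    n = len(docs)
    def vec(tf):
        return {t: c * math.log(n / sum(t in f for f in tfs)) for t, c in tf.items()}
    u, v = vec(tfs[i]), vec(tfs[j])
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Distance-based matching then reduces to thresholding these scores, which is exactly the limitation the supervised methods above address.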
in 𝐸new if the cluster indeed refers to a new real-world dataset. The resolution process constructs a paper-dataset network by connecting papers 𝑝 ∈ 𝑃 to their used dataset entities 𝑀(𝑑) ∈ 𝐸 for all descriptions 𝑑 ∈ 𝐷(𝑝).

4 SYSTEM DESIGN

In this section, we introduce ChatPD, a novel LLM-driven system designed to autom...
https://arxiv.org/abs/2505.22349v1
of the LLM as a computer science researcher, allowing it to better understand the task scenario. ([2] https://github.com/py-pdf/pypdf)

Paper Information. The prompt features a '{Paper Information}' field designed to incorporate relevant text from the paper pertaining to the dataset. Intuitively, this field could contain t...
https://arxiv.org/abs/2505.22349v1
the datasets. In our current implementation, the dataset information extraction module employs GPT-4o-mini [3], OpenAI's most advanced and cost-effective small-scale model. After cost optimization, the expense for ChatPD to process 10,000 papers would be reduced to just... ([3] https://chat.openai.com/)
https://arxiv.org/abs/2505.22349v1
(dataset descriptions) and E-nodes (dataset entities). For instance, if a D-node 𝑑 is linked to an I-node and this same I-node is also connected to an E-node 𝑒, we can infer that 𝑑 corresponds to 𝑒. This process effectively matches the dataset description to an existing dataset entity in the database through their sha...
https://arxiv.org/abs/2505.22349v1
Inference for Entity Resolution
1: Input: a list of dataset descriptions 𝐷, a list of dataset entities 𝐸, the completed graph 𝐺 = (𝑉, E)
2: Output: a list of matched dataset descriptions and entities 𝑀
3: 𝑀 ← {}
4: for D-node 𝑑 ∈ 𝐷 do
5:   for attribute 𝛼 ∈ {dataset name, dataset url} do
6:     if ∃ I-node 𝐼𝑑,𝛼 —refers_to→ E-node 𝑒...
https://arxiv.org/abs/2505.22349v1
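The matching loop above can be sketched with a dictionary standing in for the shared I-nodes; this is a minimal illustration, not the authors' implementation, and the field names (`id`, `name`, `url`) are assumptions.

```python
def resolve(descriptions, entity_index):
    """Match descriptions to entities via shared normalized attributes.

    descriptions: list of dicts with 'id', 'name', 'url' keys.
    entity_index: dict mapping a normalized attribute value (the I-node)
                  to an entity id (the E-node).
    """
    matches = {}
    for d in descriptions:
        # attribute α ∈ {dataset name, dataset url}, as in Algorithm 1
        for attr in ("name", "url"):
            key = (d.get(attr) or "").strip().lower()
            if key and key in entity_index:
                matches[d["id"]] = entity_index[key]
                break  # first shared I-node resolves the description
    return matches
```

A description with no shared I-node is left unmatched, which corresponds to the candidate new-entity case discussed earlier.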
evaluate ChatPD to ascertain its effectiveness in constructing the paper-dataset network, guided by three research questions: RQ1: Can ChatPD efficiently and accurately extract dataset information? RQ2: Can ChatPD effectively resolve dataset descriptions to entities? RQ3: Can ChatPD discover new datasets? 5.1 Performance of Datas...
https://arxiv.org/abs/2505.22349v1
Precision. Note that processing the full text would require approximately seven times as many tokens as our optimized method, significantly increasing costs. Given that ChatPD is designed to handle a continuous and large volume of papers, we believe that limiting the input to 1,500 tokens strikes an effective balanc...
https://arxiv.org/abs/2505.22349v1
ChatPD can detect novel dataset entities referenced in academic papers. We list the top 10 most frequently used new dataset entities discovered by ChatPD that were not included in PwC’s dataset database as of November 16, 2024. We compare the coverage of these dataset entities in PwC’s database on November 16, 2024, an...
https://arxiv.org/abs/2505.22349v1
the paper-dataset network constructed from cs.AI papers on arXiv in 2024. Our results are summarized in Table 5. They show that approximately 87.8% of papers have accessible text information via ar5iv. ChatPD successfully extracts dataset information from 85.5% of these papers, with an average of 2.41 dataset ...
https://arxiv.org/abs/2505.22349v1
believe that our future system can collaborate with platforms like PwC to transition from entirely manual annotation to manual calibration based on the results obtained from ChatPD. This can significantly reduce the workload of manual annotation and yield a more accurate paper-dataset network. As we continue to refin...
https://arxiv.org/abs/2505.22349v1
Machine Learning from Disaster. https://kaggle.com/competitions/titanic [16] FICO. 2018. FICO xml challenge. https://community.fico.com/s/explainable-machine-learning-challenge [17] C Lee Giles, Kurt D Bollacker, and Steve Lawrence. 1998. CiteSeer: An automatic citation indexing system. In Proceedings of the third AC...
https://arxiv.org/abs/2505.22349v1
and Chao Zhang. 2022. Sparse Conditional Hidden Markov Model for Weakly Supervised Named Entity Recognition. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (Washington DC, USA) (KDD '22). Association for Computing Machinery, New York, NY, USA, 978–988. https://doi.org/10.1145...
https://arxiv.org/abs/2505.22349v1
prediction and climate change studies. International Journal of Information Engineering and Electronic Business 4, 1 (2012), 51. [45] Kelley Pace and Ronald Barry. 1997. Sparse spatial autoregressions. Statistics & Probability Letters 33, 3 (1997), 291–297. https://EconPapers.repec.org/RePEc:eee:stapro:v:33:y:1997:i:3...
https://arxiv.org/abs/2505.22349v1
to offer a suite of services designed to enhance the research community's ability to discover and utilize datasets effectively. These services are aimed not only at simplifying the dataset search process but also at providing insights into dataset relevance, usage trends, and their applicability to various research t...
https://arxiv.org/abs/2505.22349v1
"Large", "dataset provider": "New York City TLC", "dataset url": "https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page", "dataset publicly available": "Yes", "other useful information about t...
https://arxiv.org/abs/2505.22349v1
are similar to dataset SQuAD? The Stanford Question Answering Dataset (SQuAD) is a dataset of question-answer pairs, widely used in the field of natural language processing. We can use the RWR algorithm to find datasets similar to SQuAD. The top 5 similar datasets are shown in Table 6. Similar to t...
https://arxiv.org/abs/2505.22349v1
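Random Walk with Restart (RWR), the algorithm invoked above, is easy to state concretely; the 4-node adjacency matrix in the test is a made-up stand-in for the paper-dataset network, not data from the paper.

```python
import numpy as np

def rwr(adj, seed, restart=0.15, iters=200):
    """Random Walk with Restart scores for all nodes relative to `seed`.

    adj: symmetric adjacency matrix (no isolated nodes);
    restart: probability of jumping back to the seed at each step.
    """
    A = adj / adj.sum(axis=0, keepdims=True)  # column-normalize: A is column-stochastic
    e = np.zeros(A.shape[0])
    e[seed] = 1.0
    p = e.copy()
    for _ in range(iters):
        p = (1 - restart) * A @ p + restart * e  # power iteration to the RWR fixed point
    return p
```

Ranking the non-seed entries of `p` gives the "most similar datasets" list; nodes sharing many papers with the seed accumulate more probability mass.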
Scientific Data | (2025) 12:500 | https://doi.org/10.1038/s41597-025-04567-y | www.nature.com/scientificdata
VME: a Satellite Imagery Dataset and Benchmark for Detecting Vehicles in the Middle East and Beyond
Noora al-Emadi 1,2 ✉, Ingmar Weber 3, Yin Yang 2 & Ferda Ofli 1
Detecting vehicles in satellite images is crucial...
https://arxiv.org/abs/2505.22353v1
on top of these benchmarks with in-depth analyses, resulting in discussions on backbone effectiveness, ...
1 Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar. 2 College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar. 3 Saarland Informatics Campus, Saarland University, Saarbr...
https://arxiv.org/abs/2505.22353v1
elaborate on the new benchmark dataset (CDSI), where we collect car-related objects from the publicly available datasets and combine them with the VME dataset. VME Dataset. We constructed the VME dataset by collecting satellite images of different cities in the Middle Eastern countries such as Syria, Libya, Iraq, Jorda...
https://arxiv.org/abs/2505.22353v1
gories with definitions and examples, rules and tips for the annotation process, and the deliverable format. Then, the data annotation process was conducted with a crowdsourced workforce of 6,000+ people, where each group focused on a specific category. Finally, an annotation review process was implemented to detect mislabeled ann...
https://arxiv.org/abs/2505.22353v1
the datasets with high ground sample distance (GSD) ranges or hidden contexts, such as COWC33, PaCaBa34, PSU35, and VisDrone36. Even some of the new datasets are not yet released, e.g., VehSat37 and EAGLE38. The following are the datasets we employed in our study. xView22 is considered one of the largest publicly avail...
https://arxiv.org/abs/2505.22353v1
object detection in optical satellite images. It consists of 23,463 images annotated for 20 object categories and 192,512 object instances using horizontal bounding boxes. The spatial resolution of the images is between 0.5 m/pixel and 30 m/pixel. The dataset claims to cover more than 80 countries, but the specific list of count...
https://arxiv.org/abs/2505.22353v1
following steps:
Fig. 4 Example images with car-related objects in (a) xView22, (b) DOTA-v2.020, (c) VEDAI21, (d) DIOR29, (e) FAIR1M-2.030, (f) VME (our) datasets.
• Annotation stan...
https://arxiv.org/abs/2505.22353v1
describing a targeted object as follows: x1, y1, x2, y2, x3, y3, x4, y4, category_id, where (x1, y1) is the top left, (x2, y2) is the top right, (x3, y3) is the bottom right, and (x4, y4) is the bottom left point of the OBB, and category_id indicates the class index as 0, 1, 2 corresponding to car, bus, and truck, resp...
https://arxiv.org/abs/2505.22353v1
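The annotation format above parses directly into four corner points and a class label; this is a hedged sketch assuming whitespace- or comma-separated fields, and the sample line is invented for illustration.

```python
# Class indices 0, 1, 2 map to car, bus, truck as stated in the format description.
CLASSES = ("car", "bus", "truck")

def parse_obb(line: str):
    """Parse one OBB annotation line: x1 y1 x2 y2 x3 y3 x4 y4 category_id."""
    parts = line.replace(",", " ").split()
    coords = list(map(float, parts[:8]))
    # Corners in order: top-left, top-right, bottom-right, bottom-left
    corners = [(coords[i], coords[i + 1]) for i in range(0, 8, 2)]
    return corners, CLASSES[int(parts[8])]
```

Keeping all four corners (rather than collapsing to an axis-aligned box) preserves the oriented geometry that OBB annotations exist to capture.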
compute precision, recall, and F1 scores as 0.999, 0.970, and 0.984, respectively. Although some objects were missed, the crucial factor is that the...
Fig. 5 Dataset consolidation pipeline and final experimental setups.
Fig. 6 Distribution of car sizes in (a) xView, (b) DOTA-v2.0, (c) VEDAI, (d) FAIR1M-2.0, (e) DIOR, and...
https://arxiv.org/abs/2505.22353v1
models using the DINO Swin-L detector with a batch size of 2 for 36 epochs with the AdamW optimizer and an initial learning rate of 0.0001, which was configured to decay by a factor of 0.1 at epochs 27 and 33. We ran all of our experiments on an NVIDIA A100 80GB GPU. VME Benchmark. As we introduce o...
https://arxiv.org/abs/2505.22353v1
Table 5 summarizes the results achieved by both detectors, TOOD and DINO Swin-L, on CDSI and its constituents. Each row corresponds to a model trained on a particular dataset with a specific setup, e.g., a...
https://arxiv.org/abs/2505.22353v1
mAP mAP50 | mAP mAP50 | mAP mAP50 | mAP mAP50
all cat.: 42 80.8 | 30.3 48.9 | 27.9 45.9 | 33.4 58.5
          45.1 85.8 | 32.8 51.8 | 29.8 50.6 | 35.9 62.7
Table 4. VME baseline results obtained by training and testing the object detection models on the original VME data splits with all object categories as presented in Table 1. All mAP results a...
https://arxiv.org/abs/2505.22353v1
percentage (%).
Fig. 8 Comparison of detections on VME images employing the model trained on the VME car setup versus detections of the models trained on the xView and DOTA-v2.0 car setups.
https://arxiv.org/abs/2505.22353v1
diversity in object instances and variations in image characteristics collected from different regions. In summary, both plots illustrate that the errors are dominated by imperfect localization and background confusion. Usage Notes. The VME dataset and the script for creating the CDSI dataset are available at Zenodo...
https://arxiv.org/abs/2505.22353v1
Orlikova, L. Vehicle detection using panchromatic high-resolution satellite images as a support for urban planning: case study of Prague's centre. GeoScape 16 (2022). 10. Rufener, M.-C., Ofli, F., Fatehkia, M. & Weber, I. Estimation of internal displacement in Ukraine from satellite-based car detections. Sci. Reports 14...
https://arxiv.org/abs/2505.22353v1
Accessed on 2024-09-18. https://www.wilsoncenter.org/article/rise-gulf-smart-cities Accessed on 2024-09-18 (2024). 29. Li, K., Wan, G., Cheng, G., Meng, L. & Han, J. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS Journal of Photogrammetry and Remote Sensing 159, 296–307, https://...
https://arxiv.org/abs/2505.22353v1
Diagnosing error in object detectors. In European conference on computer vision, 340–353 (Springer, 2012).
Acknowledgements
This publication was made possible by GSRA grant, I.D. ...
https://arxiv.org/abs/2505.22353v1
arXiv:2505.22356v1 [cs.LG] 28 May 2025
Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings
Angéline Pouget 1, Mohammad Yaghini 2, Stephan Rabanser 2, Nicolas Papernot 2
Abstract. Deploying machine learning models in safety-critical domains poses a key challenge: ensuring reli...
https://arxiv.org/abs/2505.22356v1
approach combines insights from distribution shift detection, unsupervised accuracy estimation, selective prediction, and dataset inference into a novel performance deterioration detector. Central to our solution is the suitability filter, an auxil...
https://arxiv.org/abs/2505.22356v1
a suitability decision. Leveraging formal hypothesis testing, our approach enables control of the false positive rate via a user-interpretable significance level. 3. We theoretically analyze the end-to-end false positive rates of our suitability filters and provide sufficient conditions for bounded false positive ra...
https://arxiv.org/abs/2505.22356v1
al., 2010). In contrast to selective classification, we do not reject or accept individual input data samples. Instead, we leverage sample-level signals and aggregate them to provide a statistically grounded suitability decision for the entire dataset. Initial selective classification methods for neural networks base...
https://arxiv.org/abs/2505.22356v1
if the estimated accuracy of M on D_u deviates at most by m from the accuracy on D_test. Formally:

(1/|D_u|) Σ_{x ∈ D_u} I{M(x) = O(x)} ≥ (1/|D_test|) Σ_{(x,y) ∈ D_test} I{M(x) = y} − m.  (1)

Here, I{·} is the indicator function and O(x) represents an oracle that provides the true label y for any input x (the ground truth label is unavailable for samples...
https://arxiv.org/abs/2505.22356v1
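Read operationally, Eq. (1) is a single mean comparison; the sketch below checks it on toy per-sample correctness indicators (the arrays are illustrative, not real model outputs, and in deployment the user-data correctness must be estimated rather than oracle-scored).

```python
import numpy as np

def meets_margin(correct_u, correct_test, m):
    """Eq. (1): user-data accuracy within margin m of test accuracy.

    correct_u / correct_test: 0/1 per-sample correctness indicators
    (I{M(x)=O(x)} on D_u and I{M(x)=y} on D_test).
    """
    return np.mean(correct_u) >= np.mean(correct_test) - m
```

The statistical machinery in the rest of the paper exists precisely because the left-hand mean is only an estimate, so a raw comparison like this gives no error control on its own.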
computed for an individual sample and is predictive of prediction correctness can be incorporated into our framework, allowing for flexible extension based on the specific task, dataset, or model M. 4.2. Per-Sample Prediction Correctness Estimator. To learn a per-sample prediction correctness estimator, we require the...
https://arxiv.org/abs/2505.22356v1
about D_source and D_target can be made, achieving reliable calibration is challenging. Calibrating C on D_sf (e.g., using Platt's method (Platt et al., 1999) or temperature scaling (Guo et al., 2017)) ensures that the classification correctness estimator C provides reliable estimates of the probability that model M correctly...
https://arxiv.org/abs/2505.22356v1
of these suitability decisions, we next discuss statistical guarantees and the conditions under which they hold for the end-to-end suitability decision. Statistical Guarantees. To account for miscalibration errors, we define δ-calibration as follows: Definition 4.1 (δ-Calibration). Let p_c(x) denote the estimated prob...
https://arxiv.org/abs/2505.22356v1
line is the perfect-calibration diagonal, the dashed black/gray lines mark the original margin m and its corrected value m′, respectively. The blue/orange arrows indicate the estimation errors on the test set (∆_test) and user data (∆_u), respectively. In the left panel, the user data D_u is deemed suitable; in the right ...
https://arxiv.org/abs/2505.22356v1
OOD folds for FMoW-WILDS, 4 ID and 8 OOD folds for RxRx1-WILDS, and 16 ID folds for CivilComments-WILDS). We conduct two types of experiments: first, each ID fold is used as the user dataset (D_u), and the remaining ID data is split into 15 subsets, used as D_test and D_sf. This yields 16 × 15 × 14 experiments for FMoW-WI...
https://arxiv.org/abs/2505.22356v1
[Figure 4: Sensitivity of suitability decisions to accuracy differences between user and test data on FMoW-WILDS. The x-axis bins the difference between Acc(M, D_u) and Acc(M, D_test) from 0–1% up to 6–7%; the y-axis shows the percentage of SUITABLE decisions, split into false positives and true positives.]
https://arxiv.org/abs/2505.22356v1
real-world deployment settings deteriorates compared to its performance on test data. We present an instantiation for classification accuracy that leverages statistical hypothesis testing. We provide theoretical guarantees on the false positive rate of suitability decisions and propose a margin adjustment strategy ...
https://arxiv.org/abs/2505.22356v1
suitability filter framework and the experiments presented in this paper is publicly available on GitHub at https://github.com/cleverhans-lab/suitability. Acknowledgements. We thank Anvith Thudi, Mike Menart, David Glukhov, and other members of the CleverHans group for their feedback on this work. We would like to a...
https://arxiv.org/abs/2505.22356v1
world wide web conference, pp. 491–500, 2019. Cha, J., Lee, K., Park, S., and Chun, S. Domain generalization by mutual-information regularization with pre-trained models. In European conference on computer visi...
https://arxiv.org/abs/2505.22356v1
79(2):811–825, 2023. Gal, Y. and Ghahramani, Z. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050–1059. PMLR, 2016. Gangrade, A., Kag, A., and Saligrama, V. Selective classification via one-sided prediction. In Intern...
https://arxiv.org/abs/2505.22356v1
L., Zhang, C., and Zhang, H. Self-adaptive training: beyond empirical risk minimization. Advances in neural information processing systems, 33:19365–19376, 2020. Jaffe, A., Nadler, B., and Kluger, Y. Estimating the accuracies of multiple classifiers without labeled data. In Artificial Intelligence and Statistics, ...
https://arxiv.org/abs/2505.22356v1
D., Nowozin, S., Dillon, J., Lakshminarayanan, B., and Snoek, J. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. Advances in neural information processing systems, 32, 2019. Peng, R., Duan, Q., Wang, H., Ma, J., Jiang, Y., Tu, Y., Jiang, X., and Zhao, J. Came: Contras...
https://arxiv.org/abs/2505.22356v1
F., Bellumore, S., Bosia, M., and De Micco, F. Machine learning and criminal justice: A systematic review of advanced methodology for recidivism risk prediction. International journal of environmental research and public health, 19(17):10594, 2022. Tu, W., Deng, W., Gedeon, T., and Zheng, L. A bag-of-prototypes r...
https://arxiv.org/abs/2505.22356v1
the appropriate critical value for the t-distribution. Since non-inferiority testing is inherently a one-sided test, after calculating the two-sample t-test statistic, the p-value is divided by 2 to reflect the one-sided nature of the non-inferiority test. This adjusted p-value is then compared to the chosen significanc...
https://arxiv.org/abs/2505.22356v1
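The one-sided non-inferiority comparison described above can be sketched with only the standard library; this version substitutes a large-sample normal approximation for the exact t-distribution (an assumption on my part, adequate for the sample sizes in the paper's experiments), and the 0/1 correctness lists in the test are synthetic.

```python
from statistics import NormalDist

def non_inferiority_p(correct_u, correct_test, m):
    """One-sided p-value for H0: Acc_u < Acc_test - m (normal approximation).

    correct_u / correct_test: lists of 0/1 per-sample correctness.
    A small p-value supports 'not worse than the margin m'.
    """
    nu, nt = len(correct_u), len(correct_test)
    mu, mt = sum(correct_u) / nu, sum(correct_test) / nt
    # Bernoulli variance of each sample mean
    vu = mu * (1 - mu) / nu
    vt = mt * (1 - mt) / nt
    # Shift the user mean by the margin, then test one-sided
    z = (mu + m - mt) / (vu + vt) ** 0.5
    return 1 - NormalDist().cdf(z)
```

With a t-distribution instead of the normal, the only change is the reference distribution for the statistic; the one-sided structure (and hence the halved p-value the text describes) is identical.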
the probability of rejecting the null hypothesis H0 at significance level α and returning SUITABLE when, in reality:

Acc_target < Acc_source − m.  (25)

We can define m′ := m + δ_source − δ_target and leverage Equation 24 to write:

Acc_target < Acc_source − m ⟺ μ_target − δ_target < μ_source − δ_source − m ⟺ μ_target < μ_source − m′.  (26)

With this mar...
https://arxiv.org/abs/2505.22356v1
- Maximum logit (logit_max):

logit_max = max_{i ∈ {1,…,k}} z_i  (36)

The maximum logit value.
- Logit standard deviation (logit_std):

logit_std = sqrt( (1/k) Σ_{i=1}^{k} (z_i − z̄)² ),  z̄ = (1/k) Σ_{i=1}^{k} z_i  (37)

The standard deviation of the logits.
- Differ...
https://arxiv.org/abs/2505.22356v1
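Eqs. (36) and (37) translate directly into code; this small sketch computes both signals from a logit vector z (the example vector is illustrative).

```python
import numpy as np

def logit_signals(z):
    """Per-sample signals from a logit vector z of length k."""
    z = np.asarray(z, dtype=float)
    return {
        "logit_max": z.max(),   # Eq. (36): maximum logit
        "logit_std": z.std(),   # Eq. (37): population std (divides by k, not k-1)
    }
```

Note that Eq. (37) normalizes by k, so `np.std` with its default `ddof=0` matches the definition exactly.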
incorporated into our work, they can serve as additional suitability signals in scenarios where these modifications are feasible. A.2.2. DATASETS AND MODELS. FMoW-WILDS. The FMoW-WILDS dataset contains satellite im...
https://arxiv.org/abs/2505.22356v1
in toxicity classifiers that can spuriously associate toxicity with certain demographic mentions (Borkan et al., 2019; Koh et al., 2021). The input xis a text comment on an online article, and the label yis whether the comment was rated as toxic or not. The domain is represented as an 8-dimensional binary vector, where...
https://arxiv.org/abs/2505.22356v1
…ASIA 813 57.40±3.89%
VAL 2014 ASIA 1311 56.90±2.31%
VAL 2015 ASIA 1997 55.45±0.26%
VAL 2013 AMERICAS 1168 61.13±1.66%
VAL 2014 AMERICAS 1967 60.85±1.26%
VAL 2015 AMERICAS 3427 55.36±3.32%
TEST 2016 ALL 15959 55.48±1.14%
TEST 2017 ALL 6149 48.64±2.13%
TEST ALL ASIA 4963 55.67±0.72%
TEST ALL EUROPE 5858 56.38±1.9...
https://arxiv.org/abs/2505.22356v1
evaluating significance across a fixed window of recent data, smoothing out short-term fluctuations and focusing on long-term trends in performance. This approach helps identify true changes in model performance while accounting for variations in individual datasets over time. A.3.3. SEQUENTIAL TESTING. When testing th...
https://arxiv.org/abs/2505.22356v1
As expected, ∆_test is centered around zero, indicating that the estimated accuracy closely matches the ground truth accuracy and there is no clear directional bias. However, ∆_u is frequently positive, indicating that accuracy is often overestimated. This miscalibration can lead to incorrect suitability decisions. While a...
https://arxiv.org/abs/2505.22356v1
Unsurprisingly, these signals are also identified as the most predictive of per-sample prediction correctness for classifier C (see Appendix A.4.3). Noteworthy outliers in Table 5 include logit_mean and logit_std, which have relatively high accuracy but higher FPR and lower ROC and PR AUC than comparable signals. Upon clos...
https://arxiv.org/abs/2505.22356v1
…NF MAX 88.4±4.2% 0.011±0.020 0.948±0.015 0.842±0.121
LOSS 88.3±3.4% 0.012±0.023 0.944±0.014 0.831±0.118
TOPK CONF SUM 86.7±4.4% 0.026±0.057 0.916±0.005 0.773±0.097
CONF RATIO 83.6±6.4% 0.001±0.005 0.905±0.050 0.711±0.102
LOGIT MEAN 61.7±20.5% 0.446±0.268 0.845±0.193 0.698±0.175
https://arxiv.org/abs/2505.22356v1
…ATIO 61.1±2.9% 0.144±0.008 0.320±0.008 0.187±0.001
LOGIT STD 59.8±2.4% 0.132±0.070 0.294±0.209 0.167±0.108
Table 7. ANOVA results showing the significance of individual signals in predicting model correctness. Signals are ordered by decreasing F-value, which measures the variance explained by each signa...
https://arxiv.org/abs/2505.22356v1
the prediction correctness estimator on FMoW-WILDS. Table 8. Table showcasing the mean accuracy and calibration metrics (ECE, MCE, RMSCE) for various classifiers, with 95% confidence intervals. The metrics evaluate the classifiers' prediction quality and their calibration over 3 random splits of the FMoW-WILDS ID train...
https://arxiv.org/abs/2505.22356v1
Budget-Adaptive Adapter Tuning in Orthogonal Subspaces for Continual Learning in LLMs
Zhiyi Wan 1, Wanrou Du 1, Liang Li 2, Miao Pan 3, Xiaoqi Qin 1
1 Beijing University of Posts and Telecommunications 2 Pengcheng Laboratory 3 University of Houston
{wzy10, wanroudu, xiaoqiqin}@bupt.edu.cn, lil03@pcl.ac.cn, mpan2@uh.edu
Abstrac...
https://arxiv.org/abs/2505.22358v1
spaces. Some methods further dynamically construct task-specific parameters for new knowledge integration to resolve this problem, but they heavily rely on explicit task identifiers during inference [20–24]. Recent research efforts in orthogonal subspace learning [25–27] offer a promising alternative by restricting ta...
https://arxiv.org/abs/2505.22358v1
OA-Adapter module (task t) comprises three core components: (1) a down-projection layer W_1^(t), (2) a trainable diagonal mask Γ^(t) with trainable threshold τ^(t), and (3) an up-projection layer W_2^(t). The dynamic masking mechanism enables bidirectional dimension adaptation through activation/deactivation of latent dimen...
https://arxiv.org/abs/2505.22358v1
bidirectional dimension adaptation capability, enabled by the trainable threshold τ, offers critical advantages. When |g_i| ≤ τ, the corresponding dimension pair is deactivated by setting γ_i = 0. This zeros the i-th diagonal entry of the masking matrix Γ, effectively removing that dimension's contribution in forward propa...
https://arxiv.org/abs/2505.22358v1
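The thresholded diagonal mask described above can be sketched in a few lines; this is an illustration, not the released OA-Adapter code, and it assumes the surviving diagonal entries keep their gate values g_i (the paper only specifies that γ_i = 0 when |g_i| ≤ τ).

```python
import numpy as np

def apply_mask(h, g, tau):
    """Apply the diagonal mask Γ to latent activations.

    h: (batch, r) activations between the down- and up-projections;
    g: (r,) trainable gate parameters; tau: scalar threshold.
    """
    # γ_i = 0 when |g_i| <= τ, deactivating that latent dimension;
    # active dimensions are assumed here to pass g_i through.
    gamma = np.where(np.abs(g) > tau, g, 0.0)
    return h * gamma  # row-wise multiply == applying diag(γ)
```

Because deactivated dimensions contribute nothing in the forward pass, the effective bottleneck width shrinks or grows as the gates cross τ during training.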
tasks' frozen parameter subspaces. Formally, the constraint for the t-th task is defined as:

⟨W_2^(t)[:, i], W̃_2^(s)[:, j]⟩ = 0,  ∀ i, j, s < t  (9)

The columns of W̃_2^(t) inherit directional properties from W_2^(t), ensuring orthogonal relationships persist regardless of dynamic dimension activation patterns. These asymmetric...
https://arxiv.org/abs/2505.22358v1
tasks. EWC [36] fine-tunes the whole model with a regularization loss that prevents updating parameters that could interfere with previously learned tasks. LwF [37] constrains the shared representation layer to be similar to its original state before learning the new task. Inc-Adapter trains new Adapter parameters on a s...
https://arxiv.org/abs/2505.22358v1
maintains separate parameters per task, fundamentally limiting its generalization to unseen tasks and its practical deployment in real-world LLM applications. Notably, all continual learning methods still trail behind PerTaskFT and MTL, highlighting that continual learning for a large number of tasks remains a significant...
https://arxiv.org/abs/2505.22358v1
phenomenon resembles human memory: knowledge seemingly forgotten from disuse often requires ...
[Figure 2: Occurrence and mitigation of catastrophic forgetting during sequential training following Order-1 across multiple tasks (Tasks 1–4 along the training steps). Solid ...]
https://arxiv.org/abs/2505.22358v1
classification datasets (i.e., DBpedia, Amazon, Yahoo, AG News). The X-axis is the index of T5-large layers, and the Y-axis indicates the different layers OA-Adapter applies to.
Table 3: Comparison of threshold strategies.
Initial Value | Strategy | Order 1 | Order 2 | Order 3 | avg
1e-3 | fixed | 71.5 | 71.1 | 71.4 | 71.3
1e-3 | dynamic | 73.0 | 76.4 | 74....
https://arxiv.org/abs/2505.22358v1
2019, 9-15 June 2019, Long Beach, California, USA, ser. Proceedings of Machine Learning Research, K. Chaudhuri and R. Salakhutdinov, Eds., vol. 97. PMLR, 2019, pp. 2790–2799. [Online]. Available: http://proceedings.mlr.press/v97/houlsby19a.html [6] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, ...
https://arxiv.org/abs/2505.22358v1
European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part III, ser. Lecture Notes in Computer Science, V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Eds., vol. 11207. Springer, 2018, pp. 144–161. [Online]. Available: https://doi.org/10.1007/978-3-030-01219-9_9 [17] J. Schwarz, W. Czarneck...
https://arxiv.org/abs/2505.22358v1
in Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 – March 1, 2022. AAAI Press, 2022, pp. 6783–6...
https://arxiv.org/abs/2505.22358v1
neural networks,” Proceedings of the National Academy of Sciences, vol. 114, no. 13, pp. 3521–3526, 2017. [37] Z. Li and D. Hoiem, “Learning without forgetting,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 12, pp. 2935–2947, 2018. [Online]. Available: https://doi.org/10.1109/TPAMI.2017.2773081 [38] Z. Wang, ...
https://arxiv.org/abs/2505.22358v1
methods increases rapidly as the number of tasks grows, which often results in a sharp decline in performance retention on older tasks. These approaches primarily focus on learning all incremental tasks within a shared parameter space, which is a major contributor to task interference. In contrast, many architecture-ba...
https://arxiv.org/abs/2505.22358v1
on empirical settings rather than adapting to the intrinsic differences between sequentially arriving tasks. Imitating the formulation of AdaLoRA, SoRA [ 42] introduces a proximal gradient-based update rule with strong theoretical grounding. However, it relies on fixed hyperparameters to decide the fixed sparsity thres...
https://arxiv.org/abs/2505.22358v1
train set to construct the validation set. For every sequence of tasks across different orders, we trained the models for one epoch using a batch size of 32 (8 per GPU), a dropout rate of 0.1, and no weight decay. Across all experiments, we primarily used Adapter modules with a bottleneck dimension of 16, and applied a...
https://arxiv.org/abs/2505.22358v1
other tasks are used in long-sequence experiments.
Dataset | Category | Task | Domain | Metric
Yelp | CL Benchmark | Sentiment Analysis | Yelp Reviews | Accuracy
Amazon | CL Benchmark | Sentiment Analysis | Amazon Reviews | Accuracy
DBPedia | CL Benchmark | Topic Classification | Wikipedia | Accuracy
Yahoo | CL Benchmark | Topic Classification | Yahoo Q&A | ...
https://arxiv.org/abs/2505.22358v1
arXiv:2505.22368v1 [cs.AI] 28 May 2025AgentDNS: A Root Domain Naming System for LLM Agents Enfang Cui1, Yujun Cheng2, Rui She1, Dan Liu1, Zhiyuan Liang1, Minxin Guo1, Tianzheng Li1, Qian Wei1, Wenjuan Xing1, Zhijie Zhong3,4 1China Telecom Research Institute, Beijing, China 2School of Intelligence Science and Technology...
https://arxiv.org/abs/2505.22368v1
frameworks is accelerating the development of an open, scalable multi-agent collaboration ecosystem. In the future, we envision a world where agents can autonomously discover, communicate, and collaborate with one another without human intervention. Although protocols like MCP and A2A have effectively facilitated...
https://arxiv.org/abs/2505.22368v1
They can obtain the corresponding service identifier names and related metadata, including physical addresses, capabilities, communication protocols, etc. Agents can also dynamically request the AgentDNS root service to resolve an identifier name and retrieve the latest metadata as needed. • Protocol-Aware Interope...
https://arxiv.org/abs/2505.22368v1
of custom integrations. Agent-to-Agent Protocol. The Agent-to-Agent (A2A) protocol (Google 2025) was introduced by Google, aimed at enabling seamless communication and collaboration between LLM agents, regardless of their underlying frameworks or vendors. A2A was developed in collaboration with over 50 technology par...
https://arxiv.org/abs/2505.22368v1
root server, which then routes the request to the appropriate vendor for execution. This design allows user agents to authenticate only once with AgentDNS, eliminating the need for separate registration and authentication with each individual vendor. • Service Search Component: User agents can send natural language...
https://arxiv.org/abs/2505.22368v1
of the registering entity, such as a company, university, or research lab. Each organization must go through a registration and verification process to ensure uniqueness and authenticity. The category denotes the functional domain or classification of the agent service. This can be chosen manually by the developer ...
https://arxiv.org/abs/2505.22368v1
hosted by the vendor, ensuring seamless interaction between Agent A and the service provider. Service Resolution. As previously mentioned, user agents can cache service identifier names and request the AgentDNS root server for updated metadata when needed. This functionality helps reduce the frequency of accessing Age...
https://arxiv.org/abs/2505.22368v1
instance, in Step 1, the agent sends the tool function description directly to AgentDNS, which uses intelligent retrieval methods to identify matching services. Suppose AgentDNS returns a service named agentdns://example/search/searchagent; it also provides metadata such as the physical endpoint, supported protoco...
https://arxiv.org/abs/2505.22368v1
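An identifier like agentdns://example/search/searchagent decomposes naturally into organization, category, and service parts; the sketch below shows one way to parse it, with the field names being my own reading of the scheme rather than a specification from the paper.

```python
from urllib.parse import urlparse

def parse_agentdns(name: str):
    """Split an agentdns:// identifier into its (assumed) structural parts."""
    u = urlparse(name)
    if u.scheme != "agentdns":
        raise ValueError(f"not an agentdns identifier: {name!r}")
    # netloc holds the organization; the path carries category/service
    category, _, service = u.path.strip("/").partition("/")
    return {"organization": u.netloc, "category": category, "service": service}
```

A real resolver would then look these parts up in the root server's registry to retrieve the endpoint and protocol metadata the text describes.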
experience. Conclusion. The rapid advancement of LLM agents has exposed critical gaps in cross-vendor service discovery, interoperability, and authentication, hindering the vision of autonomous multi-agent collaboration. This paper introduces AgentDNS, a unified root domain naming system designed to bridge these gaps...
https://arxiv.org/abs/2505.22368v1
LLM multi-agent systems: Challenges and open problems. arXiv preprint arXiv:2402.03578. Hou, X.; Zhao, Y.; Wang, S.; and Wang, H. 2025. Model context protocol (mcp): Landscape, security threats, and future research directions. arXiv preprint arXiv:2503.23278. Hu, M.; Zhao, P.; Xu, C.; Sun, Q.; Lou, J.; Lin, Q.; Lu...
https://arxiv.org/abs/2505.22368v1
arXiv:2505.22370v1 [cs.LG] 28 May 2025
SplitLoRA: Balancing Stability and Plasticity in Continual Learning Through Gradient Space Splitting
Haomiao Qiu 1,2, Miao Zhang 1∗, Ziyue Qiao 2, Weili Guan 1, Min Zhang 1, Liqiang Nie 1
1 Harbin Institute of Technology (Shenzhen) 2 Great Bay University
24B951058@stu.hit.edu.cn, zhangmiao...
https://arxiv.org/abs/2505.22370v1
plasticity. In this paper, we theoretically analyze the relationship between the size of the minor gradient subspace of previous tasks and the upper bound of loss increments across all tasks. Furthermore, we model its impact on both stability and plasticity. In practice, we build upon the LoRA framework and propose a n...
https://arxiv.org/abs/2505.22370v1
learning, with studies [6, 5, 39, 40] integrating prompt-tuning to improve class-incremental ...
Figure 1: An overview of our proposed SplitLoRA. During the learning of the t-th task, the gradient space of tasks 1 to t−1 is decomposed into major and minor subspaces. InfLoRA determines k∗ solely based on a predefined thresho...
https://arxiv.org/abs/2505.22370v1
old tasks, resulting in stability loss. To address this issue, GPM [ 32] orthogonally decomposes the gradient space of previous tasks into a major subspace and a minor subspace, and constrains the update direction of the new task within the minor subspace. Our work is also built upon orthogonal decomposition. In CL, we...
https://arxiv.org/abs/2505.22370v1
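The orthogonal decomposition of the previous tasks' gradient space into major and minor subspaces can be sketched with a plain SVD; the split index k and the gradient matrix here are illustrative stand-ins, not the paper's procedure for choosing k∗.

```python
import numpy as np

def split_subspaces(G, k):
    """Split the column space of a stored gradient matrix G.

    Returns (major, minor): the top-k left-singular directions and the
    remaining ones. Constraining updates to the minor subspace is the
    GPM-style recipe for protecting old-task knowledge.
    """
    U, s, _ = np.linalg.svd(G, full_matrices=False)
    return U[:, :k], U[:, k:]
```

Projecting a new task's update onto the minor directions (`minor @ minor.T @ dW`) then removes its component along the major directions, which is where interference with previous tasks concentrates.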
a linear layer updated from W_{t−1} to W_t = W_{t−1} + ΔW_t. Assume the loss function is L-smooth and that the first t−1 tasks were trained with updates constrained to be orthogonal to the gradients of previous tasks. Then, the total loss change over tasks 1, …, t is bounded by:

Σ_{i=1}^{t} (L_i(W_t) − L_i(W_{t−1})) ≤ −(t−1) ⟨ΔW_t, G_t^{old}⟩ ...
https://arxiv.org/abs/2505.22370v1
Eq. (3) for each layer
11: end for
This substitution reformulates the optimization problem into the following form:

k∗_t = argmin_k [ (t−1) ε_t(k_t) − α k_t / d ].  (15)

Since ΔW_t benefits new tasks, it often interferes with previous task knowledge, leading to:

⟨ΔW_t, G_t⟩ > 0,  ⟨ΔW_t, Ĝ_t⟩ < 0.  (16)

From Eq. (15), it is evident that increasing...
https://arxiv.org/abs/2505.22370v1
…±0.40 71.32±0.62 78.94±0.72 67.87±1.39 77.42±0.80
CODA-P [6] CVPR23 76.51±0.38 82.04±0.54 75.45±0.56 81.59±0.82 72.37±1.19 79.88±1.06
HiDe-Prompt [45] NeurIPS23 76.29±0.10 78.77±0.11 76.74±0.18 78.76±0.11 76.46±0.06 78.76±0.11
EvoPrompt [46] AAAI24 77.16±0.18 82.22±0.54 76.83±0.08 82.09±0.68 74.41±...
https://arxiv.org/abs/2505.22370v1
(↑) CAA (↑) FAA (↑) CAA (↑)
Upper-bound – 91.92±0.05 – 90.12±0.13 –
DualPrompt [44] ECCV22 84.42±0.30 90.06±0.07 72.14±0.05 77.71±0.06
CODA-Prompt [6] CVPR23 86.62±0.11 91.08±0.28 73.23±0.13 78.72±0.07
LAE [7] ICCV23 84.15±0.16 89.84±0.03 66.85±0.40 75.01±0.17
C-LoRA [52] TMLR24 82.97±0.47 88.81±0.34 69.3...
https://arxiv.org/abs/2505.22370v1
Method | 5-task FAA (↑) | 5-task CAA (↑) | 10-task FAA (↑) | 10-task CAA (↑) | 20-task FAA (↑) | 20-task CAA (↑)
InfLoRA | 79.82 | 84.07 | 78.10 | 83.47 | 73.81 | 81.02
SplitLoRA (α=30) | 82.15 | 85.60 | 81.03 | 85.56 | 78.73 | 84.06
SplitLoRA (α=20) | 81.92 | 85.83 | 81.00 | 85.84 | 78.82 | 84.57
SplitLoRA (α=10) | 82.35 | 85.82 | 81.03 | 85.67 | 77.89 | 83.27
SplitLoRA (α=5) | 82.52 | 85.89 | ...
https://arxiv.org/abs/2505.22370v1