Patch Success Rate, Realism Score
Cinà et al. [11] | Image | Gradient-based Attacks | Adversarial Success Rate
Zheng et al. [12] | Graph | Attacks on Graphs | Adversarial Robustness on Graphs
Li et al. [13] | VQA | Attacks on VQA | VQA Accuracy under Adversarial Conditions
Siddiqui et al. [14] | Time-series | Attacks on Time-Series | Time-S...
https://arxiv.org/abs/2505.21027v1
by generating deliberately perturbed input data, known as adversarial examples. Consider a dataset where each input data point, represented by a vector x ∈ X, is associated with a class label y ∈ Y. We define a machine learning classifier f(·). An adversarial example x_adv is a perturbed variant of x that remains similar ...
https://arxiv.org/abs/2505.21027v1
by maximising the loss function L of the machine learning model being attacked:

max_δ L(f(x_adv), y) subject to ∥δ∥ ≤ ϵ  (2)

Our benchmark will encompass both unbounded and bounded attacks to provide a comprehensive evaluation of adversarial attacks on tabular data.

2.1.3. Adversary's Knowledge
Adversarial attacks are categ...
https://arxiv.org/abs/2505.21027v1
FGSM. BIM. The Basic Iterative Method [19] extends FGSM by repeatedly applying gradient-guided perturbations. Starting with the original input x_adv^0 = x, it iteratively updates adversarial examples by ascending the loss gradient while constraining perturbations within a predefined ϵ-ball. For the i-th iteration, the upd...
https://arxiv.org/abs/2505.21027v1
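The iterative update described above (gradient-sign ascent followed by projection onto the ϵ-ball) can be sketched on a toy logistic classifier. The model f(x) = sigmoid(w·x + b), its weights, and all parameter values below are illustrative assumptions, not the benchmark's actual setup.

```python
import math

def bim_attack(x, y, w, b, eps, alpha, n_iter):
    """BIM sketch on a toy logistic classifier f(x) = sigmoid(w.x + b).
    The gradient of the cross-entropy loss w.r.t. the input is
    (p - y) * w, so each step ascends the loss via the gradient sign
    and is then clipped back into the eps-ball around the original x."""
    x_adv = list(x)
    for _ in range(n_iter):
        z = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
        p = 1.0 / (1.0 + math.exp(-z))          # model confidence for class 1
        grad = [(p - y) * wi for wi in w]        # dL/dx for cross-entropy loss
        x_adv = [xi + alpha * (1 if g > 0 else -1 if g < 0 else 0)
                 for xi, g in zip(x_adv, grad)]  # ascend the loss
        x_adv = [min(max(xa, xo - eps), xo + eps)  # project into the eps-ball
                 for xa, xo in zip(x_adv, x)]
    return x_adv
```

After the loop, the adversarial example is guaranteed to stay within ℓ∞ distance ϵ of the original input, which is exactly the bounded-attack constraint of Eq. (2).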
imperceptible. These attacks can be categorised into white-box (requiring full model access) and black-box (requiring only query-based access) methods. This review groups adversarial attack methods accordingly and presents their key strengths and weaknesses. White-box attacks leverage full access to the model’s inter...
https://arxiv.org/abs/2505.21027v1
the most influential features of a dataset without relying on internal model parameters. By focusing on high-importance features, FIGA maximises the likelihood of misclassification while minimising the number of modified features. Cartella et al. [33] extended black-box adversarial attacks into real-world fraud detecti...
https://arxiv.org/abs/2505.21027v1
0 0 11
WineQuality-Red | 1599 | 1119 | 160 | 320 | 11 | 0 | 0 | 11
phoneme | 5404 | 3782 | 541 | 1081 | 5 | 0 | 0 | 5
MiniBooNE | 130064 | 91044 | 13007 | 26013 | 50 | 0 | 0 | 50

have access to both the dataset and the predictive model’s configuration. The objective of the adversarial attack is to deceive the predictive model’s predictions. Notably, our benchmark...
https://arxiv.org/abs/2505.21027v1
research, four quantitative metrics of imperceptibility can be employed, including proximity, sparsity, sensitivity and deviation.

3.4.1. Proximity
• Definition: Average distance between inputs and generated adversarial examples.
• Purpose: Measures how close the adversarial examples are to the original inputs in terms o...
https://arxiv.org/abs/2505.21027v1
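As a minimal sketch of the proximity metric defined above, together with a companion sparsity metric (fraction of features changed), assuming plain ℓ2 distance; the function names and tolerance are illustrative assumptions:

```python
import math

def proximity(originals, adversarials):
    """Average L2 distance between original inputs and their
    adversarial counterparts (lower = more imperceptible)."""
    dists = [math.sqrt(sum((a - o) ** 2 for a, o in zip(adv, org)))
             for org, adv in zip(originals, adversarials)]
    return sum(dists) / len(dists)

def sparsity(originals, adversarials, tol=1e-12):
    """Fraction of features modified by the attack, averaged over
    examples (lower = fewer features touched)."""
    rates = [sum(abs(a - o) > tol for a, o in zip(adv, org)) / len(org)
             for org, adv in zip(originals, adversarials)]
    return sum(rates) / len(rates)
```

Both metrics are averaged over the evaluation set, so they summarise an attack's overall imperceptibility rather than a single example.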
feature space?
– RQ2.3 (Deviation): How significantly do the modified features differ from their original values?
– RQ2.4 (Sensitivity): How much do perturbations respect narrow-guard feature perturbation?
• RQ3: Whether and how can the evaluated algorithms achieve a balance between both imperceptibility and effective...
https://arxiv.org/abs/2505.21027v1
it to evaluate 10 candidate classes per iteration and operate directly on model logits, providing precise gradient information for minimal adversarial perturbations. For the C&W attack, we implement a rigorous optimisation process controlled by three key parameters: (1) 10 binary search steps to optimally scale th...
https://arxiv.org/abs/2505.21027v1
provide some inherent robustness to adversarial perturbations when handling mixed tabular data. Numerical Datasets. As shown in Figures 4, 5 and 6, our analysis of eight numerical datasets reveals more consistent patterns compared to mixed datasets, though with several dataset-specific characteristics worth noting. The...
https://arxiv.org/abs/2505.21027v1
RQ2: How imperceptible are these adversarial attack algorithms on tabular data? Based on our analysis of attack success rates across varying ϵ values, we establish a systematic approach for selecting optimal attack budgets. For each experimental setting, we identify the value at which attack success rates first reach a ...
https://arxiv.org/abs/2505.21027v1
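The budget-selection rule described above (take the smallest ϵ whose attack success rate first reaches a target) can be sketched as follows; the 0.9 threshold is an illustrative assumption, since the exact target value is truncated in the excerpt:

```python
def optimal_budget(asr_by_eps, threshold=0.9):
    """Return the smallest epsilon at which the attack success rate
    first reaches the threshold, or None if it never does.
    asr_by_eps: dict mapping epsilon -> attack success rate."""
    for eps in sorted(asr_by_eps):
        if asr_by_eps[eps] >= threshold:
            return eps
    return None
```

Scanning epsilons in ascending order guarantees the returned budget is minimal, which keeps the subsequent imperceptibility comparison fair across attacks.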
Features (c) Numerical Features
Figure 9: Sparsity results of evaluated attack methods and four ML models on the Electricity dataset.
tures. When attacked by FGSM, PGD, and BIM, LR models show moderate categorical feature sparsity (40-52%) across all mixed datasets, suggesting these models encode information differen...
https://arxiv.org/abs/2505.21027v1
rates typically ranging from 17-80%. This attack shows its most selective behaviour on the Higgs dataset (17-24%, Figure 11b) and more moderate selectivity on other datasets. Interestingly, DeepFool’s sparsity rates appear least affected by model architecture differences, maintaining relatively consistent modification ...
https://arxiv.org/abs/2505.21027v1
ℓ2-based attacks preserving significantly better proximity. The distance gap between attack types is most pronounced in LR models, where ℓ∞-based attacks produce distances around 0.89, while C&W and DeepFool achieve distances of only 0.17 and 0.10 respectively. Interestingly, all neural network architectures demonstrat...
https://arxiv.org/abs/2505.21027v1
two (out of eight) numerical datasets.
distances for ℓ∞-based attacks, but with notable attack-specific patterns. In WineQuality-White, PGD produces consistently higher ℓ2 distances compared to FGSM and BIM across all model architectures, most pronounced in the LR model (1.33 versus 0.89). However, this pattern is less evi...
https://arxiv.org/abs/2505.21027v1
other mixed datasets.
Figure 15: Deviation results of evaluated attack methods and four ML models on all three mixed datasets: (a) Adult, (b) Electricity, (c) Compas.
FGSM, PGD, and BIM generate outlier rates ranging from 0.60 to 0.88, with the TabTransformer model showing particular vulnerability to distribution shifts...
https://arxiv.org/abs/2505.21027v1
ℓ∞-based attacks, with BIM producing rates as low as 0.82 on TabTransformer. This suggests that the Red variant may have a more dispersed feature distribution that can accommodate certain perturbations while remaining in-distribution. The phoneme dataset (Figure 17a) reveals the most variable behaviour across models an...
https://arxiv.org/abs/2505.21027v1
uniform sensitivity pattern across model architectures for the same attack method. The LR model consistently shows higher sensitivity scores (0.41) for ℓ∞-based attacks compared to other models (0.13-0.19). This suggests that simpler model architectures may induce attackers to make more substantial modifications to nar...
https://arxiv.org/abs/2505.21027v1
most attack-model combinations (0.01-0.06), with only PGD occasionally producing slightly higher values (0.14). From a model architecture perspective, we observe that LR models often show either the highest or the lowest sensitivity scores depending on the dataset, suggesting that the interaction between model simpli...
https://arxiv.org/abs/2505.21027v1
auxiliary normalisation function is required. Considering that the possible range of both ℓ2 distance and sensitivity is [0, +∞), the common normalisation method (Eq. 15) is not suitable since it is hard to determine the maximum value:

x_norm = (x − x_min) / (x_max − x_min)  (15)

Practically, we select x_norm = ln(x + 1) as the normalisation function to ...
https://arxiv.org/abs/2505.21027v1
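A minimal sketch of the ln(x + 1) normalisation chosen above; unlike the min-max form of Eq. 15, it requires no maximum value, maps [0, +∞) to [0, +∞), and preserves the ordering of scores:

```python
import math

def log_normalise(x):
    """Map a value from [0, +inf) to [0, +inf) with ln(x + 1).
    Monotone, so rankings are preserved, and no maximum is
    needed (unlike min-max normalisation)."""
    return math.log(x + 1.0)
```

Because ln(0 + 1) = 0, unperturbed distances stay at zero, while large distances are compressed rather than dominating the combined score.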
quality control processes. Ineffective but Imperceptible (Low ASR, Low IS). Attacks in this quadrant make subtle changes that preserve data characteristics but fail to successfully mislead models. C&W shows a significant density in this region, indicating that it sometimes generates examples that maintain excellen...
https://arxiv.org/abs/2505.21027v1
overshooting at higher epsilon values. This occurs because BIM computes gradients with respect to the input and takes steps in that direction. As epsilon increases, these steps can become too large, causing the attack to miss optimal adversarial regions and produce less effective perturbations.
2. Decision Boundary Char...
https://arxiv.org/abs/2505.21027v1
to manipulation, attackers can craft adversarial examples that achieve their objectives while minimising perceptible changes to the data. 5.3. Evaluating the Suitability of One-Hot Encoding for Adversarial Attacks on Tabular Data Adversarial attacks in machine learning have predominantly focused on image data, which ar...
https://arxiv.org/abs/2505.21027v1
sparsity, dimensionality, and attack performance. Moreover, exploring alternative distance metrics presents a promising direction for future research. Traditional metrics like the Lp-norm may not be well-suited for the mixed data types often found in tabular datasets. Metrics such as Gower’s distance [38], which can ha...
https://arxiv.org/abs/2505.21027v1
Electricity | FTTrans | 1 | 0.1 | 0.1 | 0.1 | 0.3 | 0.3
Compas | LR | 0.5 | 0.3 | 0.3 | 0.3 | 0.1 | 1
Compas | MLP | 0.3 | 0.5 | 0.5 | 0.5 | 1 | 1
Compas | TabTrans | 1 | 0.3 | 0.3 | 0.3 | 0.1 | 0.5
Compas | FTTrans | 0.07 | 0.3 | 1 | 0.3 | 0.01 | 0.5
Higgs | LR | 1 | 0.07 | 0.07 | 0.07 | 0.3 | 0.1
Higgs | MLP | 1 | 0.07 | 0.07 | 0.07 | 0.3 | 0.1
Higgs | TabTrans | 1 | 0.07 | 0.07 | 0.07 | 0.3 | 0.1
Higgs | FTTrans | 1 | 0.07 | 0.07 | 0.0...
https://arxiv.org/abs/2505.21027v1
Dong, Q.-A. Fu, X. Yang, T. Pang, H. Su, Z. Xiao, J. Zhu, Benchmarking adversarial robustness on image classification, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 321–331. [9] M. Zheng, X. Yan, Z. Zhu, H. Chen, B. Wu, Blackboxbench: A comprehensive benchmark of black...
https://arxiv.org/abs/2505.21027v1
space, arXiv:2112.01156 (2021). [25] Y. Mathov, E. Levy, Z. Katzir, A. Shabtai, Y. Elovici, Not all datasets are born equal: On heterogeneous tabular data and adversarial examples, Knowl. Based Syst. 242 (2022) 108377. [26] A. Chernikova, A. Oprea, FENCE: feasible evasion attacks on neural networks in constrained envir...
https://arxiv.org/abs/2505.21027v1
arXiv:2505.21032v1 [cs.CV] 27 May 2025
FeatInv: Spatially resolved mapping from feature space to input space using conditional diffusion models
Nils Neukirch, Division AI4Health, Carl von Ossietzky Universität Oldenburg, nils.neukirch@uol.de
Johanna Vielhaben, Explainable Artificial Intelligence Group, Fraunhofer Heinrich-Her...
https://arxiv.org/abs/2505.21032v1
aspect of this challenge is to derive a mapping from the entirety of the feature space representation back to the input domain – beyond mere localization. Recent works proposed to leverage conditional generative models to learn such a mapping by conditioning them on feature maps [4, 6, 23]. However, these approaches eit...
https://arxiv.org/abs/2505.21032v1
x′ within the set of natural images, whose feature representation aligns as closely as possible with c_f, i.e., to learn a probabilistic mapping from feature space to input space. Previous works consider spatially pooled feature maps, whereas this work conditions on spatially resolved feature maps. Middle: We leverage a p...
https://arxiv.org/abs/2505.21032v1
original input resolution of the respective pretrained models, which varies between 224×224 and 384×384 for the considered models; see the supplementary material for a detailed breakdown. Even though the approach allows conditioning on any feature map, we restrict ourselves to the last spatially resolved feature map, i.e....
https://arxiv.org/abs/2505.21032v1
approaches for computer vision models.

4 Results

We investigate three models, ResNet50 [11] (original torchvision weights), ConvNeXt [16] and SwinV2 [15], all of which have been pretrained/finetuned on ImageNet1k. ConvNeXt and SwinV2 represent modern convolution-based and vision-transformer-based architectures, identi...
https://arxiv.org/abs/2505.21032v1
we rely on FID scores as established measures to assess sample quality. Reconstruction quality. Comparing identical models conditioned either on pooled or unpooled feature maps, unsurprisingly the unpooled models show significantly higher reconstruction quality. Samples generated by models conditioned on unpooled featur...
https://arxiv.org/abs/2505.21032v1
themselves, stressing qualitative differences between modern architectures such as ConvNeXt and SwinV2 and older model architectures such as ResNet50, which are much more pronounced than the differences between
Table 2: Cross-model evaluation: Percentage of matching of the actual predictions (top5/top1) and the predi...
https://arxiv.org/abs/2505.21032v1
seeds for the diffusion process. By comparing the resulting images, we gain insights into how the concept is expressed in input space. We call this method FeatInv-Viz and present it in Algorithm 1.

Algorithm 1: FeatInv-Viz: Visualization of concept steering in input space
Input: Model m, concept decomposition ϕ = Σ_i ϕ...
https://arxiv.org/abs/2505.21032v1
Figure 5: Reconstructions from weighted combinations of two ConvNeXt feature maps. The cosine similarity between the weighted feature map and that of the reconstruction is noted at the bottom edge of the images.

4.4 Limitations and future work

Our work is...
https://arxiv.org/abs/2505.21032v1
Vincent. Visualizing higher-layer features of a deep network. 2009. [8] P. Esser, R. Rombach, and B. Ommer. A disentangling invertible interpretation network for explaining latent representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9223–9232, 2020. [9] T. Fel, V...
https://arxiv.org/abs/2505.21032v1
A. Saparov, and Z. Yao. A practical review of mechanistic interpretability for transformer-based language models. arXiv preprint arXiv:2407.02646, 2024. [22] J. Rathjens, S. Reyhanian, D. Kappel, and L. Wiskott. Inverting transformer-based vision models. arXiv preprint arXiv:2412.06534, 2024. URL https://arxiv.org/abs/24...
https://arxiv.org/abs/2505.21032v1
tested the models on an unconditional baseline to see if MiniSD is able to generate good representations of the classes and if they can be correctly classified. To define the classes for the input prompt as precisely as possible, we use the WordNet hierarchy and create the prompt as follows: ‘a high-quality, detailed, ...
https://arxiv.org/abs/2505.21032v1
increase improves object fidelity in pooled feature maps. This trend is also roughly reflected in the corresponding cosine distances.
Figure 7: Impact of guidance scale on...
https://arxiv.org/abs/2505.21032v1
vector ϕ, MCD provides a concept decomposition ϕ = Σ_{i=1}^{n_c+1} ϕ_i, where ϕ_i is associated with concept i (of in total n_c concepts), which represents a linear subspace of the feature space, and concept n_c + 1 corresponds to the orthogonal complement of the span of all concept subspaces. The latter is necessary to achieve a co...
https://arxiv.org/abs/2505.21032v1
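A minimal sketch of the decomposition ϕ = Σ ϕ_i plus orthogonal complement, under the simplifying assumption that each concept subspace is spanned by a single orthonormal direction (MCD itself allows general linear subspaces); all names here are illustrative:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def concept_decomposition(phi, concept_basis):
    """Decompose a feature vector phi as sum_i phi_i: each phi_i is
    the projection of phi onto an orthonormal concept direction, and
    the last component is the orthogonal complement, so the parts
    sum exactly back to phi."""
    parts = []
    for u in concept_basis:
        c = dot(phi, u)
        parts.append([c * ui for ui in u])
    residual = [p - sum(part[k] for part in parts)
                for k, p in enumerate(phi)]
    parts.append(residual)  # concept n_c + 1: orthogonal complement
    return parts
```

Keeping the residual as an explicit component is what makes the decomposition complete: summing all n_c + 1 parts reconstructs the original feature vector exactly.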
RainFusion: Adaptive Video Generation Acceleration via Multi-Dimensional Visual Redundancy
Aiyue Chen1*, Bin Dong1*, Jingru Li1, Jing Lin1, Yiwu Yao1, Gongyi Wang1
1Huawei Technologies Co., Ltd

Abstract

Video generation using diffusion models is highly computationally intensive, with 3D attention in Diffusion Transfo...
https://arxiv.org/abs/2505.21036v1
respectively concentrate on portraying global spatial details with local temporal information, local spatial details with global temporal information, and high-level textural information. Profiling analysis demonstrates that the attention mechanism consumes over 80% of total computation, making it the principal perfo...
https://arxiv.org/abs/2505.21036v1
[26], HunyuanVideo-13B [14], CogVideoX-5B [39] prove the generality and effectiveness of RainFusion. The contributions of this paper include: • We present RainFusion, a novel plug-and-play framework that leverages tri-dimensional sparsity across spatial, temporal, and textural domains to optimize video diffus...
https://arxiv.org/abs/2505.21036v1
poral, and conditional redundancies for efficient attention compression. These advancements demonstrate the potential of integrating sparse attention and caching to enhance the scalability and speed of diffusion model inference. Recent Work. SVG [38] advances sparse attention research by analyzing spatial and tempo...
https://arxiv.org/abs/2505.21036v1
Temporal Head is particularly attentive to the correlation between the same local regions across different video frames. Its primary focus is on creating regional details that maintain spatial continuity. This unique property can lead to the manifestation of local sparsity within a single-frame sub-sequence and per...
https://arxiv.org/abs/2505.21036v1
are able to adaptively and efficiently determine the category of each head online with minimal computational overhead. Algorithm 1 provides a detailed introduction to the process of the Adaptive Recognition Module (ARM).

4. Experiments
4.1. Settings
Models. We evaluate RainFusion on three widely adopted video genera...
https://arxiv.org/abs/2505.21036v1
45.83 58.12 64.79 ✓ ✓ ✓ -1.05 92.87 94.65 97.47 45.83 56.58 60.91 ✓✓✓✓ -0.18 93.27 95.31 97.23 45.83 58.11 63.80CogvideoX-5B ✓ ✓ ✓ -0.42 93.08 95.40 97.27 45.83 57.17 63.36 / 94.65 95.19 99.40 41.67 56.84 57.67 ✓ ✓ ✓ -1.29 92.53 94.72 98.94 43.75 55.05 52.66 ✓✓✓✓ 1.03 93.22 94.31 99.15 52.08 55.67 57.17OpenSoraPlan-1.2...
https://arxiv.org/abs/2505.21036v1
sparse ratio by using different bandwidth and stride in the spatial-temporal head and textural head, respectively. We test different RainFusion configurations in CogVideoX-5B at speedups of 2.5× and 3.0× by setting the bandwidth of the spatial and temporal head to 0.18 and 0.13, and the textural stride to 3 and 4, respectively. As...
https://arxiv.org/abs/2505.21036v1
Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, and Robin Rombach. Stable video diffusion: Scaling latent video diffusion models to large datasets, 2023. 1 [2] Andreas Blattmann, Tim Dockhorn, Sumith Ku...
https://arxiv.org/abs/2505.21036v1
Wenqing Yu, Xinchi Deng, Yang Li, Yi Chen, Yutao Cui, Yuanbo Peng, Zhentao Yu, Zhiyu He, Zhiyong Xu, Zixiang Zhou, Zunnan Xu, Yangyu Tao, Qinglin Lu, Songtao Liu, Dax Zhou, Hongfa Wang, Yong Yang, Di Wang, Yuhong Liu, Jie Jiang, and Caesar Zhong. Hunyuanvideo: A systematic framework for large video generative models,...
https://arxiv.org/abs/2505.21036v1
Albert Pumarola, Ali Thabet, Artsiom Sanakoyeu, Arun Mallya, Baishan Guo, Boris Araya, Breena Kerr, Carleigh Wood, Ce Liu, Cen Peng, Dimitry Vengertsev, Edgar Schonfeld, Elliot Blanchard, Felix Juefei-Xu, Fraylie Nord, Jeff Liang, John Hoffman, Jonas Kohler, Kaolin Fire, Karthik Sivakumar, Lawrence Chen, Licheng Yu,...
https://arxiv.org/abs/2505.21036v1
arXiv:2505.21038v1 [math.CT] 27 May 2025
Fixed-Point Traps and Identity Emergence in Educational Feedback Systems
Faruk Alpay∗
May 28, 2025

Abstract

I present a categorical framework for analyzing fixed-point emergence in educational feedback systems, where exam-grade collapse mechanisms prevent the formation of stable...
https://arxiv.org/abs/2505.21038v1
a complexity measure h: Ob(C) → Ord (an "entropy"). A morphism f: X → Y in C is called a fold or entropy-reducing collapse if f is an epimorphism (surjective on structure) that is not invertible, and h(Y) < h(X) (so f identifies distinct substructures of X in Y). For example, in the observer-coupled collapse of [4], the perturbed id...
https://arxiv.org/abs/2505.21038v1
φ(µφ) ≅ µφ.

Theorem 4.1 (Categorical Blocking of Creativity). In an EGCS, any φ-emergence of identity is categorically blocked. That is, even for a "creativity-driven" functor φ, the required fixed-point object µφ cannot form because the exam-induced collapse intervenes. Hence creativity-driven emergence of φ's identity...
https://arxiv.org/abs/2505.21038v1
FCKT: Fine-Grained Cross-Task Knowledge Transfer with Semantic Contrastive Learning for Targeted Sentiment Analysis
Wei Chen1, Zhao Zhang2, Meng Yuan1, Kepeng Xu3 and Fuzhen Zhuang1,4†
1School of Artificial Intelligence, Beihang University, China
2School of Computer Science and Engineering, Beihang University, China
3Xidian...
https://arxiv.org/abs/2505.21040v2
terms and classify their sentiments by modeling the interactions between aspects and sentiments. A widely studied technique in this field is the task-specific feature alignment approach [Chen et al., 2024a]. This method involves two key steps: first, encoding task-specific features for both aspects and sentiments;...
https://arxiv.org/abs/2505.21040v2
as positive pairs, while tokens from unrelated aspects serve as negative pairs. This refines the model’s understanding of aspects and enhances its ability to capture subtle contextual dependencies. To mitigate the lack of supervisory signals in the sentiment classifier, we introduce an alternating learning strate...
https://arxiv.org/abs/2505.21040v2
address the unique challenges of resource-constrained TSA tasks. Contrastive Learning. Contrastive learning has recently achieved significant success in various domains [Jaiswal et al., 2020; Zhang et al., 2022; Luo et al., 2024; Zhong et al.,...
https://arxiv.org/abs/2505.21040v2
step allows the model to effectively learn aspect-specific representations by isolating each aspect, avoiding interference from others in the same sentence. It is important to note that this splitting strategy is only applied during training to facilitate the learning process. During testing, the original sentence st...
https://arxiv.org/abs/2505.21040v2
and (h_e, h_j) are negative pairs. τ is the temperature parameter, s(·,·) denotes the cosine similarity function, and E represents the set of all positive pairs in each sentence.

3.3 Fine-Grained Knowledge Transfer for SP

In the sentiment prediction task, the sentiment polarity for a given aspect is determined based on the word...
https://arxiv.org/abs/2505.21040v2
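The contrastive objective sketched above (cosine similarity, temperature τ, positive vs. negative pairs) corresponds to an InfoNCE-style loss. This toy version with one positive pair and a list of negatives is an illustration under that assumption, not the paper's exact formulation:

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull the positive pair together and push
    the anchor away from negative tokens, scaled by temperature tau."""
    pos = math.exp(cos_sim(anchor, positive) / tau)
    neg = sum(math.exp(cos_sim(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

The loss is near zero when the anchor aligns with its positive and is far from the negatives, and grows large when the alignment is reversed.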
based on Eq. (5), while the remaining samples (1 − ξ) are optimized using Eq. (8). The model parameters are optimized through the following cross-entropy loss formulation [Chen et al., 2024a]:

L_sp = − Σ_{i=1}^{N} Σ_{j=1}^{K} y_{i,j} log(ξ · ŷ(ψ)_{i,j} + (1 − ξ) · ŷ(ℓ)_{i,j}),  (9)

where N represents the number of samples, K indicates the number of s...
https://arxiv.org/abs/2505.21040v2
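Eq. (9) can be transcribed directly; y, ŷ(ψ) and ŷ(ℓ) are nested N×K lists, and the function and argument names are illustrative assumptions:

```python
import math

def mixed_cross_entropy(y, y_psi, y_ell, xi):
    """Cross-entropy of Eq. (9): each prediction is a xi-weighted
    mixture of the transfer-based output y_psi and the standard
    output y_ell; y, y_psi, y_ell are N x K nested lists."""
    loss = 0.0
    for yi, pi, qi in zip(y, y_psi, y_ell):
        for yij, pij, qij in zip(yi, pi, qi):
            loss -= yij * math.log(xi * pij + (1 - xi) * qij)
    return loss
```

Setting ξ = 1 recovers a plain cross-entropy on the transfer-based output, and ξ = 0 recovers it on the standard output, so ξ interpolates between the two training signals.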
metrics, precision, recall, and F1 score, to evaluate FCKT. Please refer to Appendix D for more metric details. Implementation Details. All the parameter details are comprehensively provided in Appendix E for further reference.

4.2 Main Experimental Results (RQ1)

We conduct experiments on three public datasets ...
https://arxiv.org/abs/2505.21040v2
, 2024] | 0.7206 0.6725 0.6957 | 0.7926 0.7952 0.7939 | 0.6164 0.5728 0.5938
PDGN‡ [Zhu et al., 2024] | 0.7025 0.6812 0.6921 | 0.8036 0.7985 0.8010 | 0.6235 0.5924 0.6076
AIFI‡ [Chen et al., 2024a] | 0.7105 0.6915 0.7009 | 0.7925 0.8034 0.7979 | 0.6342 0.5911 0.6119
LLM-based: GPT-3.5-turbo Zero-Shot‡ | 0.3462 0.4065 0.3739 | 0.6221 0.6605 0...
https://arxiv.org/abs/2505.21040v2
4.4 Ablation Study (RQ3)

To examine the contributions of various components, we delve further into FCKT and carry out ablation studies. The results are shown in Table 4. It is evident that the removal of specific modules leads to a decrease in model performance, highlighting the indispensable nature of each module. Thi...
https://arxiv.org/abs/2505.21040v2
decoration, none of it went to the chefs.
AIFI Model | FCKT (Ours)
(2) You will obtain a gift if you buy the separate ram memory.
(3) I must say I am surprised by the bad reviews of the restaurant, though the menu's font is small.
[interior decoration, pos] [chefs, pos] ✗ | ✓ [interior decoration, pos] [chefs, neg] [separate ram mem...
https://arxiv.org/abs/2505.21040v2
ure 3. Here, we demonstrate that this strategy maintains optimization consistency while enabling fine-grained modeling of aspect-specific interactions. First, when there are multiple aspects within a sentence, the original optimization objective L can be rewritten as:

L_mul = L_ae + L_cl + L_sp = − Σ_{i=1}^{N} Σ_{j=1}^{m} p^T_{s,i,j} log(p̂_s,...
https://arxiv.org/abs/2505.21040v2
achieve the objective.
• NN-CRF-Pipeline [Zhang et al., 2015]: Unlike the aforementioned model, this paradigm incorporates a shallow neural network model preceding the CRF.
• TAG-Pipeline [Hu et al., 2019]: It is a sequence tagging approach utilizing a BERT encoder.
• SPAN-Pipeline [Hu et al., 2019]: It utilizes ...
https://arxiv.org/abs/2505.21040v2
explicit reasoning steps to enhance performance further.

D. Evaluation

We employ three widely used metrics (precision, recall, and F1 score) to evaluate the effectiveness of our proposed FCKT. For aspect extraction, we focus on the F1 score as the primary evaluation metric, while accuracy is adopted for sentiment predi...
https://arxiv.org/abs/2505.21040v2
, 2020. [Kalbhor and Goyal, 2023] Shraddha Kalbhor and Dinesh Goyal. Survey on ABSA based on machine learning, deep learning and transfer learning approach. In AIP Conference Proceedings, 2023. [Kingma and Ba, 2014] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412...
https://arxiv.org/abs/2505.21040v2
fields for aspect-based sentiment analysis. EMNLP, 2016. [Wang et al., 2023] Zengzhi Wang, Qiming Xie, Zixiang Ding, Yi Feng, and Rui Xia. Is ChatGPT a good sentiment analyzer? A preliminary study. arXiv preprint arXiv:2304.04339, 2023. [Wei et al., 2022] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, F...
https://arxiv.org/abs/2505.21040v2
arXiv:2505.21045v1 [cs.AI] 27 May 2025
Large Language Model-enhanced Reinforcement Learning for Low-Altitude Economy Networking
Lingyi Cai, Ruichen Zhang, Changyuan Zhao, Yu Zhang, Jiawen Kang, Senior Member, IEEE, Dusit Niyato, Fellow, IEEE, Tao Jiang, Fellow, IEEE, and Xuemin Shen, Fellow, IEEE

Abstract—Low-Alti...
https://arxiv.org/abs/2505.21045v1
these challenges, reinforcement learning (RL) emerges as a promising solution for the LAENet [3]. Specifically, RL enables autonomous and adaptive control, allowing aerial vehicles to make time-sensitive decisions without reliance on predefined models. By continuously observing and interacting with dynamic environm...
https://arxiv.org/abs/2505.21045v1
OVERVIEW OF ENHANCING RL WITH LLM

In this section, we comprehensively review the background knowledge of RL and LLMs. Then, we highlight the potential of LLMs for enhancing RL.

A. Background of RL

RL is a foundational ML paradigm driven by a trial-and-error mechanism, where the agent observes the current state of the en...
https://arxiv.org/abs/2505.21045v1
to bridge abstract natural language instructions with motion control in robotic tasks, thereby enabling generalization of the RL agent using only raw visual observations without task-specific retraining. The sample efficiency of the proposed scheme outperformed Long Short-Term Memory (LSTM)-based baselines by 18.5%. Simila...
https://arxiv.org/abs/2505.21045v1
. LLMs enhance offline RL with reasoning and stability. LLMs generate rule-based controllers to refine RL. LLMs can enhance RL in the following three aspects:
• Generalization and Multimodal Comprehension
• Context-Aware and Reward Shaping
• Structured Reasoning and Stable Decisions [13]
https://arxiv.org/abs/2505.21045v1
its central role in bridging language input and decision-making processes within the LAENet framework. in Fig. 2. Thus, the classical RL loop with the support of LLMs ensures that the LAENet can handle dynamic, uncertain, and multimodal real-world scenarios with greater flexibility, collaboration, and generalization. S...
https://arxiv.org/abs/2505.21045v1
a set of action candidates (e.g., “hover above user cluster A,” “move south 10 meters,” “move down 3 meters”). RL agents can select the most rewarding action

def reward_func():
    reward = w * energy * Penalty_terms
    return reward

Agent Evaluation: Qualified / Unqualified. Role definition: You are good at understanding task...
https://arxiv.org/abs/2505.21045v1
agents to learn from simulated experiences. In UAV trajectory optimization tasks, states (e.g., UAV location, user distribution, and remaining energy) and actions (e.g., flight direction and speed) can be input into the LLM to generate state-action-reward sequences. Subsequently, the LLM can produce large amounts ...
https://arxiv.org/abs/2505.21045v1
Predictive Control (MPC).

Step 4: Policy Update and Knowledge Integration. Based on accumulated experience, the agent uses the LLM to update and refine its policy by integrating external knowledge (such as statistical information on terminal data transmission patterns or communication conditions in certain areas of U...
https://arxiv.org/abs/2505.21045v1
Designed Reward Function / TD3 Algorithm with LLM-Designed Reward Function
Fig. 4. Energy consumption over episodes of different algorithms with manually designed and LLM-generated reward functions.
[Figure: energy consumption versus size of packet (Mbits) for DDPG a...
https://arxiv.org/abs/2505.21045v1
contrast, we employ GPT-4o as the LLM module to design the reward function, which incorporates richer reward factors based on the position of the UAV. Fig. 4 shows the convergence performance of the DDPG and TD3 algorithms using manually designed and LLM-generated reward functions. It can be observed that algorithms with...
https://arxiv.org/abs/2505.21045v1
12, pp. 3581–3596, 2024. [6] J. Wei et al., “Chain-of-thought prompting elicits reasoning in large language models,” in Proc. NeurIPS, vol. 35, 2022, pp. 24824–24837. [7] M. Kwon et al., “Reward design with language models,” in Proc. ICLR, 2023. [8] F. Paischer et al., “History compression via language models in...
https://arxiv.org/abs/2505.21045v1
A domain adaptation neural network for digital twin-supported fault diagnosis
Zhenling Chen, CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, France
Haiwei Fu, CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, France
Zhiguo Zeng, Chair on Risk and Resilience of Complex Systems, Laboratoire Gen...
https://arxiv.org/abs/2505.21046v1
the component-level failure modes [7]. In one of our previous works [8], we developed a digital twin model of a robot and used it to generate simulated failure data for fault diagnosis. Testing data are collected from a real robot with different injected failures to test the performance of the developed model. The exist...
https://arxiv.org/abs/2505.21046v1
test data by randomly simulating 90 trajectories following the same protocols. In the original work [8], an LSTM was trained on the simulation dataset and applied to diagnose the failures on the real robot. The results showed that, although the trained model performed well on the validation set (separated from training da...
https://arxiv.org/abs/2505.21046v1
on average. Lu et al. developed a domain adaptation combined with deep convolutional generative adversarial network (DADCGAN)-based methodology for diagnosing DC arc faults [29]. With DADCGAN, a robust and reliable fault diagnosis scheme based on a lightweight CNN-based classifier can be achieved for the target domain. In...
https://arxiv.org/abs/2505.21046v1
parameters θ_d of the domain classifier have been trained to discriminate between the two feature distributions. In training to obtain domain-invariant features, we seek the parameters θ_f of the feature extractor that maximize the loss of the domain classifier (by making the two feature distributions as similar as pos...
https://arxiv.org/abs/2505.21046v1
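The adversarial training described above can be illustrated on a deliberately tiny 1-D model: the domain classifier's parameters descend their loss while the feature parameters follow the reversed gradient (the usual gradient-reversal trick in DANNs). All names and values below are toy assumptions, not the paper's architecture.

```python
import math

def dann_step(theta_f, theta_d, x, d, lam=1.0, lr=0.1):
    """One adversarial update on a toy 1-D DANN: feature h = theta_f * x,
    domain classifier p = sigmoid(theta_d * h).  theta_d descends the
    domain loss (better discrimination) while theta_f ascends it via
    the gradient-reversal factor -lam (domain-invariant features)."""
    h = theta_f * x
    p = 1.0 / (1.0 + math.exp(-theta_d * h))
    g = p - d                 # d(loss)/d(logit) for binary cross-entropy
    grad_d = g * h            # gradient w.r.t. theta_d
    grad_f = g * theta_d * x  # gradient w.r.t. theta_f (through h)
    theta_d -= lr * grad_d            # minimise the domain loss
    theta_f -= lr * (-lam * grad_f)   # reversed gradient: maximise it
    return theta_f, theta_d
```

The two parameters are pushed in opposing directions with respect to the same domain loss, which is the minimax game that drives the feature distributions of source and target together.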
Accuracy = (Number of Correct Predictions) / (Total Number of Predictions) = (TP + TN) / (TP + TN + FP + FN)  (5)

where TP, TN, FP, and FN represent the number of true positives, true negatives, false positives, and false negatives, respectively.

b) F1 Score: The F1 Score is the harmonic mean of precision and recall:

F1 Score = 2 · (Precision · Recall) / (Precision + Re...
https://arxiv.org/abs/2505.21046v1
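A direct transcription of Eq. (5) and the F1 definition above, computed from the four confusion-matrix counts:

```python
def accuracy(tp, tn, fp, fn):
    """Eq. (5): correct predictions over all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision (tp / (tp + fp)) and
    recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Note that true negatives do not enter the F1 score, which is why F1 is preferred over accuracy when the healthy class dominates the fault classes.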
states where one motor has steady-state errors. When the simulation model is not accurate, the generated training data make it even more difficult to distinguish between the healthy and steady-state error states. The DANN, on the other hand, performs well in classifying the healthy state. This is because, after the domain ad...
https://arxiv.org/abs/2505.21046v1
Systems (Chaire EDF, Orange and SNCF). Haiwei Fu and Zhenling Chen participated in this project as a lab project in their master curriculum at CentraleSupélec. The authors would like to thank Dr. Myriam Tami for managing this project.

TABLE I: Performance Comparison of Baseline Models
Model | Training Accuracy (%) | Validation ...
https://arxiv.org/abs/2505.21046v1
“Long short-term memory,” Neural Computation, MIT Press, 1997. [11] A. Vaswani, “Attention is all you need,” Advances in Neural Information Processing Systems, 2017. [12] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel, “Handwritten digit recognition with a back-propagation network,”...
https://arxiv.org/abs/2505.21046v1
D. Youn, “A new parameter repurposing method for parameter transfer with small dataset and its application in fault diagnosis of rolling element bearings,” IEEE Access, vol. 7, pp. 46917–46930, 2019. [27] S. Shao, S. McAleer, R. Yan, and P. Baldi, “Highly accurate machine fault diagnosis using deep transfer learning,”...
https://arxiv.org/abs/2505.21046v1
arXiv:2505.21055v1 [cs.AI] 27 May 2025
Agent-Environment Alignment via Automated Interface Generation
Kaiming Liu1, Xuanyu Lei1,2, Ziyue Wang1, Peng Li2∗, Yang Liu1,2∗
1Department of Computer Science and Technology, Tsinghua University, Beijing, China
2Institute for AI Industry Research (AIR), Tsinghua University, Bei...
https://arxiv.org/abs/2505.21055v1
47]. In these tasks, agents typically interact with the environment through manually designed interfaces such as predefined action spaces and interaction rules. While substantial efforts have been devoted to improving agents and environments, comparatively little attention has been paid to the interface between them.
https://arxiv.org/abs/2505.21055v1
customization poses a significant challenge to the field: it compromises the direct comparability across different approaches. Moreover, these modifications are often tailored to the specific methods proposed, making it difficult for the research community to determine whether performance variations stem from novel agen...
https://arxiv.org/abs/2505.21055v1
influence the performance of LLM-based agents [51, 37]. SWE-agent [54] proposes agent-computer interfaces (ACI) for coding agents, emphasizing interface optimization. Following this research line, recent efforts aim to improve generalization [1, 36, 32] and enhance interfaces with auxiliary tools [6, 16, 24, 27, 53]...
https://arxiv.org/abs/2505.21055v1
disrupt the intended progress of the agent toward the goal, even if the action a_t is logically coherent under the agent’s interpretation of I and prior observation.

3.2 ALIGN overview

To alleviate the agent-environment misalignment, we introduce ALIGN, a framework that automatically generates aligned i...
https://arxiv.org/abs/2505.21055v1
such as "You need to go to drawer 1 before examining it" when the agent attempts to examine a receptacle without first moving to it.

Experiment Verification Example
Optimizer: <thought>...</thought> <action>init_simulator(task_id="4-293")</action>
Experiment: ...
Optimizer: <thought>Now I will simulate an invalid "exam...
https://arxiv.org/abs/2505.21055v1