{
"File Number": "1041",
"Title": "In and Out-of-Domain Text Adversarial Robustness via Label Smoothing",
| "5 Limitations": "One limitation of our work is that we focus on robustness of pre-trained transformer language models against word-level adversarial attacks, which is the most common setting in this area. Future work could extend this empirical study to other types of attacks (for example, character-level and sentence-level attacks) and for diverse types of architectures. Further, it will be very interesting to theoretically understand how label smoothing provides (1) the implicit robustness to text adversarial attacks and (2) mitigates over-confident predictions on the adversarially attacked examples.", |
| "abstractText": "Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial attacks, where the predictions of a model can be drastically altered by slight modifications to the input (such as synonym substitutions). While several defense techniques have been proposed, and adapted, to the discrete nature of text adversarial attacks, the benefits of general-purpose regularization methods such as label smoothing for language models, have not been studied. In this paper, we study the adversarial robustness provided by label smoothing strategies in foundational models for diverse NLP tasks in both in-domain and out-of-domain settings. Our experiments show that label smoothing significantly improves adversarial robustness in pre-trained models like BERT, against various popular attacks. We also analyze the relationship between prediction confidence and robustness, showing that label smoothing reduces over-confident errors on adversarial examples.", |
| "1 Introduction": "Neural networks are vulnerable to adversarial attacks: small perturbations to the input ,which do not fool humans (Szegedy et al., 2013; Goodfellow et al., 2014; Madry et al., 2017). In NLP tasks, previous studies (Alzantot et al., 2018; Jin et al., 2019; Li et al., 2020; Garg and Ramakrishnan, 2020) demonstrate that simple word-level text attacks (synonym substitution, word insertion/deletion) easily fool state-of-the-art models, including pre-trained transformers like BERT (Devlin et al., 2019; Wolf et al., 2020). Further, it has recently been shown models are overconfident1 on examples which are easy to attack (Qin et al., 2021) and indeed, such over-confident predictions plague\n∗The first two authors contributed equally to this paper. Most of the work done while Soham Dan was at the University of Pennsylvania.\n1Confidence on an example is the highest softmax score of the classifier prediction on that example.\nmuch of modern deep learning (Kong et al., 2020; Guo et al., 2017; Nguyen et al., 2015; Rahimi et al., 2020). Label smoothing is a regularization method that has been proven effective in a variety of applications, and modalities (Szegedy et al., 2016; Chorowski and Jaitly, 2017; Vaswani et al., 2017). Importantly, it has been shown to reduce overconfident predictions and produce better confidence calibrated classifiers (Muller et al., 2019; Zhang et al., 2021; Dan and Roth, 2021; Desai and Durrett, 2020; Huang et al., 2021; Liu and JaJa, 2020).\nIn this work, we focus on the question: does label smoothing also implicitly help in adversarial robustness? While there has been some investigation in this direction for adversarial attacks in computer vision, (Fu et al., 2020; Goibert and Dohmatob, 2019; Shafahi et al., 2019), there is a gap in understanding of whether it helps with discrete, text adversarial attacks used against NLP systems. With the increasing need for robust NLP models in safety-critical applications and a lack of generic robustness strategies,2 there is a need to understand inherent robustness properties of popular label smoothing strategies, and the interplay between confidence and robustness of a model.\nIn this paper, we extensively study standard label smoothing and its adversarial variant, covering robustness, prediction confidence, and domain transfer properties. We observe that label smoothing provides implicit robustness against adversarial examples. Particularly, we focus on pre-trained transformer models and test robustness under various kinds of black-box and white-box word-level adversarial attacks, in both in-domain and out-ofdomain scenarios. Our experiments show that label smoothing (1) improves robustness to text adversarial attacks (both black-box and white-box), and (2) mitigates over-confident errors on adversarial textual examples. Analysing the adversarial exam-\n2which are flexible, simple and not over-specialized to very specific kinds of text adversarial attacks.\n657\nples along various quality dimensions reveals the remarkable efficacy of label smoothing as a simple add-on robustness and calibration tool.", |
| "2.1 Text Adversarial Attacks": "Our experiments evaluate the robustness of text classification models under three state-of-the-art text adversarial attacks TextFooler (black-box), BAE (black-box) and SemAttack (white-box), described below.3For a particular victim NLP model and a raw text input, the attack produces semantically-similar adversarial text as output. Importantly, only those examples are attacked, which are originally correctly predicted by the victim model. The attacks considered are word-level, i.e. they replace words in a clean text with their synonyms to maintain the meaning of the clean text, but change the prediction of the victim models.\n• TextFooler (TF): (Jin et al., 2019) proposes an attack which determines the word importance in a sentence, and then replaces the important words with qualified synonyms.\n• BAE: (Garg and Ramakrishnan, 2020) uses masked pre-trained language models to generate replacements for the important words until the victim model’s prediction is incorrect.\n• SemAttack (SemAtt): (Wang et al., 2022) introduces an attack to search perturbations in the contextualized embedding space by formulating an optimization problem as in (Carlini and Wagner, 2016). We specifically use the white-box word-level version of this attack.", |
| "2.2 Label Smoothing": "Label Smoothing is a modified fine-tuning procedure to address overconfident predictions. It introduces uncertainty to smoothen the posterior distribution over the target labels. Label smoothing has been shown to implicitly calibrate neural networks on out-of-distribution data, where calibration measures how well the model confidences are aligned with the empirical likelihoods (Guo et al., 2017).\n• Standard Label Smoothing (LS) (Szegedy et al., 2013; Muller et al., 2019) constructs\n3The black-box attacks keep querying the model with its attempts until the victim model is fooled while the white-box attack has access to the gradients to the model. Further details of the attacks are in (Jin et al., 2019; Garg and Ramakrishnan, 2020; Wang et al., 2022).\na new target vector (yLSi ) from the one-hot target vector (yi), where yLSi = (1 − α)yi + α/K for a K class classification problem. α is a hyperparameter selection and its range is from 0 to 1.\n• Adversarial Label Smoothing (ALS) (Goibert and Dohmatob, 2019) constructs a new target vector (yALSi ) with a probability of 1− α on the target label and α on the label to which the classification model assigns the minimum softmax scores, thus introducing uncertainty.\nFor both LS and ALS, the cross entropy loss is subsequently minimized between the model predictions and the modified target vectors yLSi , y ALS i .", |
| "3 Experiments": "In this section, we present a thorough empirical evaluation on the effect of label smoothing on adversarial robustness for two pre-trained transformer models: BERT and its distilled variant, dBERT, which are the victim models. 4 We attack the victim models using TF, BAE, and SemAttack. For each attack, we present results on both the standard models and the label-smoothed models on various classification tasks: text classification and natural language inference. For each dataset we evaluate on a randomly sampled subset of the test set (1000 examples), as done in prior work (Li et al., 2021; Jin et al., 2019; Garg and Ramakrishnan, 2020). We evaluate on the following tasks, and other details about the setting is in Appendix A.8:\n• Text Classification: We evaluate on movie review classification using Movie Review (MR) (Pang and Lee, 2005) and Stanford Sentiment Treebank (SST2) (Socher et al., 2013) (both binary classification), restaurant review classification: Yelp Review (Zhang et al., 2015a) (binary classification), and news category classification: AG News (Zhang et al., 2015b) (having the following four classes: World, Sports, Business, Sci/Tech).\n• Natural Language Inference: We investigate two datasets for this task: the Stanford Natural Language Inference Corpus (SNLI) (Bowman et al., 2015) and the Multi-Genre Natural Language Inference corpus (MNLI) (Williams et al., 2018), both having three classes. For MNLI, our work only evaluates performance\n4Additional results on more datasets, models, other attacks and α values, are presented in the Appendix.\nSST-2 CleanAcc (↑) Attack Success Rate (↓) Adv\nConf (↓) BERT(α) 0 0.45 0 0.45 0 0.45\nTF 91.97 92.09 96.38 88.92 78.43 63.62 BAE 91.97 92.09 57.11 53.42 86.92 68.35 SemAtt 91.97 92.09 86.41 54.05 80.12 64.55 dBERT(α) 0 0.45 0 0.45 0 0.45\nTF 89.56 89.68 96.29 89.77 76.28 61.60 BAE 89.56 89.68 59.28 56.52 83.55 66.11 SemAtt 89.56 89.68 91.68 69.69 78.93 62.42\nAG_news CleanAcc (↑) Attack Success Rate (↓) Adv\nConf (↓) BERT(α) 0 0.45 0 0.45 0 0.45\nTF 94.83 94.67 88.26 77.47 59.02 42.46 BAE 94.83 94.67 74.83 62.82 60.66 43.98 SemAtt 94.83 94.67 52.65 30.49 62.32 44.99 dBERT(α) 0 0.45 0 0.45 0 0.45\nTF 94.73 94.47 90.11 74.52 57.60 41.40 BAE 94.73 94.47 77.79 63.65 60.01 42.74 SemAtt 94.73 94.47 52.07 34.05 60.40 43.27\nYelp CleanAcc (↑) Attack Success Rate (↓) Adv\nConf (↓) BERT(α) 0 0.45 0 0.45 0 0.45\nTF 97.73 97.7 99.32 92.90 64.85 55.36 BAE 97.73 97.7 55.35 45.14 68.28 57.38 SemAtt 97.73 97.7 93.55 36.17 74.53 60.24 dBERT(α) 0 0.45 0 0.45 0 0.45\nTF 97.47 97.4 99.45 93.36 61.75 54.63 BAE 97.47 97.4 58.14 45.59 64.27 57.14 SemAtt 97.47 97.4 97.37 43.92 71.34 60.57\nSNLI CleanAcc (↑) Attack Success Rate (↓) Adv\nConf (↓) BERT(α) 0 0.45 0 0.45 0 0.45\nTF 89.56 89.23 96.5 96.15 68.27 52.61 BAE 89.56 89.23 74.95 74.82 76.13 57.42 SemAtt 89.56 89.23 99.11 91.94 75.41 58.01 dBERT(α) 0 0.45 0 0.45 0 0.45\nTF 87.27 87.1 98.12 96.86 65.19 50.80 BAE 87.27 87.1 74.08 72.91 72.89 55.49 SemAtt 87.27 87.1 98.43 92.84 71.17 54.96\nTable 1: Comparison of standard models and models fine-tuned with standard label smoothing techniques (LS) against various attacks for in-domain data. We show clean accuracy, attack success rate and average confidence on successful adversarial texts. For each dataset, the left column are the results for standard model, and the right column are for LS models where α denotes the label smoothing factor (α=0: no LS). ↑ (↓) denotes higher (lower) is better respectively. 
dBERT denotes the distilBERT model.\non the matched genre test-set in the OOD setting presented in subsection 3.2 .", |
| "3.1 In-domain Setting": "In the in-domain setting (iD), the pre-trained transformer models are fine-tuned on the train-set for each task and evaluated on the corresponding testset. For each case, we report the clean accuracy, the adversarial attack success rate (percentage of misclassified examples after an attack) and the average confidence on successfully attacked examples (on which the model makes a wrong prediction).5 Table 1 shows the performance of BERT and dBERT, with and without label-smoothing. We choose label smoothing factor α = 0.45 for standard labelsmoothed models in our experiments.\nWe see that label-smoothed models are more robust for every adversarial attack across different datasets in terms of the attack success rate, which is a standard metric in this area (Li et al., 2021; Lee et al., 2022). Additionally, the higher confidence of the standard models on the successfully attacked examples indicates that label smoothing helps mitigate overconfident mistakes in the adversarial setting. Importantly, the clean accuracy remains almost unchanged in all the cases. Moreover, we observe that the models gain much more robustness from LS under white-box attack, compared\n5Details of each metric are presented in Appendix A.2.\nto the black-box setting. We perform hyperparameter sweeping for the label smoothing factor α to investigate their impact to model accuracy and adversarial robustness. Figure 1 shows that the attack success rate gets lower as we increase the label smooth factor when fine-tuning the model while the test accuracy is comparable6. However, when the label smoothing factor is larger than 0.45, there is no further improvement on adversarial robustness in terms of attack success rate. Automatic search for an optimal label smoothing factor and its theoretical analysis is important future work.\nWe also investigate the impact of adversarial label smoothing (ALS) and show that the adversarial label smoothed methods also improves model’s robustness in Table 2.\n6More results for different α values are in Appendix A.9", |
| "3.2 Out-of-Domain setting": "We now evaluate the benefits of label smoothing for robustness in the out-of-domain (OOD) setting, where the pre-trained model is fine-tuned on a particular dataset and is then evaluated directly on a different dataset, which has a matching label space. Three examples of these that we evaluate on are the Movie Reviews to SST-2 transfer, the SST-2 to Yelp transfer, and the SNLI to MNLI transfer.\nhelps produce more robust models in the OOD setting although with less gain compared to iD setting. This is a challenging setting, as evidenced by the significant performance drop in the clean accuracy as compared to the in-domain setting. We also see that the standard models make over-confident errors on successfully attacked adversarial examples, when compared to label-smoothed models.", |
| "3.3 Qualitative Results": "In this section, we try to understand how the generated adversarial examples differ for label smoothed and standard models. First we look at some qualitative examples: in Table 4, we show some examples (clean text) for which the different attack schemes fails to craft an attack for the label smoothed model but successfully attacks the standard model.\nWe also performed automatic evaluation of the quality of the adversarial examples for standard and label smoothed models, adopting standard metrics from previous studies (Jin et al., 2019; Li et al., 2021). Ideally, we want the adversarial sentences to be free of grammar errors, fluent, and semantically similar to the clean text. This can be quantified using metrics such as grammar errors, perplexity, and similarity scores (compared to the clean text). The reported scores for each metric are computed over only the successful adversarial examples, for each attack and model type.7\n7Additional details can be found in AppendixA.3.\nTable 5 shows that the quality of generated adversarial examples on label smoothed models is worse than those on standard models for different metrics, suggesting that the adversarial sentences generated by standard models are easier to perceive. This further demonstrates that label smoothing makes it harder to find adversarial vulnerabilities.", |
| "4 Conclusion": "We presented an extensive empirical study to investigate the effect of label smoothing techniques on adversarial robustness for various NLP tasks, for various victim models and adversarial attacks. Our results demonstrate that label smoothing imparts implicit robustness to models, even under domain shifts. This first work on the effects of LS for text adversarial attacks, complemented with prior work on LS and implicit calibration (Desai and Durrett, 2020; Dan and Roth, 2021), is an important step towards developing robust, reliable models. In the future, it would be interesting to explore the combination of label smoothing with other regularization and adversarial training techniques to further enhance the adversarial robustness of NLP models.", |
| "6 Ethics Statement": "Adversarial examples present a severe risk to machine learning systems, especially when deployed in real-world risk sensitive applications. With the ubiquity of textual information in real-world applications, it is extremely important to defend against adversarial examples and also to understand the robustness properties of commonly used techniques like Label Smoothing. From a societal perspective, by studying the effect of this popular regularization strategy, this work empirically shows that it helps robustness against adversarial examples in\nin-domain and out-of-domain scenarios, for both white-box and black-box attacks across diverse tasks and models. From an ecological perspective, label smoothing does not incur any additional computational cost over standard fine-tuning emphasizing its efficacy as a general-purpose tool to improve calibration and robustness.", |
| "Acknowledgements": "Research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-20-1-0080. This work was supported by Contract FA8750-19-2-0201 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense, the Army Research Office or the U.S. Government. This research was also supported by a gift from AWS AI for research in Trustworthy AI.", |
| "A Appendix": "• A.1 Pictorial Overview of the Adversarial Attack Framework\n• A.2 Description of the Evaluation Metrics\n• A.3 Details of Automatic Attack Evaluation\n• A.4 Additional results on Movie Review Dataset\n• A.5 Additional white-box attack on labelsmoothed models\n• A.6 Additional results for α = 0.1\n• A.7 Additional results on ALBERT model\n• A.8 Dataset overview and expertiment details\n• A.9 Attack success rate versus label smoothing factors for different attacks (TextFooler and SemAttack)\n• A.10 Average number of word change versus Confidence\nA.1 Overview of the Framework\nA.2 Evaluation Metrics\nThe followings are the details of evaluation metrics from previous works (Lee et al., 2022; Li et al., 2021): Clean accuracy = # of correctly predicted clean examples# of clean examples Attack Succ. Rate = # of successful adversarial examples# of correctly predicted clean examples where successful adversarial examples are derived from correctly predicted examples Adv Conf = sum of confidence of successful adv examples# of successful adversarial examples\nA.3 Attack evaluation\nWe performed automatic evaluation of adversarial attacks against standard models and label smoothed models following previous studies (Jin et al., 2019; Li et al., 2021). Following are the details of the metrics we used in Table 5: Perplexity evaluates the fluency of the input using language models. We use GPT-2 (Radford et al., 2019) to compute perplexity as in (Li et al., 2021) . Similarity Score determines the similarity between two sentences. We use Sentence Transformers (Reimers and Gurevych, 2019) to compute sentence embeddings and then calculate cosine similarity score between the clean examples and the corresponding adversarially modified examples. Grammar Error The average grammar error increments between clean examples and the corresponding adversarially modified example.8\nA.4 Additional results on Movie Review Dataset\nHere we provide results of movie review datasets (Pang and Lee, 2005) under in-domain setting.\nA.5 Additional results on an additional white-box attack\nIn this section, we use another recent popular whitebox attack named Gradient-based Attack (Guo et al., 2021). This is a gradient-based approach that searches for a parameterized word-level adversarial attack distribution, and then samples adversarial examples from the distribution. We run this attack on standard and label smoothed BERT models and the results are listed below.\nWe observe that the label smoothing also help with adversarial robustness against this attack\n8we use https://pypi.org/project/ language-tool-python/ to compute grammar error.\nacross four datasets under iD setting. The results also show that, similar to SemAttack, the gradbased attack benefits more from label smoothing compared to black-box attacks like TextFooler and BAE.\nA.6 Additional results of α = 0.1\nTable 8 and 9 are the additional results to show when label smoothing α = 0.1, how the adversarial robustness of fine-tuned language models changes under iD and OOD scenarios.\nTable 10 are the additional results for adversarial label smoothing α = 0.1.\nA.7 Additional results on ALBERT\nIn this section, we include experiment results for standard ALBERT and label smoothed ALBERT in Table 11. 
We observe that the label smoothing technique also improves adversarial robustness of ALBERT model across different datasets.\nA.8 Dataset Overview and Experiments Details\nWe use Huggingface (Wolf et al., 2020) to load the dataset and to fine-tune the pre-trained models. All models are fine-tuned for 3 epochs using AdamW optimizer (Loshchilov and Hutter, 2017) and the learning rate starts from 5e− 6. The training and attacking are run on an NVIDIA Quadro RTX 6000 GPU (24GB). For both BAE and Textfooler attack, we use the implementation in TextAttack (Morris et al., 2020) with the default hyper-parameters (Except for AG_news, we relax the similarity threshld from 0.93 to 0.7 when using BAE attack). The SemAttack is implemented by (Wang et al., 2022) while the generating contextualized embedding space is modified from (Reif et al., 2019). The reported numbers are the average performance over 3 random runs of the experiment for iD setting, and the standard deviation is less than 2%.\nA.9 Attack success rate versus label smoothing factors\nAs mentioned in Section 3.1, we plot the attack success rate of BAE attack versus the label smoothing factors. Here, we plot the results for the TextFooler and SemAttack in Figure 3 and 4, and observe the same tendency as we discussed above.\nWe also plot the attack success rate of\nBAE/TextFooler attack versus the adversarial label smoothing factors in Figure 5 and 6.\nWe additionally plot the clean accuracy versus the label smoothing factor in Figure 7, and find out that there is not much drop in clean accuracy with increasing the label smoothing factors.\nA.10 Average number of word change versus Confidence\nWord change rate is defined as the ratio between the number of word replaced after attack and the\ntotal number of words in the sentence. Here we plot the bucket-wise word change ratio of adversarial attack versus confidence, and observe that the word change rate for high-confident examples are higher for label smoothed models compared to standard models in most cases. This indicates that it is more difficult to attack label smoothed text classification models. Also note that there is the word change rate is zero because there is no clean texts fall into those two bins.\nMoreover, we bucket the examples based on\nthe confidence scores, and plot the bucket-wise attack success rate (of the BAE attack on the Yelp dataset) versus confidence in Figure 10 and Figure 11. We observe that the label smoothing technique improves the adversarial robustness for high confidence score samples significantly. In future work, we plan to investigate the variations of robustness in label-smoothed models as a function of the model size.\nACL 2023 Responsible NLP Checklist", |
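For completeness, the snippet below is a minimal sketch of the fine-tuning recipe described in A.8 using the HuggingFace Trainer, whose built-in label_smoothing_factor argument implements standard LS (ALS would require a custom loss); the dataset, checkpoint, and batch size are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of the fine-tuning setup in Appendix A.8: 3 epochs, AdamW
# (the Trainer default), learning rate starting from 5e-6, with standard label
# smoothing applied through TrainingArguments.label_smoothing_factor.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("ag_news")  # 4-class news topic classification
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-agnews-ls",
    num_train_epochs=3,                 # as in Appendix A.8
    learning_rate=5e-6,                 # starting learning rate in A.8
    per_device_train_batch_size=32,     # assumed batch size (not stated in the paper)
    label_smoothing_factor=0.45,        # standard LS with alpha = 0.45 (Section 3.1)
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["test"])
trainer.train()
```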
| "3 A1. Did you describe the limitations of your work?": "Section 5.", |
| "3 A2. Did you discuss any potential risks of your work?": "Section 6.", |
| "3 A3. Do the abstract and introduction summarize the paper’s main claims?": "Yes. Abstract and section 1.\n7 A4. Have you used AI writing assistants when working on this paper? Left blank.\nB 3 Did you use or create scientific artifacts? Section 3, and appendix A.8.", |
| "3 B1. Did you cite the creators of artifacts you used?": "Section 3 and appendix A.8.\nB2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank.", |
| "3 B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided": "that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Section 3.\nB4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank.\nB5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank.\n3 B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Appendix A.8\nC 3 Did you run computational experiments? Section 3.", |
| "3 C1. Did you report the number of parameters in the models used, the total computational budget": "(e.g., GPU hours), and computing infrastructure used? Appendix A.8\nThe Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.", |
| "3 C2. Did you discuss the experimental setup, including hyperparameter search and best-found": "hyperparameter values? Section 3.1 and Appendix A.8.", |
| "3 C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary": "statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 3 and Appendix A.8.", |
| "3 C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did": "you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 3 and Appendix A.3, A.8.\nD 7 Did you use human annotators (e.g., crowdworkers) or research with human participants? Left blank.\nD1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.\nD2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants’ demographic (e.g., country of residence)? No response.\nD3. Did you discuss whether and how consent was obtained from people whose data you’re using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.\nD4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.\nD5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response." |
}