Title: AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework
URL Source: https://arxiv.org/html/2310.01818
Xilie Xu 1, Jingfeng Zhang 2,3, Mohan Kankanhalli 1
1 School of Computing, National University of Singapore
2 School of Computer Science, University of Auckland
3 RIKEN Center for Advanced Intelligence Project (AIP)
###### Abstract
Robust Fine-Tuning (RFT) is a low-cost strategy to obtain adversarial robustness in downstream applications, without requiring significant computational resources or collecting large amounts of data. This paper uncovers an issue with existing RFT, where optimizing both adversarial and natural objectives through the feature extractor (FE) yields significantly divergent gradient directions. This divergence introduces instability into the optimization process, thereby hindering the attainment of adversarial robustness and rendering RFT highly sensitive to hyperparameters. To mitigate this issue, we propose a low-rank (LoRa) branch that disentangles RFT into two distinct components: optimizing natural objectives via the LoRa branch and adversarial objectives via the FE. In addition, we introduce heuristic strategies for automating the scheduling of the learning rate and the scalars of the loss terms. Extensive empirical evaluations demonstrate that our proposed automated RFT disentangled via the LoRa branch (AutoLoRa) achieves new state-of-the-art results across a range of downstream tasks. AutoLoRa holds significant practical utility, as it automatically converts a pre-trained FE into an adversarially robust model for downstream tasks without the need to search for hyperparameters.
1 Introduction
--------------
With the emergence of foundation models(Bommasani et al., [2021](https://arxiv.org/html/2310.01818#bib.bib1)), fine-tuning the pre-trained feature extractor (FE) has become a low-cost strategy to obtain superior performance in downstream tasks. Notably, GPT-3(Brown et al., [2020](https://arxiv.org/html/2310.01818#bib.bib2)) can achieve state-of-the-art (SOTA) performance on GLUE benchmarks(Wang et al., [2018](https://arxiv.org/html/2310.01818#bib.bib29)) via parameter-efficient fine-tuning(Hu et al., [2021](https://arxiv.org/html/2310.01818#bib.bib16)). Due to the ubiquitous existence of adversarial attacks(Goodfellow et al., [2014](https://arxiv.org/html/2310.01818#bib.bib11); Madry et al., [2018](https://arxiv.org/html/2310.01818#bib.bib25)), adopting pre-trained FEs to safety-critical downstream areas such as medicine(Buch et al., [2018](https://arxiv.org/html/2310.01818#bib.bib3)) and autonomous cars(Kurakin et al., [2018](https://arxiv.org/html/2310.01818#bib.bib20)) necessitates the strategy of robust fine-tuning(Hendrycks et al., [2019](https://arxiv.org/html/2310.01818#bib.bib14)) that can yield adversarial robustness in downstream applications.
Robust fine-tuning (RFT)(Hendrycks et al., [2019](https://arxiv.org/html/2310.01818#bib.bib14)) that contains an adversarial objective to learn features of adversarial data(Madry et al., [2018](https://arxiv.org/html/2310.01818#bib.bib25)) can gain adversarial robustness in downstream tasks. To further improve generalization, vanilla RFT (formulated in Eq.[1](https://arxiv.org/html/2310.01818#S3.E1 "1 ‣ Vanilla RFT (Zhang et al., 2019). ‣ 3.1 Preliminaries ‣ 3 A Closer Look at Vanilla RFT and TWINS ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework"), shown in the left panel of Figure[0(c)](https://arxiv.org/html/2310.01818#S1.F0.sf3 "0(c) ‣ Figure 1 ‣ 1 Introduction ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework")) optimizes both adversarial and natural objectives to learn the features of adversarial and natural data simultaneously via the FE(Zhang et al., [2019](https://arxiv.org/html/2310.01818#bib.bib38); Shafahi et al., [2019](https://arxiv.org/html/2310.01818#bib.bib27); Jiang et al., [2020](https://arxiv.org/html/2310.01818#bib.bib17)). Recently, TWINS(Liu et al., [2023](https://arxiv.org/html/2310.01818#bib.bib24)) (formulated in Eq.[2](https://arxiv.org/html/2310.01818#S3.E2 "2 ‣ TWINS (Liu et al., 2023). ‣ 3.1 Preliminaries ‣ 3 A Closer Look at Vanilla RFT and TWINS ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework")) further enhances performance in downstream tasks by incorporating vanilla RFT with a dual batch normalization (BN)(Xie et al., [2020](https://arxiv.org/html/2310.01818#bib.bib32); Wang et al., [2020](https://arxiv.org/html/2310.01818#bib.bib30)) module. TWINS takes advantage of extra information from the pre-trained FE (i.e., pre-trained statistics in a frozen BN) via the dual BN module, thus improving performance.
However, we empirically find that vanilla RFT and TWINS have a common issue, where optimizing both adversarial and natural objectives via the FE leads to significantly divergent gradient directions. As shown in Figure[0(a)](https://arxiv.org/html/2310.01818#S1.F0.sf1 "0(a) ‣ Figure 1 ‣ 1 Introduction ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework"), the cosine similarity between the gradient of natural and adversarial objective w.r.t. the FE, dubbed as gradient similarity, achieved by vanilla RFT (blue lines) and TWINS (green lines) is very low. It indicates that optimizing both natural and adversarial objectives through the FE can result in a divergent and even conflicting optimization direction.

Figure 1: Figure[0(a)](https://arxiv.org/html/2310.01818#S1.F0.sf1 "0(a) ‣ Figure 1 ‣ 1 Introduction ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework") shows the cosine similarity between the gradients of natural and adversarial objective w.r.t. the feature extractor (FE) (dubbed as gradient similarity) on DTD-57(Cimpoi et al., [2014](https://arxiv.org/html/2310.01818#bib.bib7)). Figure[0(b)](https://arxiv.org/html/2310.01818#S1.F0.sf2 "0(b) ‣ Figure 1 ‣ 1 Introduction ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework") shows the robust test accuracy evaluated via PGD-10 on DTD-57. Extra empirical results are shown in Figures[1(a)](https://arxiv.org/html/2310.01818#A1.F1.sf1 "1(a) ‣ Figure 2 ‣ A.2 Extensive Results ‣ Appendix A Extensive Experimental Details ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework") and[1(b)](https://arxiv.org/html/2310.01818#A1.F1.sf2 "1(b) ‣ Figure 2 ‣ A.2 Extensive Results ‣ Appendix A Extensive Experimental Details ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework") (Appendix[A.2](https://arxiv.org/html/2310.01818#A1.SS2 "A.2 Extensive Results ‣ Appendix A Extensive Experimental Details ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework")). Figure[0(c)](https://arxiv.org/html/2310.01818#S1.F0.sf3 "0(c) ‣ Figure 1 ‣ 1 Introduction ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework") shows the framework of vanilla RFT that learns both adversarial and natural data via the FE while our proposed AutoLoRa learns adversarial and natural data via the FE and the LoRa branch, respectively.
The divergent optimization directions make the optimization process of RFT unstable, thus impeding obtaining robustness in downstream tasks and making RFT sensitive to hyperparameters. Compared to TWINS, vanilla RFT has a lower gradient similarity (in Figure[0(a)](https://arxiv.org/html/2310.01818#S1.F0.sf1 "0(a) ‣ Figure 1 ‣ 1 Introduction ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework")) while gaining a lower robust test accuracy (in Figure[0(b)](https://arxiv.org/html/2310.01818#S1.F0.sf2 "0(b) ‣ Figure 1 ‣ 1 Introduction ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework")). It indicates that the issue of divergent optimization direction could prevent gaining adversarial robustness in downstream tasks. TWINS tackles this issue to some extent via the dual BN module while gaining only slightly improved robustness. Thus, we conjecture that mitigating the aforementioned issue can further enhance adversarial robustness.
To this end, we propose to disentangle RFT via a low-rank (LoRa) branch(Hu et al., [2021](https://arxiv.org/html/2310.01818#bib.bib16)) (details in Section[4.1](https://arxiv.org/html/2310.01818#S4.SS1 "4.1 Disentangling RFT via a Low-Rank Branch ‣ 4 AutoLoRa: Automated RFT Disentangled via a Low-Rank Branch ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework")). As shown in the right panel of Figure[0(c)](https://arxiv.org/html/2310.01818#S1.F0.sf3 "0(c) ‣ Figure 1 ‣ 1 Introduction ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework"), we disentangle the RFT by forwarding adversarial and natural data through the FE (blue module) and the LoRa branch (yellow module), respectively. In this way, the FE parameters are updated only by the adversarial objective to learn the features of adversarial data, which exactly solves the aforementioned issue of the divergent optimization direction. Besides, the FE also learns the knowledge of the natural objective via minimizing the Kullback-Leibler (KL) loss between adversarial logits and natural soft labels provided by the LoRa branch to avoid degrading generalization. Therefore, benefiting from the parameter-efficient LoRa branch, RFT disentangled via a LoRa branch can be a low-cost strategy to further improve adversarial robustness while maintaining generalization.
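To make the disentangled objectives concrete, the following minimal numpy sketch illustrates the routing: the FE's update signal combines the adversarial objective with a KL term towards the natural soft labels supplied by the LoRa branch, while the LoRa branch is updated only by the natural objective. Randomly generated logits stand in for the two branches' outputs; all names, shapes, and the exact KL argument order are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes = 5

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q), averaged over the batch."""
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def cross_entropy(logits, y, eps=1e-12):
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(y)), y] + eps))

# Stand-ins for the two branches' logits on a batch of 8 examples.
adv_logits = rng.normal(size=(8, n_classes))  # FE path, fed adversarial data
nat_logits = rng.normal(size=(8, n_classes))  # LoRa path, fed natural data
y = rng.integers(0, n_classes, size=8)

# FE update signal: adversarial objective plus KL towards the LoRa branch's
# natural soft labels (treated as fixed targets here).
nat_soft_labels = softmax(nat_logits)
fe_loss = cross_entropy(adv_logits, y) + kl_div(nat_soft_labels, softmax(adv_logits))

# LoRa update signal: the natural objective only.
lora_loss = cross_entropy(nat_logits, y)
```

Because each branch receives exactly one objective, the gradient-conflict issue described above cannot arise inside the FE.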
Moreover, we propose heuristic strategies for automatically scheduling the learning rate (LR) as well as the scalars $\lambda_1$ and $\lambda_2$ (details in Section[4.2](https://arxiv.org/html/2310.01818#S4.SS2 "4.2 Automating Scheduling Hyperparameters ‣ 4 AutoLoRa: Automated RFT Disentangled via a Low-Rank Branch ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework")). Lin et al. ([2019](https://arxiv.org/html/2310.01818#bib.bib21)) analogized the generation process of adversarial data to the training process of a neural model. This motivates us to employ the automatic step size scheduler of AutoAttack(Croce & Hein, [2020](https://arxiv.org/html/2310.01818#bib.bib8)), which is proven to enhance the convergence of adversarial attacks, for scheduling the LR, thus aiding the convergence of RFT. Zhu et al. ([2021](https://arxiv.org/html/2310.01818#bib.bib39)) showed that high-quality soft labels are beneficial for improving robustness. This inspires us to make $\lambda_1$ and $\lambda_2$ negatively and positively proportional to the standard accuracy on natural training data, respectively. $\lambda_1$ encourages the LoRa branch to quickly learn natural data and then output high-quality natural soft labels; $\lambda_2$ controls the LoRa branch's confidence in its soft labels, which are learned by the FE. In this way, our automated scheduler of the scalars helps improve robustness.
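The two heuristics can be sketched as follows. The exact functional forms are not given in this section, so the simple linear rules for $\lambda_1$, $\lambda_2$ and the fixed halving checkpoints for the LR are illustrative assumptions:

```python
def schedule_scalars(nat_train_acc):
    """Assumed linear rules: lambda_1 falls and lambda_2 rises with the
    standard accuracy on natural training data (in [0, 1])."""
    lam1 = 1.0 - nat_train_acc  # push the LoRa branch to fit natural data early
    lam2 = nat_train_acc        # trust the soft labels more as accuracy rises
    return lam1, lam2

def schedule_lr(base_lr, epoch, total_epochs, checkpoints=(0.5, 0.75, 0.9)):
    """AutoAttack-style step-size rule: halve the LR once training passes
    each fractional checkpoint (checkpoint fractions are assumptions)."""
    lr = base_lr
    for frac in checkpoints:
        if epoch >= frac * total_epochs:
            lr *= 0.5
    return lr

lam1, lam2 = schedule_scalars(0.8)          # late training: lam1 small, lam2 large
lr = schedule_lr(0.01, epoch=80, total_epochs=100)  # past two checkpoints
```

Both rules are parameter-free in the sense that they are driven entirely by quantities already available during training (epoch count and training accuracy).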
Our comprehensive experimental results validate that our proposed automated robust fine-tuning disentangled via a LoRa branch (AutoLoRa) is effective in improving adversarial robustness across various downstream tasks. We conducted experiments using different robust pre-trained models (ResNet-18 and ResNet-50 adversarially pre-trained on ImageNet-1K(Salman et al., [2020](https://arxiv.org/html/2310.01818#bib.bib26))) and both low-resolution(Krizhevsky, [2009](https://arxiv.org/html/2310.01818#bib.bib19)) and high-resolution(Cimpoi et al., [2014](https://arxiv.org/html/2310.01818#bib.bib7); Khosla et al., [2011](https://arxiv.org/html/2310.01818#bib.bib18); Wah et al., [2011](https://arxiv.org/html/2310.01818#bib.bib28); Griffin et al., [2007](https://arxiv.org/html/2310.01818#bib.bib12)) downstream tasks. Empirical results validate that, compared to TWINS(Liu et al., [2023](https://arxiv.org/html/2310.01818#bib.bib24)), our proposed AutoLoRa consistently yields new state-of-the-art adversarial robustness without tuning hyperparameters.
2 Related Work
--------------
Here, we review related work on fine-tuning and robust fine-tuning.
#### Fine-tuning.
With recent advances in self-supervised pre-training(Chen et al., [2020a](https://arxiv.org/html/2310.01818#bib.bib5); [b](https://arxiv.org/html/2310.01818#bib.bib6)), fine-tuning foundation models pre-trained via self-supervision on large-scale unlabelled datasets can efficiently achieve powerful performance in downstream tasks. As the number of parameters of pre-trained FEs grows significantly, parameter-efficient fine-tuning (PEFT) strategies that decrease the number of trainable parameters during fine-tuning become necessary. One popular strategy is to introduce an adapter layer during fine-tuning(Houlsby et al., [2019](https://arxiv.org/html/2310.01818#bib.bib15); Lin et al., [2020](https://arxiv.org/html/2310.01818#bib.bib22)); however, it brings extra inference latency. Recent studies propose to freeze the pre-trained FE parameters and inject trainable decomposition matrices that are low-rank and thus parameter-efficient(Hu et al., [2021](https://arxiv.org/html/2310.01818#bib.bib16); Chavan et al., [2023](https://arxiv.org/html/2310.01818#bib.bib4)). Notably, fine-tuning only the low-rank matrices of GPT-3(Brown et al., [2020](https://arxiv.org/html/2310.01818#bib.bib2)) can achieve SOTA performance on GLUE benchmarks(Wang et al., [2018](https://arxiv.org/html/2310.01818#bib.bib29)) without incurring inference latency(Hu et al., [2021](https://arxiv.org/html/2310.01818#bib.bib16); Chavan et al., [2023](https://arxiv.org/html/2310.01818#bib.bib4)).
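The low-rank injection idea can be sketched in a few lines of numpy. The shapes and rank below are illustrative assumptions, not the configuration used in any cited work: a frozen pre-trained weight $W_0$ is augmented by a trainable low-rank product $BA$, so only $r(d + v)$ parameters are trained instead of $dv$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, v, r = 64, 32, 4           # input dim, output dim, rank (r << min(d, v))

W0 = rng.normal(size=(d, v))  # frozen pre-trained weights
A = rng.normal(size=(r, v))   # trainable, randomly initialized
B = np.zeros((d, r))          # trainable, zero-initialized

def lora_forward(x):
    """Forward pass through the adapted layer: x @ (W0 + B @ A),
    computed without materializing the merged weight."""
    return x @ W0 + (x @ B) @ A

x = rng.normal(size=(8, d))
# With B = 0, the adapted layer initially matches the pre-trained layer.
assert np.allclose(lora_forward(x), x @ W0)
```

After fine-tuning, $B A$ can be merged into $W_0$, which is why this family of methods adds no inference latency.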
#### Robust fine-tuning (RFT).
RFT(Shafahi et al., [2019](https://arxiv.org/html/2310.01818#bib.bib27); Hendrycks et al., [2019](https://arxiv.org/html/2310.01818#bib.bib14)) is a low-cost strategy to obtain adversarially robust models in downstream tasks by fine-tuning the pre-trained FEs on adversarial training data(Madry et al., [2018](https://arxiv.org/html/2310.01818#bib.bib25)). To further improve generalization in downstream tasks, recent studies propose to learn the features of natural and adversarial data together (i.e., vanilla RFT)(Zhang et al., [2019](https://arxiv.org/html/2310.01818#bib.bib38); Shafahi et al., [2019](https://arxiv.org/html/2310.01818#bib.bib27)). Note that vanilla RFT has been widely applied to fine-tuning adversarially self-supervised pre-trained models(Jiang et al., [2020](https://arxiv.org/html/2310.01818#bib.bib17); Fan et al., [2021](https://arxiv.org/html/2310.01818#bib.bib10); Zhang et al., [2022](https://arxiv.org/html/2310.01818#bib.bib37); Yu et al., [2022](https://arxiv.org/html/2310.01818#bib.bib35); Xu et al., [2023a](https://arxiv.org/html/2310.01818#bib.bib33); [b](https://arxiv.org/html/2310.01818#bib.bib34)) and achieved powerful robustness in downstream tasks. Furthermore, various strategies about how to utilize extra information from the pre-trained FEs for improving performance have been proposed. Liu et al. ([2022](https://arxiv.org/html/2310.01818#bib.bib23)) proposed to jointly fine-tune on extra training data selected from the pre-training datasets and the whole downstream datasets to further improve RFT. TWINS(Liu et al., [2023](https://arxiv.org/html/2310.01818#bib.bib24)) is the existing SOTA RFT method that incorporates vanilla RFT with a dual BN framework(Xie et al., [2020](https://arxiv.org/html/2310.01818#bib.bib32)). TWINS uses the dual BN framework to take advantage of the pre-trained statistics in a frozen BN branch, which is the extra useful information of the pre-trained FEs, thus resulting in superior performance.
3 A Closer Look at Vanilla RFT and TWINS
----------------------------------------
In this section, we first introduce preliminaries of vanilla RFT(Zhang et al., [2019](https://arxiv.org/html/2310.01818#bib.bib38); Jiang et al., [2020](https://arxiv.org/html/2310.01818#bib.bib17)) and TWINS(Liu et al., [2023](https://arxiv.org/html/2310.01818#bib.bib24)). Then, we empirically disclose the issues of vanilla RFT and TWINS.
### 3.1 Preliminaries
Let $(\mathcal{X}, d_\infty)$ be the input space $\mathcal{X}$ with the infinity distance metric $d_\infty(x, x') = \|x - x'\|_\infty$, and $\mathcal{B}_\epsilon[x] = \{x' \in \mathcal{X} \mid d_\infty(x, x') \leq \epsilon\}$ be the closed ball of radius $\epsilon > 0$ centered at $x \in \mathcal{X}$. $\epsilon$ is also denoted as the adversarial budget. Let $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n}$ be a downstream dataset, where $x_i \in \mathbb{R}^d$, $d$ is the dimensionality of the input data, and $y_i \in \mathcal{Y} = \{0, 1, \dots, C-1\}$ is the ground-truth label. Let $f_{\theta_1}: \mathbb{R}^d \rightarrow \mathbb{R}^v$ be a pre-trained feature extractor parameterized by $\theta_1 \in \mathbb{R}^{d \times v}$, where $v$ is the dimensionality of the hidden features, and $g_{\theta_2}: \mathbb{R}^v \rightarrow \mathbb{R}^z$ be a randomly-initialized linear classifier parameterized by $\theta_2 \in \mathbb{R}^{v \times z}$, where $z = |\mathcal{Y}|$ is the dimensionality of the predicted logits. For notational simplicity, we denote $h_\theta(\cdot) = g_{\theta_2} \circ f_{\theta_1}(\cdot)$, where $\theta = \{\theta_1, \theta_2\}$.
#### Vanilla RFT(Zhang et al., [2019](https://arxiv.org/html/2310.01818#bib.bib38)).
The training loss of vanilla RFT is formulated as follows:
$$\mathcal{L}_{\mathrm{vanilla}}(\mathcal{D};\theta,\beta)=\frac{1}{n}\sum_{(x,y)\in\mathcal{D}}\bigg\{\underbrace{\ell_{\mathrm{CE}}(h_{\theta}(x),y)}_{\textit{natural objective}}+\underbrace{\beta\cdot\ell_{\mathrm{KL}}(h_{\theta}(\tilde{x}),h_{\theta}(x))}_{\textit{adversarial objective}}\bigg\},\tag{1}$$
where $\beta > 0$ is a scalar of the adversarial objective, $\ell_{\mathrm{CE}}(\cdot,\cdot)$ is the Cross-Entropy (CE) loss function, $\ell_{\mathrm{KL}}(\cdot,\cdot)$ is the Kullback–Leibler (KL) loss function, and $\tilde{x}$ is the adversarial data generated via projected gradient descent (PGD)(Madry et al., [2018](https://arxiv.org/html/2310.01818#bib.bib25)). Natural and adversarial objectives are used to learn the features of natural and adversarial data, respectively. In our paper, following Liu et al. ([2023](https://arxiv.org/html/2310.01818#bib.bib24)), the adversarial data $\tilde{x}$ is generated by maximizing the CE loss using PGD, i.e., $\tilde{x} = \operatorname*{arg\,max}_{\tilde{x}\in\mathcal{B}_{\epsilon}[x]}\ell_{\mathrm{CE}}(h_{\theta}(\tilde{x}),y)$.
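As an illustration, an $\ell_\infty$ PGD attack can be sketched on a toy linear-softmax classifier, where the input gradient of the CE loss has a closed form. The model, budget, and step count below are illustrative assumptions rather than the paper's setup, which attacks a deep FE via automatic differentiation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, C = 16, 4
W = rng.normal(size=(d, C))   # toy classifier: logits = x @ W

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pgd_attack(x, y, eps=0.03, step=0.0075, n_steps=10):
    """Sign-gradient ascent on the CE loss, projected into B_eps[x]."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start in the ball
    for _ in range(n_steps):
        p = softmax(x_adv @ W)
        onehot = np.eye(C)[y]
        grad_x = (p - onehot) @ W.T        # closed-form d(CE)/dx for logits = x @ W
        x_adv = x_adv + step * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

x = rng.normal(size=(8, d))
y = rng.integers(0, C, size=8)
x_adv = pgd_attack(x, y)
```

The projection step enforces the adversarial-budget constraint $d_\infty(x, \tilde{x}) \leq \epsilon$ from the preliminaries.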
#### TWINS(Liu et al., [2023](https://arxiv.org/html/2310.01818#bib.bib24)).
TWINS proposed to combine vanilla RFT with a dual BN(Xie et al., [2020](https://arxiv.org/html/2310.01818#bib.bib32)) framework to take advantage of pre-trained statistics, whose loss function is shown below:
$$\mathcal{L}_{\mathrm{TWINS}}(\mathcal{D};\theta,\beta,\gamma)=\mathcal{L}_{\mathrm{vanilla}}(\mathcal{D};\bar{\theta},\beta)+\gamma\cdot\mathcal{L}_{\mathrm{vanilla}}(\mathcal{D};\theta,\beta),\tag{2}$$
where $\beta > 0$ and $\gamma > 0$ are hyperparameters. When conducting TWINS, all the parameters of $\theta$ are adaptively updated; the BN statistics of $\bar{\theta}$ are frozen as the pre-trained statistics, and the other parameters of $\bar{\theta}$ (except the BN statistics) are copied from $\theta$. Note that the adversarial data $\tilde{x}$ is generated according to the parameters $\theta$ via PGD.
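The dual-BN mechanism can be illustrated with a one-dimensional numpy sketch (the statistics, shapes, and omission of BN's affine parameters are illustrative simplifications): the $\bar{\theta}$ branch normalizes with frozen pre-trained statistics, while the $\theta$ branch uses the current batch statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen running statistics, standing in for the pre-trained BN state.
pre_mean, pre_var = 0.5, 2.0

def bn_frozen(x, eps=1e-5):
    """theta_bar branch: normalize with pre-trained statistics."""
    return (x - pre_mean) / np.sqrt(pre_var + eps)

def bn_adaptive(x, eps=1e-5):
    """theta branch: normalize with the current batch statistics."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

# A downstream batch whose distribution differs from pre-training.
x = rng.normal(loc=3.0, scale=1.5, size=256)
out_frozen = bn_frozen(x)
out_adapt = bn_adaptive(x)
# The two branches generally disagree on downstream data; this disagreement
# is the extra pre-trained information the gamma-weighted TWINS loss exploits.
```

When the downstream batch statistics drift far from the pre-trained ones, the frozen branch's output is no longer standardized, which is what makes it a distinct training signal.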
### 3.2 Issues of Vanilla RFT and TWINS
We empirically discover that vanilla RFT and TWINS share a common issue: the optimization directions of minimizing the adversarial and natural objectives are significantly divergent. This issue can make the optimization unstable, thus impeding the attainment of robustness and making RFT sensitive to hyperparameters. To validate this issue, we calculate the gradient similarity (GS) as the cosine similarity between the gradient of the natural objective and that of the adversarial objective w.r.t. the FE parameters. To be specific, given a data point $(x, y) \in \mathcal{D}$, the GS of vanilla RFT and TWINS w.r.t. the FE (i.e., $\theta_1$) is calculated as follows:
$$\mathrm{GS}_{\mathrm{vanilla}}(x,y;\theta,\beta)=\mathrm{sim}\big(\nabla_{\theta_{1}}\ell_{\mathrm{CE}}(h_{\theta}(x),y),\ \nabla_{\theta_{1}}\ell_{\mathrm{KL}}(h_{\theta}(\tilde{x}),h_{\theta}(x))\big);\tag{3}$$

$$\begin{aligned}\mathrm{GS}_{\mathrm{TWINS}}(x,y;\theta,\beta,\gamma)=\mathrm{sim}\Big(&\nabla_{\theta_{1}}\big(\ell_{\mathrm{CE}}(h_{\bar{\theta}}(x),y)+\gamma\cdot\ell_{\mathrm{CE}}(h_{\theta}(x),y)\big),\\ &\nabla_{\theta_{1}}\big(\ell_{\mathrm{KL}}(h_{\bar{\theta}}(\tilde{x}),h_{\bar{\theta}}(x))+\gamma\cdot\ell_{\mathrm{KL}}(h_{\theta}(\tilde{x}),h_{\theta}(x))\big)\Big),\end{aligned}\tag{4}$$
|
| 88 |
+
|
| 89 |
+
where $\mathrm{sim}(\cdot,\cdot)$ is the cosine similarity function. The smaller the GS is, the more divergent the gradient directions of optimizing the natural and adversarial objectives are. We report the average GS over all training data of vanilla RFT (blue lines) and TWINS (green lines) on DTD-57 (Cimpoi et al., [2014](https://arxiv.org/html/2310.01818#bib.bib7)) in Figure [0(a)](https://arxiv.org/html/2310.01818#S1.F0.sf1) as well as on extensive datasets (Cimpoi et al., [2014](https://arxiv.org/html/2310.01818#bib.bib7); Wah et al., [2011](https://arxiv.org/html/2310.01818#bib.bib28)) in Figure [1(a)](https://arxiv.org/html/2310.01818#A1.F1.sf1) (in Appendix [A.2](https://arxiv.org/html/2310.01818#A1.SS2)).
As observed in Figures [0(a)](https://arxiv.org/html/2310.01818#S1.F0.sf1) and [1(a)](https://arxiv.org/html/2310.01818#A1.F1.sf1), the GS achieved by both vanilla RFT and TWINS is quite low, which indicates that optimizing both the natural and adversarial objectives via the FE can make the optimization directions orthogonal or even conflicting, thus leading to optimization oscillation. As shown in Figures [0(b)](https://arxiv.org/html/2310.01818#S1.F0.sf2) and [1(b)](https://arxiv.org/html/2310.01818#A1.F1.sf2), compared to TWINS, vanilla RFT yields both a lower GS and worse adversarial robustness in downstream tasks. This indicates that divergent optimization directions lead to lower robust test accuracy, which impedes obtaining adversarial robustness.
Besides, the issue of unstable optimization makes vanilla RFT and TWINS sensitive to hyperparameters such as the learning rate and the scalars (e.g., $\beta$ and $\gamma$). To achieve superior performance in downstream tasks, the authors of TWINS (Liu et al., [2023](https://arxiv.org/html/2310.01818#bib.bib24)) conducted a grid search to find appropriate hyperparameters for each downstream task, which is extremely time-consuming and inconvenient for practical usage.
4 AutoLoRa: Automated RFT Disentangled via a Low-Rank Branch
------------------------------------------------------------
To mitigate the aforementioned issue, we propose to disentangle RFT via a low-rank branch and then introduce heuristic strategies for automatically scheduling hyperparameters.
### 4.1 Disentangling RFT via a Low-Rank Branch
To resolve the issue caused by optimizing both adversarial and natural objectives via the FE, we propose to leverage an auxiliary branch to disentangle the optimization procedure of natural and adversarial objectives. Inspired by PEFT (Hu et al., [2021](https://arxiv.org/html/2310.01818#bib.bib16); Chavan et al., [2023](https://arxiv.org/html/2310.01818#bib.bib4)), we introduce a low-rank (LoRa) branch as an auxiliary parameter-efficient branch composed of two rank-decomposition matrices $\bm{B}\in\mathbb{R}^{d\times r_{\mathrm{nat}}}$ and $\bm{A}\in\mathbb{R}^{r_{\mathrm{nat}}\times v}$, where $r_{\mathrm{nat}}\in\mathbb{N}$ is the rank, and $d$ and $v$ are the dimensionality of the input data and hidden features, respectively. Therefore, $\bm{B}\bm{A}\in\mathbb{R}^{d\times v}$ has the same size as the parameters of the FE (i.e., $\theta_{1}\in\mathbb{R}^{d\times v}$).
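As a concrete illustration of the shapes involved, the low-rank branch can be sketched in a few lines. The class name `LoRaBranch` and the initialization choices shown (zeros for $\bm{B}$, Gaussian for $\bm{A}$, matching Algorithm 1) are illustrative, and NumPy stands in for a deep-learning framework:

```python
import numpy as np

class LoRaBranch:
    """Illustrative low-rank branch: B @ A has the same shape as the frozen
    feature-extractor weight theta_1 (d x v), but only d*r + r*v parameters
    are trainable, with r = r_nat << min(d, v)."""

    def __init__(self, d, v, r_nat, sigma=0.02, seed=0):
        rng = np.random.default_rng(seed)
        self.B = np.zeros((d, r_nat))                 # B initialized to 0
        self.A = rng.normal(0.0, sigma, (r_nat, v))   # A drawn from N(0, sigma)

    def delta(self):
        # B @ A in R^{d x v}: the additive term applied to the frozen theta_1
        return self.B @ self.A

    def num_params(self):
        return self.B.size + self.A.size

d, v, r = 512, 512, 8
branch = LoRaBranch(d, v, r)
# Because B starts at zero, B @ A = 0, so at initialization the natural
# branch h_{theta_1 + BA} coincides with the frozen extractor.
assert np.allclose(branch.delta(), 0.0)
# Trainable parameters stay below 5% of |theta_1| = d * v when r_nat <= 8.
assert branch.num_params() < 0.05 * d * v
```

The zero initialization of $\bm{B}$ is the standard LoRA choice: it guarantees the branch starts as an identity perturbation, so fine-tuning begins exactly from the pre-trained model.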
To disentangle RFT, we propose to optimize the natural and adversarial objectives via the LoRa branch and the FE, respectively. Thus, we formulate the loss function of RFT disentangled via the LoRa branch as follows:
$$
\mathcal{L}_{\mathrm{LoRa}}(\mathcal{D};\theta,\bm{A},\bm{B},\lambda_{1},\lambda_{2})=\frac{1}{n}\sum_{(x,y)\in\mathcal{D}}\bigg\{\underbrace{\lambda_{1}\cdot\ell_{\mathrm{CE}}(h_{\{\bar{\theta}_{1}+\bm{B}\bm{A},\theta_{2}\}}(x),y)}_{\textit{natural objective}}+\underbrace{(1-\lambda_{1})\cdot\ell_{\mathrm{CE}}(h_{\theta}(\tilde{x}),y)+\lambda_{2}\cdot\ell_{\mathrm{KL}}(h_{\theta}(\tilde{x}),h_{\{\bar{\theta}_{1}+\bm{B}\bm{A},\theta_{2}\}}(x))}_{\textit{adversarial objective}}\bigg\}, \tag{5}
$$
where $\lambda_{1}\geq 0$ and $\lambda_{2}\geq 0$ are the scalars, $\theta=\{\theta_{1},\theta_{2}\}$ denotes all the trainable parameters composed of the FE parameters $\theta_{1}$ and the classifier parameters $\theta_{2}$, and $\bar{\theta}_{1}$ denotes that the parameters $\theta_{1}$ do not require gradients.
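Assuming the logits of both branches are already computed, the scalar value of Eq. (5) can be sketched as follows. The names `ce`, `kl`, and `lora_rft_loss` are illustrative, and the gradient routing (freezing $\bar{\theta}_{1}$ in the natural branch) would be handled by the training framework rather than shown here:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ce(logits, y):
    # mean cross-entropy against integer labels
    p = softmax(logits)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def kl(logits_p, logits_q):
    # mean KL divergence between the two softmax distributions
    p, q = softmax(logits_p), softmax(logits_q)
    return (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1).mean()

def lora_rft_loss(nat_logits_lora, adv_logits_fe, y, lam1, lam2):
    """Eq. (5): natural CE on the LoRa branch's logits h_{theta_1-bar + BA}(x),
    adversarial CE on the FE's logits h_theta(x-tilde), plus a KL term that
    distills the LoRa branch's natural soft labels into the FE."""
    natural = lam1 * ce(nat_logits_lora, y)
    adversarial = ((1 - lam1) * ce(adv_logits_fe, y)
                   + lam2 * kl(adv_logits_fe, nat_logits_lora))
    return natural + adversarial

y = np.array([0, 1])
nat = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])   # LoRa-branch logits on x
adv = np.array([[0.5, 0.2, 0.1], [0.1, 0.6, 0.3]])   # FE logits on x-tilde
loss = lora_rft_loss(nat, adv, y, lam1=0.5, lam2=3.0)
assert loss > 0
# With lam1 = 1 and lam2 = 0 only the natural objective remains.
assert abs(lora_rft_loss(nat, adv, y, 1.0, 0.0) - ce(nat, y)) < 1e-12
```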
According to Eq. [(5)](https://arxiv.org/html/2310.01818#S4.Ex2), RFT disentangled via the LoRa branch separates the optimization procedures of the adversarial and natural objectives. Disentangled RFT updates the FE parameters $\theta_{1}$ only by minimizing the adversarial objective, which aims to learn the features of adversarial data, whereas the gradient incurred by the natural objective affects the LoRa branch instead of the FE. Therefore, the auxiliary LoRa branch resolves the issue of divergent optimization directions and is thus able to improve adversarial robustness.
Besides, the FE also indirectly learns the knowledge of the natural objective by distilling knowledge from the LoRa branch, which prevents RFT from degrading standard generalization. We can regard the LoRa branch as a teacher model that learns features of natural data and provides high-quality natural soft labels for the student model (i.e., the FE). Thus, the KL loss term in Eq. [(5)](https://arxiv.org/html/2310.01818#S4.Ex2), which penalizes the KL divergence between the adversarial logits and the natural soft labels generated by the LoRa branch, can be regarded as a knowledge distillation loss. In this way, the FE implicitly learns the knowledge of the natural objective from the LoRa branch, thus maintaining standard generalization.
Note that our proposed disentangled RFT via a LoRa branch is still low-cost thanks to the parameter-efficient LoRa branch. We empirically verify that the LoRa branch only introduces a small amount of extra trainable parameters, less than 5% of the FE parameters (i.e., $|\bm{A}|+|\bm{B}|<0.05\cdot|\theta_{1}|$ when $r_{\mathrm{nat}}\leq 8$, validated in Table [3](https://arxiv.org/html/2310.01818#S5.T3)). Besides, the auxiliary LoRa branch does not incur extra inference latency, since we drop the LoRa branch and only use the parameters $\theta$ for predicting test data during inference. Therefore, disentangled RFT is an efficient and effective strategy to improve adversarial robustness while maintaining generalization in downstream tasks.
### 4.2 Automating Scheduling Hyperparameters
In this subsection, we introduce heuristic strategies for automatically scheduling the hyperparameters, including the learning rate and the scalars $\lambda_{1}$ and $\lambda_{2}$. We present the algorithm of our proposed automated RFT disentangled via a LoRa branch (AutoLoRa) in Algorithm [1](https://arxiv.org/html/2310.01818#alg1).
#### Automated scheduler of the learning rate (LR) $\eta$.
Lin et al. ([2019](https://arxiv.org/html/2310.01818#bib.bib21)) analogized the adversarial example generation process to the neural model training process. Accordingly, recent studies (Wang & He, [2021](https://arxiv.org/html/2310.01818#bib.bib31); Yuan et al., [2022](https://arxiv.org/html/2310.01818#bib.bib36)) have adopted a similar strategy to control the step size in the adversarial example generation process, inspired by the scheduler of the LR in the neural model training process, in order to improve the convergence of adversarial attacks. Note that the step size and the LR adjust the change rate of the adversarial example and the model parameters, respectively. Conversely, we conjecture that the strategy for adjusting the step size can also be adopted for scheduling the LR. AutoAttack (Croce & Hein, [2020](https://arxiv.org/html/2310.01818#bib.bib8)) proposed an automated scheduler of the step size guided by the classification loss, which has been validated as effective in improving the convergence of adversarial attacks. Therefore, we use a similar strategy to automatically adjust the LR.
Here, we introduce our proposed dynamic scheduler of the LR $\eta$ based on the robust validation accuracy during RFT, inspired by AutoAttack (Croce & Hein, [2020](https://arxiv.org/html/2310.01818#bib.bib8)). We start with the LR $\eta^{(0)}=0.01$ at Epoch 0 and identify whether it is necessary to halve the current LR at checkpoints at Epochs $c_{0},c_{1},\dots,c_{n}$. Given a validation set $\mathcal{D}_{\mathrm{val}}=\{(x_{i},y_{i})\}_{i=1}^{n_{\mathrm{val}}}$ of $n_{\mathrm{val}}$ data points and a maximum training epoch $E\in\mathbb{N}$, we set the following two conditions:
1. $\sum^{c_{j}-1}_{e=c_{j-1}}\mathbbm{1}[\mathrm{RA}(\mathcal{D}_{\mathrm{val}};\theta^{(e+1)})<\mathrm{RA}(\mathcal{D}_{\mathrm{val}};\theta^{(e)})]\leq 0.75\cdot(c_{j}-c_{j-1})$;
2. $\eta^{(c_{j-1})}\equiv\eta^{(c_{j})}$ and $\mathrm{RA}(\mathcal{D}_{\mathrm{val}};\theta^{(c_{j-1})})_{\max}\equiv\mathrm{RA}(\mathcal{D}_{\mathrm{val}};\theta^{(c_{j})})_{\max}$,
where $\mathbbm{1}[\cdot]$ is an indicator function, $\theta^{(e)}$ refers to the parameters at Epoch $e\in\{0,1,\dots,E-1\}$, $\mathcal{D}_{\mathrm{val}}$ denotes a validation set, $\mathrm{RA}(\mathcal{D}_{\mathrm{val}};\theta^{(e)})$ refers to the robust accuracy (RA) evaluated on the adversarial validation data using the parameters $\theta^{(e)}$, and $\mathrm{RA}(\mathcal{D}_{\mathrm{val}};\theta^{(c_{j})})_{\max}$ denotes the highest robust validation accuracy found in the first $c_{j}$ epochs.
If one of the conditions is triggered, the LR at Epoch $e=c_{j}$ is halved, i.e., $\eta^{(e)}=\eta^{(c_{j})}/2$ for every $e\in\{c_{j}+1,\dots,c_{j+1}\}$. If the LR gets halved at the checkpoint at Epoch $c_{j}$, we take the parameters of the checkpoint that achieves the best robust validation accuracy (i.e., $\mathrm{RA}(\mathcal{D}_{\mathrm{val}};\theta^{(c_{j})})_{\max}$) as the initialization for the next epoch.
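A minimal sketch of the two checkpoint conditions, assuming the per-epoch robust validation accuracies between two checkpoints are available as a list (function and argument names are illustrative):

```python
def should_halve_lr(ra_window, eta_prev_ckpt, eta_curr_ckpt,
                    best_ra_prev_ckpt, best_ra_curr_ckpt):
    """Sketch of the two checkpoint conditions (names are illustrative).
    ra_window holds RA(D_val; theta^(e)) for e = c_{j-1}, ..., c_j."""
    n_epochs = len(ra_window) - 1  # window length c_j - c_{j-1}
    # Condition 1: the number of epochs on which robust accuracy dropped is
    # at most 75% of the window length.
    drops = sum(ra_window[e + 1] < ra_window[e] for e in range(n_epochs))
    cond1 = drops <= 0.75 * n_epochs
    # Condition 2: the LR was unchanged since the previous checkpoint AND
    # the best robust accuracy has not improved since then.
    cond2 = (eta_prev_ckpt == eta_curr_ckpt
             and best_ra_prev_ckpt == best_ra_curr_ckpt)
    return cond1 or cond2

# RA drops on only 1 of 3 epochs, so Condition 1 fires and eta is halved.
assert should_halve_lr([0.50, 0.60, 0.55, 0.65], 0.01, 0.01, 0.60, 0.65)
# RA drops every epoch and the LR already changed: neither condition fires.
assert not should_halve_lr([0.60, 0.50, 0.40, 0.30], 0.01, 0.005, 0.60, 0.65)
```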
Algorithm 1 Automated RFT disentangled via a LoRa branch (AutoLoRa)

1: Input: training set $D$, pre-trained feature extractor $\theta_{1}$, maximum training epoch $E$
2: Output: adversarially robust model $\theta$
3: Initialize classifier parameters $\theta_{2}$, $\theta=\{\theta_{1},\theta_{2}\}$, and LoRa branch $\bm{A}=\mathcal{N}(\bm{0},\bm{\sigma})$ and $\bm{B}=\bm{0}$
4: Initialize learning rate $\eta=0.01$, epoch $e=0$, batch size $\tau=128$, training flag $FLAG=\mathrm{True}$
5: while $FLAG$ do
6: &nbsp;&nbsp;Update scalars $\lambda_{1}$ and $\lambda_{2}$ according to Eq. (6) and Eq. (7), respectively
7: &nbsp;&nbsp;for batch $m=1,\dots,\lceil|D|/\tau\rceil$ do
8: &nbsp;&nbsp;&nbsp;&nbsp;Sample a minibatch $S_{m}$ from $D$
9: &nbsp;&nbsp;&nbsp;&nbsp;Calculate training loss $\mathcal{L}^{*}=\mathcal{L}_{\mathrm{LoRa}}(S_{m};\theta,\bm{A},\bm{B},\lambda_{1},\lambda_{2})$
10: &nbsp;&nbsp;&nbsp;&nbsp;Update parameters $\theta\leftarrow\theta-\eta\cdot\nabla_{\theta}\mathcal{L}^{*}$, $\bm{A}\leftarrow\bm{A}-\eta\cdot\nabla_{\bm{A}}\mathcal{L}^{*}$, $\bm{B}\leftarrow\bm{B}-\eta\cdot\nabla_{\bm{B}}\mathcal{L}^{*}$
11: &nbsp;&nbsp;end for
12: &nbsp;&nbsp;if Condition 1 or Condition 2 then
13: &nbsp;&nbsp;&nbsp;&nbsp;$\eta\leftarrow\eta/2$
14: &nbsp;&nbsp;end if
15: &nbsp;&nbsp;if $\eta<10^{-5}$ or $e\equiv E-1$ then
16: &nbsp;&nbsp;&nbsp;&nbsp;$FLAG\leftarrow\mathrm{False}$
17: &nbsp;&nbsp;else
18: &nbsp;&nbsp;&nbsp;&nbsp;$e\leftarrow e+1$
19: &nbsp;&nbsp;end if
20: end while
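The control flow of Algorithm 1 can be sketched as follows, with the minibatch update, scalar scheduler, and validation step stubbed out as callables; the halving test is deliberately simplified to "best robust accuracy did not improve", standing in for Conditions 1 and 2:

```python
def autolora_train(num_batches, max_epochs, lam_fn, eval_ra, step_fn,
                   eta0=0.01):
    """Control-flow skeleton of Algorithm 1. lam_fn stubs the scalar
    scheduler (Eqs. 6-7), eval_ra the robust validation accuracy, and
    step_fn one minibatch update; the checkpoint test is simplified to
    'best robust accuracy did not improve'."""
    eta, e, flag, best_ra = eta0, 0, True, 0.0
    while flag:
        lam1, lam2 = lam_fn()                  # line 6: update scalars
        for _ in range(num_batches):           # lines 7-11: minibatch updates
            step_fn(eta, lam1, lam2)
        ra = eval_ra()
        if ra <= best_ra:                      # lines 12-14, simplified
            eta /= 2
        best_ra = max(best_ra, ra)
        if eta < 1e-5 or e == max_epochs - 1:  # lines 15-19: stop or advance
            flag = False
        else:
            e += 1
    return eta, e

# Toy run: robust accuracy stalls at epoch 1, so the LR is halved once.
ras = iter([0.3, 0.3, 0.4])
eta, e = autolora_train(num_batches=2, max_epochs=3,
                        lam_fn=lambda: (0.5, 3.0),
                        eval_ra=lambda: next(ras),
                        step_fn=lambda eta, l1, l2: None)
assert (eta, e) == (0.005, 2)
```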
#### Automated scheduler of the scalars $\lambda_{1}$ and $\lambda_{2}$.
Given a training set $\mathcal{D}_{\mathrm{train}}=\{(x_{i},y_{i})\}_{i=1}^{n}$ of $n$ training data points, we set
$$
\lambda_{1}^{(e)}=1-\mathrm{SA}(\mathcal{D}_{\mathrm{train}};\{\theta_{1}^{(e)}+\bm{B}^{(e)}\bm{A}^{(e)},\theta_{2}^{(e)}\})^{\alpha}, \tag{6}
$$

$$
\lambda_{2}^{(e)}=6\cdot\mathrm{SA}(\mathcal{D}_{\mathrm{train}};\{\theta_{1}^{(e)}+\bm{B}^{(e)}\bm{A}^{(e)},\theta_{2}^{(e)}\})^{\alpha}, \tag{7}
$$
where $\lambda_{1}^{(e)}$ and $\lambda_{2}^{(e)}$ denote the weight terms at Epoch $e$, $\mathrm{SA}(\mathcal{D}_{\mathrm{train}};\{\theta_{1}^{(e)}+\bm{B}^{(e)}\bm{A}^{(e)},\theta_{2}^{(e)}\})$ refers to the standard accuracy (SA) on natural training data at Epoch $e$ evaluated via the LoRa branch, and $\alpha>0$ is used to sharpen the SA, inspired by Zhu et al. ([2021](https://arxiv.org/html/2310.01818#bib.bib39)). We set $\alpha=1.0$ by default.
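Eqs. (6) and (7) can be sketched directly; `lam_schedule` is an illustrative name:

```python
def lam_schedule(sa, alpha=1.0):
    """Eqs. (6)-(7): weights as a function of the LoRa branch's standard
    training accuracy sa in [0, 1] (alpha sharpens the accuracy)."""
    return 1.0 - sa ** alpha, 6.0 * sa ** alpha

# Early in RFT (low SA) the natural objective dominates; as SA rises,
# lambda_1 shrinks while the teacher-confidence weight lambda_2 grows.
early, late = lam_schedule(0.2), lam_schedule(0.9)
assert early[0] > late[0] and early[1] < late[1]
```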
Next, we explain the design of the scheduler. From the perspective of knowledge distillation, the quality of the natural soft labels from the teacher model (i.e., the LoRa branch) is critical to improving the performance of the student model (i.e., the FE) (Zhu et al., [2021](https://arxiv.org/html/2310.01818#bib.bib39)). During the early stage of RFT, the standard training error (i.e., $\lambda_{1}$) is large, which first encourages the LoRa branch to quickly learn the natural training data, thus making it output high-quality natural soft labels with high natural training accuracy. As the standard training accuracy increases (i.e., $1-\lambda_{1}$ increases), the model gradually focuses less on learning natural data and pays more attention to learning adversarial data to improve adversarial robustness.
The scalar $\lambda_{2}$ can be regarded as the confidence of the teacher model in its soft labels. Inspired by Zhu et al. ([2021](https://arxiv.org/html/2310.01818#bib.bib39)), when the teacher model's standard training accuracy becomes higher, the teacher model should be more confident in the correctness of its outputs. Therefore, $\lambda_{2}$ is positively correlated with the standard training accuracy evaluated on the LoRa branch.
5 Experiments
-------------
In this section, we first conduct robustness benchmarks on various downstream tasks to validate the effectiveness of our proposed automated RFT via a disentangled LoRa branch (AutoLoRa), shown in Algorithm [1](https://arxiv.org/html/2310.01818#alg1). Then, we conduct ablation studies on the rank $r_{\mathrm{nat}}$, the adversarial budget $\epsilon_{\mathrm{pt}}$ during robust pre-training, the sharpening hyperparameter $\alpha$, and the automated scheduler of the LR.
|
| 273 |
+
|
| 274 |
+
#### Baselines.
We take vanilla RFT (Zhang et al., [2019](https://arxiv.org/html/2310.01818#bib.bib38); Jiang et al., [2020](https://arxiv.org/html/2310.01818#bib.bib17)) and TWINS (Liu et al., [2023](https://arxiv.org/html/2310.01818#bib.bib24)) as the baseline methods. For the configurations of the learning rate and the scalars $\beta$ and $\gamma$, we exactly follow TWINS and provide the detailed configurations in Table [7](https://arxiv.org/html/2310.01818#A1.T7) (Appendix [A.1](https://arxiv.org/html/2310.01818#A1.SS1)).
#### Pre-trained feature extractors.
In our work, we utilized ResNet-18 (He et al., [2016](https://arxiv.org/html/2310.01818#bib.bib13)) and ResNet-50 models that were adversarially pre-trained on ImageNet-1K (Deng et al., [2009](https://arxiv.org/html/2310.01818#bib.bib9)) at $224\times 224$ resolution. To be specific, we downloaded the pre-trained weights from the [official GitHub](https://github.com/microsoft/robust-models-transfer) of Salman et al. ([2020](https://arxiv.org/html/2310.01818#bib.bib26)). Following the settings of TWINS (Liu et al., [2023](https://arxiv.org/html/2310.01818#bib.bib24)), we used pre-trained models with adversarial budget $\epsilon_{\mathrm{pt}} = 4/255$ by default.
#### Downstream tasks.
We considered six datasets as the downstream tasks. ➊ CIFAR-10 with 10 classes and ➋ CIFAR-100 (Krizhevsky, [2009](https://arxiv.org/html/2310.01818#bib.bib19)) with 100 classes are low-resolution image datasets, whose training and test sets have 50,000 and 10,000 images, respectively. ➌ The Describable Textures Dataset with 57 classes (DTD-57) (Cimpoi et al., [2014](https://arxiv.org/html/2310.01818#bib.bib7)) is a collection of high-resolution textural images in the wild, which contains 2,760 training images and 1,880 test images. ➍ The Stanford Dogs dataset with 120 dog categories (DOG-120) (Khosla et al., [2011](https://arxiv.org/html/2310.01818#bib.bib18)) contains 12,000 training images and 8,580 test images. ➎ Caltech-UCSD Birds-200-2011 with 200 bird categories (CUB-200) (Wah et al., [2011](https://arxiv.org/html/2310.01818#bib.bib28)) is a high-resolution bird image dataset for fine-grained image classification, which contains 5,994 training images and 5,794 validation images. ➏ Caltech-256 with 257 classes (Griffin et al., [2007](https://arxiv.org/html/2310.01818#bib.bib12)) is a high-resolution dataset composed of 42,099 images in total; we randomly split it into 38,550 training images and 3,549 test images. Following Liu et al. ([2023](https://arxiv.org/html/2310.01818#bib.bib24)), we resized the images from both the low-resolution datasets (CIFAR-10 and CIFAR-100) and the high-resolution datasets (DTD-57, DOG-120, CUB-200, Caltech-256) to $224\times 224$ resolution, so that the input sizes are the same for pre-training and fine-tuning.
#### Training configurations.
For a fair comparison, we set the maximum number of training epochs to $E=60$ following Liu et al. ([2023](https://arxiv.org/html/2310.01818#bib.bib24)). We used SGD as the optimizer, fixed the weight decay of SGD at $1e{-}4$, and set the rank of the LoRa branch to $r_{\mathrm{nat}}=8$ by default. We randomly selected 5% of the entire training data as the validation set. During training, we used PGD-10 with an adversarial budget of $8/255$ and a step size of $2/255$ to generate the adversarial training and validation data.
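To make the attack configuration concrete, here is a minimal NumPy sketch of $L_\infty$ PGD with a random start, using the budget $8/255$, step size $2/255$, and 10 steps stated above. The toy linear classifier and its closed-form input gradient are our own illustrative assumptions; the actual experiments run PGD-10 through the full network with autodiff.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_attack(x, y, W, eps=8/255, step=2/255, n_steps=10, seed=0):
    """L-inf PGD on a toy linear classifier (logits = W @ x).

    For cross-entropy loss, the gradient w.r.t. x is W.T @ (softmax(Wx) - onehot(y)),
    so no autodiff is needed in this sketch.
    """
    rng = np.random.default_rng(seed)
    x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)  # random start
    for _ in range(n_steps):
        p = softmax(W @ x_adv)
        p[y] -= 1.0                                # softmax(Wx) - onehot(y)
        grad = W.T @ p
        x_adv = x_adv + step * np.sign(grad)       # signed gradient ascent on the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep a valid pixel range
    return x_adv
```

The projection step after each update is what distinguishes PGD from a single FGSM step: the perturbation can never leave the $\epsilon$-ball around the clean input.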
#### Evaluation metrics.
We take the standard test accuracy (SA) as the measurement of generalization ability in downstream tasks. To evaluate adversarial robustness, we use the robust test accuracy evaluated by PGD-10 and AutoAttack (AA) (Croce & Hein, [2020](https://arxiv.org/html/2310.01818#bib.bib8)) with an adversarial budget of $8/255$. For each method, we select the checkpoint with the best PGD-10 test accuracy and report its performance in our results. We repeated the experiments 3 times and then conducted t-tests between the results of the baselines (i.e., vanilla RFT and TWINS) and the results of our proposed AutoLoRa. We report the p-values of the t-tests in Table [8](https://arxiv.org/html/2310.01818#A1.T8) (Appendix [A.3](https://arxiv.org/html/2310.01818#A1.SS3)), which validate the significance of the improvement achieved by our AutoLoRa.
### 5.1 Robustness Benchmarks on Various Downstream Tasks
Table 1: Performance benchmarks using ResNet-18. “SA” refers to the standard test accuracy. “PGD-10” and “AA” refer to the robust test accuracy evaluated by PGD-10 and AutoAttack, respectively. We report p-values of t-tests in Table[8](https://arxiv.org/html/2310.01818#A1.T8 "Table 8 ‣ A.3 Validating Significance via T-Tests ‣ Appendix A Extensive Experimental Details ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework") to justify the significance of performance gain.
Table 2: Performance benchmarks using ResNet-50. We report p-values of t-tests in Table[8](https://arxiv.org/html/2310.01818#A1.T8 "Table 8 ‣ A.3 Validating Significance via T-Tests ‣ Appendix A Extensive Experimental Details ‣ AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework") to justify the significance of performance gain.
In Tables [1](https://arxiv.org/html/2310.01818#S5.T1) and [2](https://arxiv.org/html/2310.01818#S5.T2), we demonstrate the performance benchmarks on six downstream tasks achieved by ResNet-18 and ResNet-50, respectively. We annotate the robust test accuracy achieved by our proposed AutoLoRa in bold with underlining in Tables [1](https://arxiv.org/html/2310.01818#S5.T1) and [2](https://arxiv.org/html/2310.01818#S5.T2). We observe that AutoLoRa consistently obtains higher robust test accuracy under both PGD-10 and AA for each downstream task and each backbone. Besides, the p-values obtained by t-tests in Table [8](https://arxiv.org/html/2310.01818#A1.T8) (Appendix [A.3](https://arxiv.org/html/2310.01818#A1.SS3)) further validate that our improvement in adversarial robustness is significant. Notably, even compared with the previous SOTA method TWINS (Liu et al., [2023](https://arxiv.org/html/2310.01818#bib.bib24)), AutoLoRa achieves a 2.03% (from 25.45% to 27.48%) robustness gain using ResNet-18 on CIFAR-100 and a 3.03% (from 14.41% to 17.44%) robustness gain using ResNet-50 on DOG-120.
### 5.2 Ablation Study
In this subsection, we conduct ablation studies on the rank $r_{\mathrm{nat}}$, the adversarial budget $\epsilon_{\mathrm{pt}}$ used during robust pre-training, the sharpening hyperparameter $\alpha$, and the automated scheduler of the LR.
#### The rank $r_{\mathrm{nat}}$.
Table [3](https://arxiv.org/html/2310.01818#S5.T3) shows that the test accuracy gradually rises in most cases as the rank $r_{\mathrm{nat}}$ increases from 2 to 8, which indicates that a higher rank yields better robustness. The reason could be that a higher rank gives the LoRa branch more tunable parameters for fitting natural data and outputting higher-quality natural soft labels, which coincides with the discovery in Zhu et al. ([2021](https://arxiv.org/html/2310.01818#bib.bib39)) that high-quality soft labels are beneficial to improving performance. However, when $r_{\mathrm{nat}} \geq 8$, the performance gain is marginal, which means that $r_{\mathrm{nat}} = 8$ is enough to capture the features of natural data and provide accurate natural soft labels. Therefore, we keep $r_{\mathrm{nat}} = 8$ for the experiments in Section [5.1](https://arxiv.org/html/2310.01818#S5.SS1) by default.
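Table 3 also reports the "Param. Ratio" of the LoRa branch. For a single weight matrix this ratio is easy to derive: a low-rank branch that factorizes a $d_{\mathrm{out}} \times d_{\mathrm{in}}$ weight into two factors of rank $r$ adds $r(d_{\mathrm{out}} + d_{\mathrm{in}})$ parameters. The helper below is our own illustration of this arithmetic, not code from the paper.

```python
def lora_param_ratio(d_out: int, d_in: int, r: int) -> float:
    """Ratio of low-rank branch parameters to the full weight's parameters.

    A rank-r factorization B @ A, with B of shape (d_out, r) and A of shape
    (r, d_in), adds r * (d_out + d_in) parameters on top of the frozen
    d_out * d_in weight.
    """
    lora_params = r * (d_out + d_in)
    full_params = d_out * d_in
    return lora_params / full_params

# e.g. a 512x512 layer with rank 8 adds ~3% extra parameters
ratio = lora_param_ratio(512, 512, 8)
```

This is why increasing the rank from 2 to 8 remains cheap: the overhead grows only linearly in $r$, while the frozen weight stays quadratic in the layer width.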
#### The adversarial budget $\epsilon_{\mathrm{pt}}$ during robust pre-training.
In Table [4](https://arxiv.org/html/2310.01818#S5.T4), we report the performance using pre-trained FEs with different adversarial budgets $\epsilon_{\mathrm{pt}} \in \{0, 1/255, 2/255, 4/255, 8/255\}$. The results show that a larger $\epsilon_{\mathrm{pt}}$ is beneficial to improving performance, and our proposed AutoLoRa consistently achieves better adversarial robustness than the baselines.
#### The automated scheduler of the LR.
We apply our proposed automated LR scheduler to TWINS and report the performance in Table [5](https://arxiv.org/html/2310.01818#S5.T5). We observe that TWINS with the automated LR scheduler achieves performance comparable to TWINS with tuned hyperparameters, which validates the effectiveness of our proposed automatic LR scheduler.
#### The sharpening hyperparameter $\alpha$.
We report the performance under different $\alpha$ in Table [6](https://arxiv.org/html/2310.01818#S5.T6). We observe that both standard and robust test accuracy rise as $\alpha$ increases from 0.2 to 1.0, while the robust test accuracy begins to degrade as $\alpha$ increases from 1.0 to 5.0. This indicates that we do not need to sharpen the standard test accuracy values. Therefore, we keep $\alpha = 1.0$ by default.
Table 3: We report the effect of various ranks $r_{\mathrm{nat}}$ on the performance of downstream tasks, as well as the ratio of the LoRa branch's parameters to the original parameters (denoted as "Param. Ratio"). SA and RA refer to standard test accuracy and PGD-10 robust test accuracy, respectively.
Table 4: The effect of the adversarial budget $\epsilon_{\mathrm{pt}}$ during robust pre-training. We keep the adversarial budget at 8/255 during RFT and robustness evaluation.
Table 5: The effect of the automated scheduler of LR on TWINS.
Table 6: The effect of $\alpha$ on AutoLoRa.
6 Conclusions
-------------
This paper proposed automated robust fine-tuning disentangled via a low-rank branch (AutoLoRa), which can automatically convert a pre-trained feature extractor into an adversarially robust model for a downstream task. We highlighted that vanilla RFT and TWINS suffer from divergent gradient directions when optimizing both the adversarial and standard objectives through the FE. This issue makes optimization unstable, thus impeding adversarial robustness and making RFT sensitive to hyperparameters. To solve this issue, we proposed a low-rank (LoRa) branch so that RFT optimizes the adversarial and standard objectives via the FE and the LoRa branch, respectively. Besides, we proposed heuristic strategies for automating the scheduling of the hyperparameters. Comprehensive empirical results validate that our proposed AutoLoRa consistently yields state-of-the-art adversarial robustness in downstream tasks without tuning hyperparameters. Therefore, AutoLoRa can serve as an effective and parameter-free RFT framework.
Acknowledgements
----------------
This research is supported by the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
References
----------
* Bommasani et al. (2021) Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. _arXiv preprint arXiv:2108.07258_, 2021.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in Neural Information Processing Systems_, 33:1877–1901, 2020.
* Buch et al. (2018) Varun H Buch, Irfan Ahmed, and Mahiben Maruthappu. Artificial intelligence in medicine: current trends and future possibilities. _British Journal of General Practice_, 68(668):143–144, 2018.
* Chavan et al. (2023) Arnav Chavan, Zhuang Liu, Deepak Gupta, Eric Xing, and Zhiqiang Shen. One-for-all: Generalized LoRA for parameter-efficient fine-tuning. _arXiv preprint arXiv:2306.07967_, 2023.
* Chen et al. (2020a) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In _International Conference on Machine Learning_, pp. 1597–1607. PMLR, 2020a.
* Chen et al. (2020b) Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. Big self-supervised models are strong semi-supervised learners. _Advances in Neural Information Processing Systems_, 33:22243–22255, 2020b.
* Cimpoi et al. (2014) Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pp. 3606–3613, 2014.
* Croce & Hein (2020) Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In _International Conference on Machine Learning_, pp. 2206–2216. PMLR, 2020.
* Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_, pp. 248–255. IEEE, 2009.
* Fan et al. (2021) Lijie Fan, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, and Chuang Gan. When does contrastive learning preserve adversarial robustness from pretraining to finetuning? _Advances in Neural Information Processing Systems_, 34:21480–21492, 2021.
* Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_, 2014.
* Griffin et al. (2007) Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. Technical report, 2007.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pp. 770–778, 2016.
* Hendrycks et al. (2019) Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In _International Conference on Machine Learning_, pp. 2712–2721. PMLR, 2019.
* Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In _International Conference on Machine Learning_, pp. 2790–2799. PMLR, 2019.
* Hu et al. (2021) Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. _arXiv preprint arXiv:2106.09685_, 2021.
* Jiang et al. (2020) Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. Robust pre-training by adversarial contrastive learning. _Advances in Neural Information Processing Systems_, 33:16199–16210, 2020.
* Khosla et al. (2011) Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In _First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition_, Colorado Springs, CO, June 2011.
* Krizhevsky (2009) Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
* Kurakin et al. (2018) Alexey Kurakin, Ian J Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In _Artificial Intelligence Safety and Security_, pp. 99–112. Chapman and Hall/CRC, 2018.
* Lin et al. (2019) Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, and John E Hopcroft. Nesterov accelerated gradient and scale invariance for adversarial attacks. _arXiv preprint arXiv:1908.06281_, 2019.
* Lin et al. (2020) Zhaojiang Lin, Andrea Madotto, and Pascale Fung. Exploring versatile generative language model via parameter-efficient transfer learning. _arXiv preprint arXiv:2004.03829_, 2020.
* Liu et al. (2022) Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Xiangyang Ji, Antoni Chan, and Rong Jin. Improved fine-tuning by better leveraging pre-training data. _Advances in Neural Information Processing Systems_, 35:32568–32581, 2022.
* Liu et al. (2023) Ziquan Liu, Yi Xu, Xiangyang Ji, and Antoni B Chan. TWINS: A fine-tuning framework for improved transferability of adversarial robustness and generalization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 16436–16446, 2023.
* Madry et al. (2018) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In _ICLR_, 2018.
* Salman et al. (2020) Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adversarially robust ImageNet models transfer better? _Advances in Neural Information Processing Systems_, 33:3533–3545, 2020.
* Shafahi et al. (2019) Ali Shafahi, Parsa Saadatpanah, Chen Zhu, Amin Ghiasi, Christoph Studer, David Jacobs, and Tom Goldstein. Adversarially robust transfer learning. _arXiv preprint arXiv:1905.08232_, 2019.
* Wah et al. (2011) Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. Technical report, 2011.
* Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. _arXiv preprint arXiv:1804.07461_, 2018.
* Wang et al. (2020) Haotao Wang, Tianlong Chen, Shupeng Gui, Ting-Kuei Hu, Ji Liu, and Zhangyang Wang. Once-for-all adversarial training: In-situ tradeoff between robustness and accuracy for free. _Advances in Neural Information Processing Systems_, 33:7449–7461, 2020.
* Wang & He (2021) Xiaosen Wang and Kun He. Enhancing the transferability of adversarial attacks through variance tuning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 1924–1933, 2021.
* Xie et al. (2020) Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan L Yuille, and Quoc V Le. Adversarial examples improve image recognition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 819–828, 2020.
* Xu et al. (2023a) Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, and Mohan Kankanhalli. Enhancing adversarial contrastive learning via adversarial invariant regularization. _arXiv preprint arXiv:2305.00374_, 2023a.
* Xu et al. (2023b) Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, and Mohan Kankanhalli. Efficient adversarial contrastive learning via robustness-aware coreset selection. _arXiv preprint arXiv:2302.03857_, 2023b.
* Yu et al. (2022) Qiying Yu, Jieming Lou, Xianyuan Zhan, Qizhang Li, Wangmeng Zuo, Yang Liu, and Jingjing Liu. Adversarial contrastive learning via asymmetric InfoNCE. In _European Conference on Computer Vision_, pp. 53–69. Springer, 2022.
* Yuan et al. (2022) Zheng Yuan, Jie Zhang, and Shiguang Shan. Adaptive image transformations for transfer-based adversarial attack. In _European Conference on Computer Vision_, pp. 1–17. Springer, 2022.
* Zhang et al. (2022) Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D Yoo, and In So Kweon. Decoupled adversarial contrastive learning for self-supervised adversarial robustness. In _European Conference on Computer Vision_, pp. 725–742. Springer, 2022.
* Zhang et al. (2019) Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. In _ICML_, 2019.
* Zhu et al. (2021) Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, and Hongxia Yang. Reliable adversarial distillation with unreliable teachers. _arXiv preprint arXiv:2106.04928_, 2021.
Appendix A Extensive Experimental Details
-----------------------------------------
### A.1 Configurations for Baselines
We report the hyperparameters for reproducing the results of the baselines. Note that we exactly followed the hyperparameters provided by Liu et al. ([2023](https://arxiv.org/html/2310.01818#bib.bib24)), since they were obtained by a time-consuming grid search.
Table 7: The hyperparameter configurations of vanilla RFT and TWINS in our experiments, following Liu et al. ([2023](https://arxiv.org/html/2310.01818#bib.bib24)). The format is $(\eta^{(0)}, \mathrm{WD}, \gamma)$ for TWINS and $(\eta^{(0)}, \mathrm{WD})$ for vanilla RFT, where $\eta^{(0)}$, $\mathrm{WD}$, and $\gamma$ are the initial learning rate, the weight decay, and the scalar, respectively.
### A.2 Extensive Results
Here, we provide additional results on gradient similarity and adversarial robustness evaluated on CIFAR-10 (Krizhevsky, [2009](https://arxiv.org/html/2310.01818#bib.bib19)) and CUB-200 (Wah et al., [2011](https://arxiv.org/html/2310.01818#bib.bib28)). The experimental details exactly follow Section [5](https://arxiv.org/html/2310.01818#S5).

(a)

(b)
Figure 2: (a) The cosine similarity between the gradients of the natural and adversarial objectives w.r.t. the feature extractor (FE). (b) The robust test accuracy evaluated via PGD-10.
### A.3 Validating Significance via T-Tests
We repeated the experiments using random seeds from $\{0, 6, 66\}$. Therefore, for each downstream task, each method has three results for each of SA, PGD-10, and AA. We conducted t-tests between the three SA/PGD-10/AA results obtained by vanilla RFT and those obtained by our proposed AutoLoRa, as well as between the results of TWINS and AutoLoRa. We report the p-values obtained by the t-tests in Table [8](https://arxiv.org/html/2310.01818#A1.T8). A p-value smaller than 0.05 indicates that the improvement gained by our proposed method is significant.
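Such a significance check can be reproduced with a standard two-sample t-test. The sketch below computes Welch's t statistic and degrees of freedom from three per-seed results using only the standard library (`scipy.stats.ttest_ind` would additionally return the p-value reported in Table 8); the sample accuracies are illustrative values of ours, not numbers from the paper.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)  # per-sample variance of the mean
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# illustrative per-seed robust accuracies, three seeds per method
baseline = [25.3, 25.5, 25.6]
ours = [27.4, 27.5, 27.6]
t, df = welch_t(ours, baseline)  # large positive t => a likely significant gain
```

With only three seeds per group the degrees of freedom are small, so a large t statistic is needed for the p-value to drop below 0.05.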
We annotate a p-value in bold when it is smaller than 0.05 and the performance of AutoLoRa is better than the baseline. Table [8](https://arxiv.org/html/2310.01818#A1.T8) validates that our proposed AutoLoRa achieves a significant improvement in most cases.
Table 8: We report the p-values of t-tests between our proposed AutoLoRa (ours) and vanilla RFT as well as TWINS.
| Model | Task | SA (vanilla RFT vs. ours) | PGD-10 (vanilla RFT vs. ours) | AA (vanilla RFT vs. ours) | SA (TWINS vs. ours) | PGD-10 (TWINS vs. ours) | AA (TWINS vs. ours) |
|---|---|---|---|---|---|---|---|
| ResNet-18 | CIFAR-10 | 0.0001 | 0.0014 | 0.0020 | 0.0088 | 0.0002 | 0.0019 |
| ResNet-18 | CIFAR-100 | 0.0010 | 8e-05 | 0.0008 | 0.0551 | 5e-05 | 0.0091 |
| ResNet-18 | DTD-57 | 0.2393 | 0.0004 | 0.0037 | 0.4447 | 0.0036 | 0.0043 |
| ResNet-18 | DOG-120 | 0.2568 | 0.0026 | 0.0002 | 0.4993 | 0.0043 | 0.0008 |
| ResNet-18 | CUB-200 | 0.0215 | 0.0054 | 0.0051 | 0.8731 | 0.0244 | 0.0216 |
| ResNet-18 | Caltech-256 | 0.0003 | 0.0023 | 0.0033 | 0.0458 | 0.0021 | 0.0026 |
| ResNet-50 | CIFAR-10 | 0.0019 | 0.0029 | 0.0056 | 0.0904 | 0.0341 | 0.0125 |
| ResNet-50 | CIFAR-100 | 1e-05 | 3e-05 | 0.0024 | 0.0044 | 0.0004 | 0.0018 |
| ResNet-50 | DTD-57 | 0.0320 | 9e-05 | 0.0002 | 0.0053 | 0.0031 | 0.0022 |
| ResNet-50 | DOG-120 | 0.1595 | 0.0414 | 5e-06 | 0.0031 | 0.0329 | 6e-06 |
| ResNet-50 | CUB-200 | 6e-06 | 5e-06 | 2e-06 | 0.1027 | 0.0010 | 0.0048 |
| ResNet-50 | Caltech-256 | 1e-05 | 0.0457 | 0.0002 | 0.0151 | 0.0413 | 0.0050 |