SlowGuess committed on
Commit 1c9f708 · verified · 1 Parent(s): 4acd9bc

Add Batch 8a0df344-8624-4ce1-8663-584eb11aa50d

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. a3fladversariallyadaptivebackdoorattackstofederatedlearning/510fa39f-c467-465d-bdb6-0e240c0047ca_content_list.json +3 -0
  2. a3fladversariallyadaptivebackdoorattackstofederatedlearning/510fa39f-c467-465d-bdb6-0e240c0047ca_model.json +3 -0
  3. a3fladversariallyadaptivebackdoorattackstofederatedlearning/510fa39f-c467-465d-bdb6-0e240c0047ca_origin.pdf +3 -0
  4. a3fladversariallyadaptivebackdoorattackstofederatedlearning/full.md +523 -0
  5. a3fladversariallyadaptivebackdoorattackstofederatedlearning/images.zip +3 -0
  6. a3fladversariallyadaptivebackdoorattackstofederatedlearning/layout.json +3 -0
  7. abdiffuserfullatomgenerationofinvitrofunctioningantibodies/cb6c0a83-7567-4bb7-b689-e00aee5960f2_content_list.json +3 -0
  8. abdiffuserfullatomgenerationofinvitrofunctioningantibodies/cb6c0a83-7567-4bb7-b689-e00aee5960f2_model.json +3 -0
  9. abdiffuserfullatomgenerationofinvitrofunctioningantibodies/cb6c0a83-7567-4bb7-b689-e00aee5960f2_origin.pdf +3 -0
  10. abdiffuserfullatomgenerationofinvitrofunctioningantibodies/full.md +0 -0
  11. abdiffuserfullatomgenerationofinvitrofunctioningantibodies/images.zip +3 -0
  12. abdiffuserfullatomgenerationofinvitrofunctioningantibodies/layout.json +3 -0
  13. abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/7f9850ff-ded7-44ba-9f78-8655b5dbc4a3_content_list.json +3 -0
  14. abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/7f9850ff-ded7-44ba-9f78-8655b5dbc4a3_model.json +3 -0
  15. abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/7f9850ff-ded7-44ba-9f78-8655b5dbc4a3_origin.pdf +3 -0
  16. abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/full.md +288 -0
  17. abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/images.zip +3 -0
  18. abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/layout.json +3 -0
  19. abidebythelawandfollowtheflowconservationlawsforgradientflows/d4cb9e69-f570-490a-b038-de3edb695979_content_list.json +3 -0
  20. abidebythelawandfollowtheflowconservationlawsforgradientflows/d4cb9e69-f570-490a-b038-de3edb695979_model.json +3 -0
  21. abidebythelawandfollowtheflowconservationlawsforgradientflows/d4cb9e69-f570-490a-b038-de3edb695979_origin.pdf +3 -0
  22. abidebythelawandfollowtheflowconservationlawsforgradientflows/full.md +380 -0
  23. abidebythelawandfollowtheflowconservationlawsforgradientflows/images.zip +3 -0
  24. abidebythelawandfollowtheflowconservationlawsforgradientflows/layout.json +3 -0
  25. acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/593c13dd-0a60-4ada-ac05-8c0e10b2ddd1_content_list.json +3 -0
  26. acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/593c13dd-0a60-4ada-ac05-8c0e10b2ddd1_model.json +3 -0
  27. acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/593c13dd-0a60-4ada-ac05-8c0e10b2ddd1_origin.pdf +3 -0
  28. acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/full.md +856 -0
  29. acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/images.zip +3 -0
  30. acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/layout.json +3 -0
  31. acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/2f194880-bd7f-48af-a9ec-650f3b491eec_content_list.json +3 -0
  32. acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/2f194880-bd7f-48af-a9ec-650f3b491eec_model.json +3 -0
  33. acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/2f194880-bd7f-48af-a9ec-650f3b491eec_origin.pdf +3 -0
  34. acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/full.md +0 -0
  35. acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/images.zip +3 -0
  36. acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/layout.json +3 -0
  37. acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/86b1e1e5-defb-41f3-ba94-b000113a0604_content_list.json +3 -0
  38. acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/86b1e1e5-defb-41f3-ba94-b000113a0604_model.json +3 -0
  39. acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/86b1e1e5-defb-41f3-ba94-b000113a0604_origin.pdf +3 -0
  40. acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/full.md +607 -0
  41. acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/images.zip +3 -0
  42. acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/layout.json +3 -0
  43. acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/47208fdc-bb32-4554-b369-ea7c723de9b9_content_list.json +3 -0
  44. acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/47208fdc-bb32-4554-b369-ea7c723de9b9_model.json +3 -0
  45. acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/47208fdc-bb32-4554-b369-ea7c723de9b9_origin.pdf +3 -0
  46. acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/full.md +799 -0
  47. acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/images.zip +3 -0
  48. acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/layout.json +3 -0
  49. acceleratingexplorationwithunlabeledpriordata/ea7177fa-8e77-4951-ba37-20328d4330c6_content_list.json +3 -0
  50. acceleratingexplorationwithunlabeledpriordata/ea7177fa-8e77-4951-ba37-20328d4330c6_model.json +3 -0
a3fladversariallyadaptivebackdoorattackstofederatedlearning/510fa39f-c467-465d-bdb6-0e240c0047ca_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:60506a251342763b1ef25e88cc5feef0486a313a0d2b4132999cd85b97fc7c68
3
+ size 121356
a3fladversariallyadaptivebackdoorattackstofederatedlearning/510fa39f-c467-465d-bdb6-0e240c0047ca_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3b72ed7ec0aaf961d628acf5da33f70f49b75fe455c79077e5f6f61c41c263aa
3
+ size 148718
a3fladversariallyadaptivebackdoorattackstofederatedlearning/510fa39f-c467-465d-bdb6-0e240c0047ca_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:33b16e946730f6e2b00940888bc99514b1d8a3b3a59adb3dcc81095c14c4257b
3
+ size 1509387
a3fladversariallyadaptivebackdoorattackstofederatedlearning/full.md ADDED
@@ -0,0 +1,523 @@
1
+ # A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning
2
+
3
+ Hangfan Zhang, Jinyuan Jia, Jinghui Chen, Lu Lin, Dinghao Wu
4
+
5
+ {hz5148,jinyuan,jzc5917,lulin,dinghao}@psu.edu
6
+
7
+ The Pennsylvania State University
8
+
9
+ # Abstract
10
+
11
+ Federated Learning (FL) is a distributed machine learning paradigm that allows multiple clients to train a global model collaboratively without sharing their local training data. Due to its distributed nature, many studies have shown that it is vulnerable to backdoor attacks. However, existing studies usually used a predetermined, fixed backdoor trigger or optimized it based solely on the local data and model without considering the global training dynamics. This leads to sub-optimal and less durable attack effectiveness, i.e., the attack success rate is low when the attack budget is limited and decreases quickly once the attacker can no longer perform attacks. To address these limitations, we propose A3FL, a new backdoor attack which adversarially adapts the backdoor trigger to make it less likely to be removed by the global training dynamics. Our key intuition is that the difference between the global model and the local model in FL makes the locally optimized trigger much less effective when transferred to the global model. We solve this by optimizing the trigger to survive even the scenario where the global model is trained to directly unlearn the trigger. Extensive experiments on benchmark datasets are conducted for twelve existing defenses to comprehensively evaluate the effectiveness of our A3FL. Our code is available at https://github.com/hfzhang31/A3FL.
12
+
13
+ # 1 Introduction
14
+
15
+ Recent years have witnessed the rapid development of Federated Learning (FL) [1, 2, 3], an advanced distributed learning paradigm. With the assistance of a cloud server, multiple clients such as smartphones or IoT devices train a global model collaboratively based on their private training data through multiple communication rounds. In each communication round, the cloud server selects a part of the clients and sends the current global model to them. Each selected client first uses the received global model to initialize its local model, then trains it based on its local dataset, and finally sends the trained local model back to the cloud server. The cloud server aggregates local models from selected clients to update the current global model. FL has been widely used in many safety- and privacy-critical applications [4, 5, 6, 7].
16
+
17
+ Numerous studies [8, 9, 10, 11, 12, 13, 14] have shown that the distributed nature of FL provides an attack surface for backdoor attacks, where an attacker can compromise some clients and utilize them to inject a backdoor into the global model such that the model behaves as the attacker desires. In particular, the backdoored global model behaves normally on clean testing inputs but predicts any testing inputs stamped with an attacker-chosen backdoor trigger as a specific target class.
18
+
19
+ Depending on whether the backdoor trigger is optimized, we can categorize existing attacks into fixed-trigger attacks [12, 11, 13, 8] and trigger-optimization attacks [10, 9]. In a fixed-trigger attack, an attacker pre-selects a fixed backdoor trigger and thus does not utilize any information from the FL training process. While a fixed-trigger attack can be more efficient and straightforward, it usually suffers from limited effectiveness and more obvious utility drops.
20
+
21
+ In a trigger-optimization attack, an attacker optimizes the backdoor trigger to enhance the attack. Fang et al. [10] proposed to maximize the difference between latent representations of clean and trigger-stamped samples. Lyu et al. [9] proposed to optimize the trigger and local model jointly with $\ell_2$ regularization on local model weights to bypass defenses. The major limitations of existing trigger-optimization attacks are twofold. First, they only leverage local models of compromised clients to optimize the backdoor trigger, which ignores the global training dynamics. Second, they strictly regulate the difference between the local and global model weights to bypass defenses, which in turn limits the backdoor effectiveness. As a result, the locally optimized trigger becomes much less effective when transferred to the global model as visualized in Figure 1. More details for this experiment can be found in Appendix A.1.
22
+
23
+ ![](images/fa722dfcf6982234a8c0d940abcc345a26cc8f91236605264925d2671484bda0.jpg)
24
+ Figure 1: A3FL and CerP [9] can achieve $100\%$ ASR on the local model. However, only A3FL simultaneously obtains a high global ASR.
25
+
26
+ Our contribution: In this paper, we propose Adversarially Adaptive Backdoor Attacks to Federated Learning (A3FL). Recall that existing works can only achieve sub-optimal attack performance because they ignore the global training dynamics. A3FL addresses this problem by adversarially adapting to the dynamic global model. We propose an adversarial adaptation loss, in which we apply an adversarial training-like method to optimize the backdoor trigger so that the injected backdoor can remain effective in the global model. In particular, we predict the movement of the global model by assuming that the server can access the backdoor trigger and train the global model to directly unlearn the trigger. We adaptively optimize the backdoor trigger to make it survive this adversarial global model, i.e., the backdoor cannot be easily unlearned even if the server is aware of the exact backdoor trigger. We empirically validate our intuition as well as the effectiveness and durability of the proposed attack.
27
+
28
+ We further conduct extensive experiments on widely-used benchmark datasets, including CIFAR-10 [15] and TinyImageNet [16], to evaluate the effectiveness of A3FL. Our empirical results demonstrate that A3FL is consistently effective across different datasets and settings. We further compare A3FL with 4 state-of-the-art backdoor attacks [12, 11, 10, 9] under 13 defenses [2, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27], and the results suggest that A3FL remarkably outperforms all baseline attacks by up to 10 times against all defenses. In addition, we find that A3FL is significantly more durable than all baselines. Finally, we conduct extensive ablation studies to evaluate the impact of hyperparameters on the performance of A3FL.
29
+
30
+ To summarize, our contributions can be outlined as follows.
31
+
32
+ - We propose A3FL, a novel backdoor attack to the FL paradigm based on adversarial adaptation, in which the attacker optimizes the backdoor trigger using an adversarial training-like technique to enhance its persistence within the global training dynamics.
33
+ - We empirically demonstrate that A3FL remarkably improves the durability and attack effectiveness of the injected backdoor in comparison to previous backdoor attacks.
34
+ - We comprehensively evaluate A3FL towards existing defenses and show that they are insufficient for mitigating A3FL, highlighting the need for new defenses.
35
+
36
+ # 2 Related Work
37
+
38
+ Federated learning: Federated Learning (FL) was first proposed in [1] to improve communication efficiency in decentralized learning. FedAvg [2] aggregates updates from each client and trains the global model with SGD. Subsequent studies [28, 29, 30, 31, 32] further improved the federated paradigm by making it more adaptive, general, and efficient.
39
+
40
+ Existing attacks and their limitations: In backdoor attacks to FL, an attacker aims to inject a backdoor into model updates of compromised clients such that the final global model aggregated by the server is backdoored. Existing backdoor attacks on FL can be classified into two categories: fixed-trigger attacks [12, 11, 8, 14, 13] and trigger-optimization attacks [10, 9].
41
+
42
+ Fixed-trigger attacks [8, 11, 14, 13, 12] pre-select a fixed backdoor trigger and poison the local training set with it. Since a fixed trigger may not be effective for backdoor injection, these attacks improve backdoor effectiveness through other approaches, including manually manipulating the poisoned updates. In particular, the scaling attack [8] scales up the updates to dominate other clients and improve attack effectiveness. DBA [11] splits the trigger into several sub-triggers for poisoning, which makes it stealthier against defenses. Neurotoxin [12] only attacks unimportant model parameters that are less frequently updated, preventing the backdoor from being quickly erased.
43
+
44
+ Trigger-optimization attacks [10, 9] optimize the backdoor trigger to enhance the attack. F3BA [10] optimized the trigger pattern to maximize the difference between latent representations of clean and trigger-stamped samples. F3BA also projected gradients to unimportant model parameters like Neurotoxin [12] to improve stealthiness. CerP [9] jointly optimized the trigger and the model weights with $\ell_2$ regularization to minimize the local model bias. These attacks can achieve higher attack performance than fixed-trigger attacks. However, they have the following limitations. First, they only consider the static local model and ignore the dynamic global model in FL, thus the optimized trigger could be sub-optimal on the global model. Second, they apply strict regularization on the difference between the local model and the global model, which harms the backdoor effectiveness. Therefore, they commonly need a larger attack budget (e.g., compromising more clients) to take effect. We will empirically demonstrate these limitations in Section 4.
45
+
46
+ Existing defenses: In this paper, we consider two categories of defenses in FL. The first category of defense mechanisms is deliberately designed to alleviate the risks of backdoor attacks [17, 19, 20, 18, 33] on FL. These defense strategies work by restricting clients' updates to prevent the attackers from effectively implanting a backdoor into the global model. For instance, the Norm Clipping [17] defense mechanism limits clients' behavior by clipping large updates, while the CRFL [19] defense mechanism uses parameter smoothing to impose further constraints on clients' updates.
47
+
48
+ The second category of defenses [26, 25, 24, 23, 22, 21, 34] is proposed to improve the robustness of FL against varied threats. These defense mechanisms operate under the assumption that the behavior of different clients is comparable. Therefore, they exclude abnormal clients to obtain an update that is recognized by most clients to train the global model. For instance, the Median [22] defense mechanism updates the global model using the coordinate-wise median of all clients' updates, while Krum [21] selects the client whose update has the smallest pairwise distance to the other clients' updates and trains the global model solely with that client's update. These defense mechanisms can achieve superior robustness compared to defense mechanisms that are specifically designed for backdoor attacks. Nevertheless, the drawback of this approach is evident: it often compromises the accuracy of the global model, as it tends to discard most of the information provided by clients, even if those updates are only potentially harmful.
49
+
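+ As a concrete illustration of the coordinate-wise aggregation used by Median [22], here is a minimal sketch (ours, not the original implementation); it assumes client updates arrive as PyTorch state dicts, and the helper name `median_aggregate` is purely illustrative.
+
+ ```python
+ import torch
+
+ def median_aggregate(client_updates):
+     """Coordinate-wise median over client updates, in the spirit of Median [22].
+
+     client_updates: list of dicts mapping parameter names to update tensors
+     (one dict per selected client). Returns one aggregated update dict.
+     """
+     aggregated = {}
+     for name in client_updates[0]:
+         # Stack the same parameter across clients and take the per-coordinate median.
+         stacked = torch.stack([update[name] for update in client_updates])
+         aggregated[name] = stacked.median(dim=0).values
+     return aggregated
+ ```
+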
50
+ In this paper, we also utilize backdoor unlearning [35, 36] to approximate existing defenses. Backdoor unlearning typically minimizes the prediction loss of trigger-stamped samples with respect to their ground-truth labels. Note that backdoor unlearning differs from so-called machine unlearning [37, 38, 39], in which the model is unlearned to "forget" specific training samples.
51
+
52
+ There exist additional defenses in FL that are beyond the scope of this paper. While these defenses may offer potential benefits, they also come with certain limitations in practice. For instance, FLTrust [40] assumed the server holds a clean validation dataset, which deviates from the typical FL setting. Cao et al. [41] proposed sample-wise certified robustness which demands hundreds of times of retraining and is computationally expensive.
53
+
54
+ # 3 Methodology
55
+
56
+ To formulate the backdoor attack scenario, we first introduce the federated learning setup and threat model. Motivated by the observation of the local-global gap in existing works due to the ignorance of global dynamics, we propose to optimize the trigger via an adversarial adaptation loss.
57
+
58
+ # 3.1 Federated Learning Setup and Threat Model
59
+
60
+ We consider a standard federated learning setup where $N$ clients aim to collaboratively train a global model $f$ with the coordination of a server. Let $\mathcal{D}_i$ be the private training dataset held by the client $i$ , where $i = 1,2,\ldots,N$ . The joint training dataset of the $N$ clients can be denoted as $\mathcal{D} = \cup_{i=1}^{N}\mathcal{D}_i$ .
61
+
62
+ In the $t$ -th communication round, the server first randomly selects $M$ clients, where $M \leq N$ . For simplicity, we use $S_{t}$ to denote the set of selected $M$ clients. The server then distributes the current version of the global model $\theta_{t}$ to the selected clients. Each selected client $i \in S_{t}$ first uses the global model to initialize its local model, then trains its local model on its local training dataset, and finally uploads the local model update (i.e., the difference between the trained local model and the received global model) to the server. We use $\Delta_{t}^{i}$ to denote the local model update of the client $i$ in the $t$ -th communication round. The server aggregates the received updates on model weights and updates the current global model weights as follows:
63
+
64
+ $$
65
+ \boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_{t} + \mathcal{A}\left(\left\{\boldsymbol{\Delta}_{t}^{i} \mid i \in S_{t}\right\}\right) \tag{1}
66
+ $$
67
+
68
+ where $\mathcal{A}$ is an aggregation rule adopted by the server. For instance, a widely used aggregation rule FedAvg [2] takes an average over the local model updates uploaded by clients.
69
+
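+ To make Equation 1 concrete, the sketch below applies the FedAvg rule: the server averages the uploaded updates and adds the result to the current global weights. This is our illustration under the assumption that updates are plain PyTorch state dicts; the function name `fedavg_step` is not from the paper.
+
+ ```python
+ import torch
+
+ def fedavg_step(global_state, client_updates):
+     """theta_{t+1} = theta_t + A({Delta_t^i}) with A = simple averaging (FedAvg)."""
+     new_state = {}
+     for name, theta in global_state.items():
+         # Average the update for this parameter over the selected clients S_t.
+         delta = torch.stack([update[name] for update in client_updates]).mean(dim=0)
+         new_state[name] = theta + delta
+     return new_state
+ ```
+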
70
+ Attacker's goal: We consider an attacker who aims to inject a backdoor into the global model. In particular, the attacker aims to make the injected backdoor effective and durable. The backdoor is effective if the backdoored global model predicts any testing inputs stamped with an attacker-chosen backdoor trigger as an attacker-chosen target class. The backdoor is durable if it remains in the global model even if the attacker-compromised clients stop uploading poisoned updates while the training of the global model continues. We note that a durable backdoor is essential for an attacker as the global model in a production federated learning system is periodically updated but it is impractical for the attacker to perform attacks in all time periods [12, 42]. Considering the durability of the backdoor enables us to understand the effectiveness of backdoor attacks under a strong constraint, i.e., the attacker can only attack the global model within a limited number of communication rounds.
71
+
72
+ Attacker's background knowledge and capability: Following threat models in previous studies [9, 12, 10, 11, 14, 8], we consider an attacker that can compromise a certain number of clients. In particular, the attacker can access the training datasets of those compromised clients. Moreover, the attacker can access the global model received by those clients and manipulate their uploaded updates to the server. As a practical matter, we consider the attacker can only control those compromised clients for a limited number of communication rounds [12, 11, 8, 10, 9].
73
+
74
+ # 3.2 Adversarially Adaptive Backdoor Attack (A3FL)
75
+
76
+ Our key observation is that existing backdoor attacks are less effective because they either use a fixed trigger pattern or optimize the trigger pattern only based on the local model of compromised clients. However, the global model is dynamically updated and therefore differs from the static local models. This poses two significant challenges for existing backdoor attacks. Firstly, a backdoor that works effectively on the local model may not be similarly effective on the global model. Secondly, the injected backdoor is rapidly eliminated since the global model is continuously updated by the server, making it challenging for attackers to maintain the backdoor's effectiveness over time.
77
+
78
+ We aim to address these challenges by adversarially adapting the backdoor trigger to make it persistent in the global training dynamics. Our primary objective is to optimize the backdoor trigger in a way that allows it to survive even in the scenario where the global model is trained to directly unlearn the backdoor [35, 36]. To better motivate our method, we first discuss the limitations of existing state-of-the-art backdoor attacks on federated learning.
79
+
80
+ Limitation of existing works: In recent state-of-the-art works [9, 10], the attacker optimizes the backdoor trigger to maximize its attack effectiveness and applies regularization techniques to bypass server-side defense mechanisms. Formally, given the trigger pattern $\delta$ and an arbitrary input $\mathbf{x}$ , the input stamped with the backdoor trigger can be denoted as $\mathbf{x} \oplus \delta$ , which is called the backdoored input. Suppose the target class is $\tilde{y}$ . Since the attacker has access to the training dataset of a compromised client $i$ , the backdoor trigger $\delta$ can be optimized using the following objective:
81
+
82
+ $$
83
+ \boldsymbol{\delta}^{*} = \underset{\boldsymbol{\delta}}{\operatorname{argmin}} \, \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}_{i}} \left[ \mathcal{L}(\mathbf{x} \oplus \boldsymbol{\delta}, \tilde{y}; \boldsymbol{\theta}_{t}) \right] \tag{2}
84
+ $$
85
+
86
+ where $\theta_{t}$ represents the global model weights in the $t$ -th communication round, and $\mathcal{L}$ is the classification loss function such as cross-entropy loss. To conduct a backdoor attack locally, the attacker randomly samples a small set of inputs $\mathcal{D}_i^b$ from the local training set $\mathcal{D}_i$ , and poisons the inputs in $\mathcal{D}_i^b$ by stamping them with the trigger. The attacker then injects a backdoor into the local model by optimizing the local model on the partially poisoned local training set with regularization to limit the gap between
87
+
88
+ the local and global model, i.e., $||\pmb{\theta} - \pmb{\theta}_t||$ . While the regularization term helps bypass server-side defenses, it greatly limits the backdoor effectiveness, as it only considers the current global model $\pmb{\theta}_t$ and thus fails to adapt to future global updates.
89
+
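+ For readers who prefer code, the following sketch shows the kind of local trigger optimization that Equation 2 describes: a small patch trigger is updated by gradient descent so that stamped inputs are classified as the target class by a fixed copy of the received global model. It is a simplified illustration under our own assumptions (patch location, step size, clamping), not the baselines' actual code.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def stamp(x, delta, size=5):
+     """x (+) delta: paste a size x size trigger patch onto the upper-left corner."""
+     x = x.clone()
+     x[:, :, :size, :size] = delta
+     return x
+
+ def optimize_trigger(model, loader, target_class, steps=100, lr=0.01, size=5):
+     """Minimize E[L(x (+) delta, y_tilde; theta_t)] over the trigger delta (Equation 2)."""
+     delta = torch.zeros(1, 3, size, size, requires_grad=True)
+     for _, (x, _) in zip(range(steps), loader):
+         y_tilde = torch.full((x.size(0),), target_class, dtype=torch.long)
+         loss = F.cross_entropy(model(stamp(x, delta, size)), y_tilde)
+         loss.backward()
+         with torch.no_grad():
+             delta -= lr * delta.grad   # gradient step on the trigger only
+             delta.clamp_(0.0, 1.0)     # keep pixel values in a valid range
+             delta.grad.zero_()
+     return delta.detach()
+ ```
+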
90
+ As illustrated in Figure 1, we observe that such backdoor attack on federated learning (e.g., CerP [9]) is highly effective on the local model, suggested by a high local attack success rate (ASR). However, due to the ignorance of global dynamics, they cannot achieve similar effectiveness when transferred to the global model, resulting in a low ASR on the global model. Our method A3FL aims to bridge the local-global gap in existing approaches to make the backdoor persistent when transferred to the global model thus achieving advanced attack performance. In particular, we introduce adversarial adaptation loss that makes the backdoor persistent to global training dynamics.
91
+
92
+ Adversarial adaptation loss: To address the challenge introduced by the global model dynamics in federated learning, we propose the adversarial adaptation loss. As the attacker cannot directly control how the global model is updated as federated learning proceeds, its backdoor performance can be significantly impacted when transferred to the global model, especially when only a small number of clients are compromised by the attacker or defense strategies are deployed. For instance, local model updates from benign clients can re-calibrate the global model to indirectly mitigate the influence of the backdoored updates from the compromised clients; a defense strategy can also be deployed by the server to mitigate the backdoor. To make the backdoor survive such challenging scenarios, our intuition is that, if an attacker could anticipate the future dynamics of the global model, the backdoor trigger would be better optimized to adapt to global dynamics.
93
+
94
+ However, global model dynamics are hard to predict because 1) at each communication round, all selected clients contribute to the global model but the attacker cannot access the private training datasets from benign clients and thus cannot predict their local model updates, and 2) the attacker does not know how local model updates are aggregated to obtain the global model and is not aware of possible defense strategies adopted by the server. As directly predicting the exact global model dynamics is challenging, we instead require the attacker to foresee and survive the scenario where the global model is trained to directly unlearn the backdoor. In this paper we consider backdoor unlearning proposed in prior backdoor defenses [35, 36].
95
+
96
+ Specifically, starting from current global model $\theta_{t}$ , we foresee an adversarially crafted global model $\theta_{t}^{\prime}$ that can minimize the impact of the backdoor. We adopt an adversarial training-like method to obtain $\theta_{t}^{\prime}$ : the attacker can use the generated backdoor trigger to simulate the unlearning of the backdoor in the global model. The trigger is then optimized to simultaneously backdoor the current global model $\theta_{t}$ and the adversarially adapted global model $\theta_{t}^{\prime}$ . Formally, the adversarially adaptive backdoor attack (A3FL) can be formulated as the following optimization problem:
97
+
98
+ $$
99
+ \begin{aligned} \boldsymbol{\delta}^{*} &= \underset{\boldsymbol{\delta}}{\operatorname{argmin}} \, \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}_{i}} \left[ \mathcal{L}(\mathbf{x} \oplus \boldsymbol{\delta}, \tilde{y}; \boldsymbol{\theta}_{t}) + \lambda \mathcal{L}(\mathbf{x} \oplus \boldsymbol{\delta}, \tilde{y}; \boldsymbol{\theta}_{t}^{\prime}) \right] \\ \text{s.t.} \quad \boldsymbol{\theta}_{t}^{\prime} &= \underset{\boldsymbol{\theta}}{\operatorname{argmin}} \, \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}_{i}} \left[ \mathcal{L}(\mathbf{x} \oplus \boldsymbol{\delta}, y; \boldsymbol{\theta}) \right] \end{aligned} \tag{3}
100
+ $$
101
+
102
+ where $\theta$ is initialized with current global model weights $\theta_t$ ; $\theta_t'$ is the optimized adversarial global model which aims to correctly classify the backdoored inputs as their ground-truth label to unlearn the backdoor. In trigger optimization, $\lambda$ is a hyperparameter balancing the backdoor effect on the current global model $\theta_t$ and the adversarial one $\theta_t'$ , such that the local-global gap is bridged when the locally optimized trigger is transferred to the global model (after server-side aggregation/defenses). Note that attacking the adversarial model is an adaptation or approximation of global dynamics, as in practice the server cannot directly access and unlearn the backdoor trigger to obtain such an adversarial model.
103
+
104
+ Algorithm of A3FL: We depict the workflow of A3FL compromising a client in Algorithm 1. At the $t$ -th communication round, the client is selected by the server and receives the current global model $\theta_t$ . Lines 4-8 optimize the trigger based on the current and the adversarial global model using cross-entropy loss $\mathcal{L}_{\mathrm{ce}}$ . The adversarial global model is initialized by the global model weights in Line 1, and is updated in Line 10. Lines 12-14 train the local model on the poisoned dataset and upload local updates to the server.
105
+
106
+ Algorithm 1: The workflow of A3FL compromising a client
107
+ Input: global model $\theta_{t}$, local dataset $\mathcal{D}_{i}$, target class $\tilde{y}$, iteration counts $K$ and $K_{\mathrm{trig}}$, step sizes $\alpha_{1},\alpha_{2}$, balancing coefficient $\lambda$
108
+ 1: $\pmb{\theta}_t^{\prime} = \pmb{\theta}_t$
109
+ 2: for $j = 1$ to $K$ do
110
+ 3: Sample a batch of training data $\mathcal{B}$ from $\mathcal{D}_i$
111
+ 4: for $k = 1$ to $K_{\mathrm{trig}}$ do
112
+ 5: // Optimize trigger pattern $\delta$ following Equation 3.
113
+ 6: $L = \frac{1}{|\mathcal{B}|}\sum_{\mathbf{x}\in \mathcal{B}}(\mathcal{L}_{\mathrm{ce}}(\mathbf{x}\oplus \delta ,\tilde{y};\pmb {\theta}_t) + \lambda \mathcal{L}_{\mathrm{ce}}(\mathbf{x}\oplus \delta ,\tilde{y};\pmb {\theta}_t'))$
114
+ 7: $\delta \gets \delta -\alpha_1\nabla_\delta L$
115
+ 8: end for
116
+ 9: // Optimize adversarial global model weights $\pmb{\theta}_t^\prime$ following Equation 3.
117
+ 10: $\pmb{\theta}_t^\prime \leftarrow \pmb{\theta}_t^\prime -\alpha_2\nabla_\pmb{\theta}\frac{1}{|\mathcal{B}|}\sum_{(\mathbf{x},y)\in \mathcal{B}}\mathcal{L}_{\mathrm{ce}}(\mathbf{x}\oplus \delta ,y;\pmb{\theta}_t^\prime)$
118
+ 11: end for
119
+ 12: Poison local dataset with $\delta$ and update local model to obtain $\pmb{\theta}_{t + 1}^{i}$
120
+ 13: $\Delta_t^{i} = \pmb{\theta}_{t + 1}^i -\pmb{\theta}_t$
121
+ 14: Upload $\Delta_t^{i}$ to the server
122
+
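+ Below is a compact PyTorch-style sketch of the loop in Algorithm 1, i.e., the trigger update against both the current global model $\theta_t$ and the adversarial model $\theta_t'$ (lines 4-8) interleaved with the unlearning step on $\theta_t'$ (line 10). It is an illustration of Equation 3 under simplifying assumptions (a fixed $\lambda$, a square patch trigger) rather than the released implementation, which is available in the authors' repository.
+
+ ```python
+ import copy
+ import torch
+ import torch.nn.functional as F
+
+ def stamp(x, delta, size=5):
+     """x (+) delta: paste the trigger patch onto the upper-left corner."""
+     x = x.clone()
+     x[:, :, :size, :size] = delta
+     return x
+
+ def a3fl_trigger(global_model, loader, target_class,
+                  K=10, K_trig=5, alpha1=0.01, alpha2=0.01, lam=1.0, size=5):
+     """Jointly optimize the trigger delta and the adversarial global model theta_t'."""
+     adv_model = copy.deepcopy(global_model)                # theta_t' <- theta_t (line 1)
+     adv_opt = torch.optim.SGD(adv_model.parameters(), lr=alpha2)
+     delta = torch.zeros(1, 3, size, size, requires_grad=True)
+
+     for _, (x, y) in zip(range(K), loader):                # lines 2-3: sample a batch
+         y_tilde = torch.full((x.size(0),), target_class, dtype=torch.long)
+         for _ in range(K_trig):                            # lines 4-8: trigger update
+             loss = (F.cross_entropy(global_model(stamp(x, delta, size)), y_tilde)
+                     + lam * F.cross_entropy(adv_model(stamp(x, delta, size)), y_tilde))
+             loss.backward()
+             with torch.no_grad():
+                 delta -= alpha1 * delta.grad
+                 delta.grad.zero_()
+         adv_opt.zero_grad()                                # line 10: unlearn step on theta_t'
+         unlearn = F.cross_entropy(adv_model(stamp(x, delta.detach(), size)), y)
+         unlearn.backward()
+         adv_opt.step()
+     return delta.detach()
+ ```
+
+ The returned trigger would then be used to poison a fraction of the local dataset before the usual local training and upload (lines 12-14 of Algorithm 1).
+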
123
+ # 4 Experiments
124
+
125
+ # 4.1 Experimental Setup
126
+
127
+ Datasets: We evaluate A3FL on three widely-used benchmark datasets: FEMNIST [43], CIFAR-10 [15], and TinyImageNet [16]. The FEMNIST dataset consists of 805,263 images of size $28 \times 28$ distributed across 10 classes. The CIFAR-10 dataset consists of 50,000 training images and 10,000 testing images that are uniformly distributed across 10 classes, with each image having a size of $32 \times 32$ pixels. The TinyImageNet dataset contains 100,000 training images and 20,000 testing images that are uniformly distributed across 200 classes, where each image has a size of $64 \times 64$ pixels.
128
+
129
+ Federated learning setup: By default, we set the number of clients $N = 100$ . At each communication round, the server randomly selects $M = 10$ clients to contribute to the global model. The global model architecture is ResNet-18 [44]. We assume a non-i.i.d data distribution with a concentration parameter $h$ of 0.9 following previous works [12, 10, 9]. We evaluate the impact of data heterogeneity by adjusting the value of $h$ in Appendix B.6. Each selected client trains the local model for 2 epochs using SGD optimizer with a learning rate of 0.01. The FL training process continues for 2,000 communication rounds.
130
+
131
+ Attack setup: We assume that the attacker compromises $P$ clients among all $N$ clients. All compromised clients are only allowed to attack in limited communication rounds called attack window. By default, the attack window starts at the 1,900th communication round and ends at the 2,000th communication round. We also discuss the impact of the attack window in Appendix B.7. When a compromised client is selected by the server during the attack window, it will upload poisoned updates trying to inject the backdoor. We adjust the number of compromised clients $P \in [1,20]$ to comprehensively evaluate the performance of each attack. Different from previous works [11, 12], in our evaluation compromised clients are selected randomly to simulate the practical scenario. Each compromised client poisons $25\%$ of the local training dataset and trains the local model on the partially poisoned dataset with the same parameter settings as benign clients unless otherwise mentioned. By default, the trigger is designed as a square at the upper left corner of the input images. We use the same trigger design for all baseline attacks to ensure the same level of data stealthiness for a fair comparison. We summarize the details of each attack in Appendix A.2. We also discuss different trigger designs of DBA [11] in Appendix B.9.
132
+
133
+ A3FL setup: By default, compromised clients optimize the trigger using Projected Gradient Descent (PGD) [45] with a step size of 0.01. The adversarial global model is optimized using SGD with a learning rate of 0.01. In practice, we set the balancing coefficient $\lambda = \lambda_0\,\mathrm{sim}(\pmb{\theta}_t',\pmb{\theta}_t)$ , where $\mathrm{sim}(\pmb{\theta}_t',\pmb{\theta}_t)$ denotes the cosine similarity between $\pmb{\theta}_t'$ and $\pmb{\theta}_t$ . We use the similarity to automatically adjust the focus on the adversarial global model: if the adversarial global model is similar to the current global model, it will be assigned a higher weight; otherwise, the adversarial global model is assigned
134
+
135
+ Table 1: A3FL maintains the utility of the global model on CIFAR-10.
136
+
137
+ | Defense | FedAvg | NC | RLR | Median | DSight | Bulyan | Krum | SFed | CRFL | DP | FedDF | FedRAD |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | ACC (%) | 92.29 | 92.57 | 92.21 | 65.59 | 91.79 | 39.57 | 84.56 | 92.60 | 87.40 | 87.71 | 37.58 | 65.89 |
+ | BAC (%) | 92.44 | 92.61 | 92.26 | 65.53 | 91.79 | 39.92 | 84.41 | 92.70 | 87.35 | 87.60 | 40.09 | 65.61 |
138
+
139
+ a lower weight. We use the similarity to control the strength of adversarial training, since the backdoor could be fully unlearned if the adversarial global model is aggressively optimized, which makes it difficult to optimize the first term in Equation 3. In adversarial scenarios, it is important to balance the strengths of both sides to achieve better performance, which has been well studied in previous works in adversarial generation [46, 47]. When there are multiple compromised clients in $S_{t}$ , the backdoor trigger is optimized on one randomly selected compromised client, and all compromised clients use this same trigger. We also discuss the parameter setting of A3FL in experiments.
140
+
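+ As a small illustration of the adaptive coefficient described above, the sketch below computes $\lambda = \lambda_0 \,\mathrm{sim}(\theta_t', \theta_t)$ as $\lambda_0$ times the cosine similarity between the flattened weights of the adversarial and the current global model; the helper name is ours.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def adaptive_lambda(adv_model, global_model, lam0=1.0):
+     """lambda = lambda_0 * cosine_similarity(theta_t', theta_t) over flattened weights."""
+     adv_vec = torch.cat([p.detach().flatten() for p in adv_model.parameters()])
+     glob_vec = torch.cat([p.detach().flatten() for p in global_model.parameters()])
+     return lam0 * F.cosine_similarity(adv_vec, glob_vec, dim=0).item()
+ ```
+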
141
+ Compared attack baselines: We compare our A3FL to four representative or state-of-the-art backdoor attacks to FL: Neurotoxin [12], DBA [11], CerP [9], and F3BA [10]. We discuss these baselines in Section 2 and also provide a detailed introduction in Appendix A.2, including the specific hyperparameter settings and trigger design of each baseline.
142
+
143
+ Compared defense baselines: We evaluate A3FL under 13 state-of-the-art or representative federated learning defenses: FedAvg [2], Median [22], Norm Clipping [17], DP [17], Robust Learning Rate [18], Deepsight [20], Bulyan [23], FedDF [24], FedRAD [25], Krum [21], SparseFed [26], FLAME [27], and CRFL [19]. We summarize the details of each defense in Appendix A.3.
144
+
145
+ Evaluation metrics: Following previous works [10, 12, 11, 8], we use accuracy & backdoor accuracy (ACC & BAC), attack success rate (ASR), and lifespan to comprehensively evaluate A3FL.
146
+
147
+ - ACC & BAC: We define ACC as the accuracy of the benign global model on clean testing inputs without any attacks, and BAC as the accuracy of the backdoored global model on clean testing inputs when the attacker compromises a part of the clients to attack the global model. Given the dynamic nature of the global model, we report the mean value of ACC and BAC. BAC close to ACC means that the evaluated attack causes little or no impact on the global model utility. A smaller gap between ACC and BAC indicates that the evaluated attack has higher utility stealthiness.
148
+ - ASR: We embed the backdoor trigger into each input in the testing set. ASR is the fraction of trigger-embedded testing inputs that are successfully misclassified as the target class $\tilde{y}$ by the global model (a minimal sketch of this computation follows this list). In particular, the global model is dynamic in FL, resulting in an unstable ASR. Therefore, we use the average value of ASR over the last 10 communication rounds in the attack window to report attack performance. A high ASR indicates that the attack is effective.
149
+ - Lifespan: The lifespan of a backdoor is defined as the period during which the backdoor remains effective. The lifespan of a backdoor starts at the end of the attack window and ends when the ASR decreases to less than a chosen threshold. Following previous works [12], we set the threshold as $50\%$ . A long lifespan demonstrates that the backdoor is durable, which means the backdoor remains effective in the global model long after the attack ends. When we evaluate the lifespan of attacks, we extend the FL training process to 3,000 communication rounds.
150
+
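+ A minimal sketch of how the ASR defined above could be computed is shown below (our illustration; the `stamp` helper from the earlier sketches is repeated for self-containment): a stamped test input counts as a success when the global model predicts the target class.
+
+ ```python
+ import torch
+
+ def stamp(x, delta, size=5):
+     """x (+) delta: paste the trigger patch onto the upper-left corner."""
+     x = x.clone()
+     x[:, :, :size, :size] = delta
+     return x
+
+ @torch.no_grad()
+ def attack_success_rate(model, test_loader, delta, target_class, size=5):
+     """Fraction of trigger-stamped test inputs classified as the target class."""
+     hits, total = 0, 0
+     for x, _ in test_loader:
+         preds = model(stamp(x, delta, size)).argmax(dim=1)
+         hits += (preds == target_class).sum().item()
+         total += x.size(0)
+     return hits / total
+ ```
+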
151
+ # 4.2 Experimental Results
152
+
153
+ A3FL preserves the utility of the global model: To verify whether A3FL impacts the utility of the global model, we compare ACC to BAC. The experimental results on CIFAR-10 are shown in Table 1, where NC denotes Norm Clipping, DSight represents Deepsight, and SFed represents SparseFed. Observe that the maximum degradation in accuracy of the global model caused by A3FL is only $0.28\%$ . Therefore, we can conclude that A3FL preserves the utility of the global model during the attack, indicating that our approach is stealthy and difficult to detect. Similar results were observed in the experiments on TinyImageNet, which can be found in Appendix B.1.
154
+
155
+ A3FL achieves higher ASRs: The attack performances of A3FL and baselines on defenses designed for FL backdoors are presented in Figure 2. The experimental results demonstrate that A3FL achieves higher attack success rates (ASRs) than other baselines. For example, when the defense is Norm
156
+
157
+ ![](images/caed538b4adef5f8bb4ab79be836485f25b13f9c1c063b385f7d2b49e8351ec8.jpg)
158
+
159
+ ![](images/2ec9feedb58d218737853d533d0364ddf709e4c568968474856abe62d2ba5a3d.jpg)
160
+
161
+ ![](images/c88cd93cce78dc8b59cd97ee3e725dfb44b7294c006ffe7aff1bbfbacf1dc583.jpg)
162
+
163
+ ![](images/ab2ded8be220e781560a2487e844824d99a6da6e36d533e909542a8af48d42fa.jpg)
164
+ (d) CRFL
165
+
166
+ ![](images/7a37a9c784b33d725834413a633597956dc76448654b7ce94d5f94ff109901f1.jpg)
167
+ (e) Deepsight
168
+
169
+ ![](images/e7803e04f914ee30d8fdf5d73511ded27ecec0ac758c912c0473dfd78165dc6b.jpg)
170
+ (f) DP
171
+ Figure 2: Comparing performances of different attacks on CIFAR-10.
172
+
173
+ Clipping and only one client is compromised, A3FL achieves an ASR of $99.75\%$ , while other baselines can only achieve a maximum ASR of $13.9\%$ . Other attack baselines achieve a comparable ASR to A3FL only when the number of compromised clients significantly increases. For instance, when the defense is CRFL, F3BA cannot achieve a comparable ASR to A3FL until 10 clients are compromised. We have similar observations on other defenses and datasets, which can be found in Figures 8 and 9 in Appendix B.2.
174
+
175
+ We note that CRFL assigns a certified radius to each sample and makes sure that samples inside the certified radius receive the same prediction. This is achieved by first clipping the updates $\pmb{\Delta}_t^i$ and then adding Gaussian noise $z\sim \mathcal{N}(0,\sigma^2 I)$ to $\pmb{\Delta}_{t}^{i}$ . During the inference stage, CRFL adopts majority voting to achieve certified robustness. The strength of CRFL is controlled by the value of $\sigma$ . We discuss the performance of CRFL under different values of $\sigma$ in Appendix B.5.
176
+
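+ The clipping-and-noising step mentioned above can be illustrated with a short sketch (ours, not the CRFL code): each client update is scaled down to a norm bound and then perturbed with Gaussian noise of standard deviation $\sigma$.
+
+ ```python
+ import torch
+
+ def clip_and_perturb(update, clip_norm=1.0, sigma=0.01):
+     """Clip the update Delta_t^i to an L2 norm bound, then add N(0, sigma^2 I) noise."""
+     flat = torch.cat([v.flatten() for v in update.values()])
+     scale = min(1.0, clip_norm / (flat.norm().item() + 1e-12))
+     return {name: v * scale + sigma * torch.randn_like(v) for name, v in update.items()}
+ ```
+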
177
+ A3FL has a longer lifespan: We evaluate the durability of attacks by comparing their lifespans. Recall that the attack starts at the 1,900th communication round and ends at the 2,000th communication round. Figure 3 shows the attack success rate against communication rounds when the defense is Norm Clipping and 5 clients are compromised. As we can observe, A3FL has a significantly longer lifespan than other baseline attacks. A3FL still has an ASR of more than $80\%$ at the end, indicating a lifespan of over 1,000 rounds. In contrast, the ASR of all other baseline attacks drops below $50\%$ quickly. We show more results on other defenses in Appendix B.3 and a similar phenomenon is observed. These experimental results suggest that A3FL is more durable than other baseline attacks, and challenge the consensus that backdoors in FL quickly vanish after the attack ends.
178
+
179
+ ![](images/ba05b9ed17997159f80f1679d6c32220f70bcaf7501e6efef755f38a8a668bf1.jpg)
180
+ Figure 3: A3FL has a longer lifespan. The vertical dotted lines denote the end of the lifespans of each attack when the ASR of the backdoor drops below $50\%$ . The dotted line at the 100th communication round denotes the end of all attacks.
181
+
182
+ ![](images/e45a224affd8860326aaa75776dc45adbda386ddc3d61c2fafcc38a7fe7857a2.jpg)
183
+ (a) A3FL
184
+
185
+ ![](images/5b9dab1d685bf9173e3a6575fec4361f28d9997d31532f981ed7aa0ae9881d64.jpg)
186
+ (b) F3BA
187
+
188
+ ![](images/5fa75176e2f31ff8bac999f2394d2ca7da1f815f139e70b7e1d271c826df2bf1.jpg)
189
+ (c) CerP
190
+
191
+ ![](images/11e160bd1951ff6d7cfa00a7103bd0007f8c19291855247a8915cca6b535234f.jpg)
192
+ (d) DBA
193
+
194
+ ![](images/8ee64ecdbf7c2e5da70a3fb2cdd86809cd8264ee2debd8d2d132cb19f9674e6f.jpg)
195
+ (e) Neurotoxin
196
+
197
+ ![](images/0f3306ddf957eaada700404080484eca5b03235625327f7486b7c4d8870bbed0.jpg)
198
+ (a) Norm Clipping
199
+
200
+ ![](images/30602359d40e96b5776d2fdc6283a921487410f49032a228236bb27da978ce7c.jpg)
201
+ Figure 4: Compare local ASR to global ASR.
202
+ (b) CRFL
203
+
204
+ ![](images/077cc948ad323f43ee718218730344144cd33764169ec5023728d780084c892f.jpg)
205
+ (a) Norm Clipping
206
+ Figure 5: The impact of trigger size on the attack performance.
207
+ Figure 6: The impact of $\lambda$ on the attack performance.
208
+
209
+ ![](images/4f5a8c9b747edbc04bc7595f2d4d293f5d1ad19c28fec15b3fef6d99ea94d75d.jpg)
210
+ (b) CRFL
211
+
212
+ # 4.3 Analysis and Ablation Study
213
+
214
+ A3FL achieves higher ASR when transferred to the global model: As discussed in Section 3, A3FL achieves higher attack performance by optimizing the trigger and making the backdoor persistent within the dynamic global model. To verify our intuition, we conducted empirical experiments in which we recorded the Attack Success Rate (ASR) on the local model (local ASR) and the ASR on the global model after aggregation (global ASR). For the experiments, we used FedAvg as the default defense and included five compromised clients among all clients.
215
+
216
+ The results presented in Figure 4 demonstrate that A3FL can maintain a higher ASR when transferred to the global model. While all attacks can achieve high ASR ( $\approx 100\%$ ) locally, only A3FL can also achieve high ASR on the global model after the server aggregates clients' updates, which is supported by the tiny gap between the solid line (global ASR) and the dotted line (local ASR). In contrast, other attacks cannot achieve similarly high ASR on the global model as on local models. For instance, F3BA immediately achieves a local ASR of $100\%$ once the attack starts. But it can only achieve less than $20\%$ ASR on the global model in the first few communication rounds. F3BA also takes a longer time to achieve $100\%$ ASR on the global model compared to A3FL. This observation holds for other baseline attacks. We further provide a case study in Appendix B.8 to understand why A3FL outperforms baseline attacks. In the case study, we observe that 1) A3FL has better attack performance than other baseline attacks with comparable attack budget; 2) clients compromised by A3FL are similarly stealthy to other trigger-optimization attacks. Overall, our experimental results indicate that A3FL is a more effective and persistent attack compared to baseline attacks, which makes it particularly challenging to defend against.
217
+
218
+ The impact of trigger size: We evaluate the performance of A3FL with trigger sizes of $3 \times 3$ , $5 \times 5$ , $8 \times 8$ , and $10 \times 10$ (the default is $5 \times 5$ ). Figure 5 shows the impact of trigger size on A3FL. In general, the attack success rate (ASR) improves as the trigger size grows. When the defense mechanism is Norm Clipping, we observe that the difference between the best and worst ASR is only $1.75\%$ . We also observe a larger difference with stronger defenses like CRFL. Additionally, we find that when there are at least 5 compromised clients among all clients, the impact of trigger size on the attack success rate becomes unnoticeable. Therefore, we can conclude that smaller trigger sizes may limit the performance of A3FL only when the defense is strong enough and the number of compromised clients is small. Otherwise, varying trigger sizes will not significantly affect the performance of A3FL.
219
+
220
+ The impact of $\lambda$ : Recall that $\lambda = \lambda_0 \,\mathrm{sim}(\theta_t', \theta_t)$ . We varied the $\lambda_0$ hyperparameter over a wide range of values to study the impact of the balancing coefficient on attack performance and report the results in Figure 6. Observe that different values of $\lambda_0$ only slightly impact attack performance when fewer clients are compromised. When there are more than 5 compromised clients, the impact of $\lambda_0$ is unnoticeable. For
221
+
222
+ instance, when the defense is Norm Clipping, the gap between the highest ASR and the lowest ASR is merely $0.5\%$ . We can thus conclude that A3FL is insensitive to variations in hyperparameter $\lambda_0$ . We further provide an ablation study in Appendix B.4 for more analysis when the adversarial adaptation loss is disabled, i.e., $\lambda_0 = 0$ .
223
+
224
+ # 5 Conclusion and Future Work
225
+
226
+ In this paper, we propose A3FL, an effective and durable backdoor attack to Federated Learning. A3FL adopts an adversarial adaptation loss to make the injected backdoor persistent in global training dynamics. Our comprehensive experiments demonstrate that A3FL significantly outperforms existing backdoor attacks under different settings. Interesting future directions include: 1) how to build backdoor attacks towards other types of FL, such as vertical FL; 2) how to build better defenses to protect FL from A3FL.
227
+
228
+ # References
229
+
230
+ [1] Jakub Konečný, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016.
231
+ [2] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273-1282. PMLR, 2017.
232
+ [3] Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol., 10(2):12:1-12:19, 2019.
233
+ [4] Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, and Daniel Ramage. Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604, 2018.
234
+ [5] David Leroy, Alice Coucke, Thibaut Lavril, Thibault Gisselbrecht, and Joseph Dureau. Federated learning for keyword spotting. In ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 6341-6345. IEEE, 2019.
235
+ [6] Viraaji Mothukuri, Reza M Parizi, Seyedamin Pouriyeh, Yan Huang, Ali Dehghantanha, and Gautam Srivastava. A survey on security and privacy of federated learning. Future Generation Computer Systems, 115:619-640, 2021.
236
+ [7] Kimberly Powell. NVIDIA Clara federated learning to deliver AI to hospitals while protecting patient data. Accessed: Dec 1, 2019.
237
+ [8] Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. How to backdoor federated learning. In International Conference on Artificial Intelligence and Statistics, pages 2938-2948. PMLR, 2020.
238
+ [9] Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Bin Wang, Jiqiang Liu, and Xiangliang Zhang. Poisoning with cerberus: Stealthy and colluded backdoor attack against federated learning. 2023.
239
+ [10] Pei Fang and Jinghui Chen. On the vulnerability of backdoor defenses for federated learning. arXiv preprint arXiv:2301.08170, 2023.
240
+ [11] Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. Dba: Distributed backdoor attacks against federated learning. In International conference on learning representations, 2020.
241
+ [12] Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael Mahoney, Prateek Mittal, Kannan Ramchandran, and Joseph Gonzalez. Neurotoxin: Durable backdoors in federated learning. In International Conference on Machine Learning, pages 26429-26446. PMLR, 2022.
242
+
243
+ [13] Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, and Dimitris Papailiopoulos. Attack of the tails: Yes, you really can backdoor federated learning. Advances in Neural Information Processing Systems, 33:16070-16084, 2020.
244
+ [14] Gilad Baruch, Moran Baruch, and Yoav Goldberg. A little is enough: Circumventing defenses for distributed learning. Advances in Neural Information Processing Systems, 32, 2019.
245
+ [15] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
246
+ [16] Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. CS 231N, 7(7):3, 2015.
247
+ [17] Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh, and H Brendan McMahan. Can you really backdoor federated learning? arXiv preprint arXiv:1911.07963, 2019.
248
+ [18] Mustafa Safa Ozdayi, Murat Kantarcioglu, and Yulia R Gel. Defending against backdoors in federated learning with robust learning rate. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 9268-9276, 2021.
249
+ [19] Chulin Xie, Minghao Chen, Pin-Yu Chen, and Bo Li. Crfl: Certifiably robust federated learning against backdoor attacks. In International Conference on Machine Learning, pages 11372-11382. PMLR, 2021.
250
+ [20] Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, and Ahmad-Reza Sadeghi. Deepsight: Mitigating backdoor attacks in federated learning through deep model inspection. arXiv preprint arXiv:2201.00763, 2022.
251
+ [21] Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. Machine learning with adversaries: Byzantine tolerant gradient descent. Advances in neural information processing systems, 30, 2017.
252
+ [22] Dong Yin, Yudong Chen, Kannan Ramchandran, and Peter Bartlett. Byzantine-robust distributed learning: Towards optimal statistical rates. In International Conference on Machine Learning, pages 5650-5659. PMLR, 2018.
253
+ [23] El Mahdi El Mhamdi, Rachid Guerraoui, and Sébastien Rouault. The hidden vulnerability of distributed learning in byzantium. arXiv preprint arXiv:1802.07927, 2018.
254
+ [24] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. Advances in Neural Information Processing Systems, 33:2351-2363, 2020.
255
+ [25] Stefan Päll Sturluson, Samuel Trew, Luis Muñoz-González, Matei Grama, Jonathan Passerat-Palmbach, Daniel Rueckert, and Amir Alansary. Fedrad: Federated robust adaptive distillation. arXiv preprint arXiv:2112.01405, 2021.
256
+ [26] Ashwinee Panda, Saeed Mahloujifar, Arjun Nitin Bhagoji, Supriyo Chakraborty, and Prateek Mittal. Sparsefed: Mitigating model poisoning attacks in federated learning with sparsification. In International Conference on Artificial Intelligence and Statistics, pages 7587-7624. PMLR, 2022.
257
+ [27] Thien Duc Nguyen, Phillip Rieger, Roberta De Viti, Huili Chen, Björn B Brandenburg, Hossein Yalame, Helen Möllering, Hossein Fereidooni, Samuel Marchal, Markus Miettinen, et al. FLAME: Taming backdoors in federated learning. In 31st USENIX Security Symposium (USENIX Security 22), pages 1415-1432, 2022.
258
+ [28] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pages 5132-5143. PMLR, 2020.
259
+ [29] Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. IEEE signal processing magazine, 37(3):50-60, 2020.
260
+
261
+ [30] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. Advances in Neural Information Processing Systems, 33:2351-2363, 2020.
262
+ [31] Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. Advances in neural information processing systems, 33:7611-7623, 2020.
263
+ [32] Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. arXiv preprint arXiv:2003.00295, 2020.
264
+ [33] Clement Fung, Chris JM Yoon, and Ivan Beschastnikh. Mitigating sybils in federated learning poisoning. arXiv preprint arXiv:1808.04866, 2018.
265
+ [34] Krishna Pillutla, Sham M Kakade, and Zaid Harchaoui. Robust aggregation for federated learning. IEEE Transactions on Signal Processing, 70:1142-1154, 2022.
266
+ [35] Yi Zeng, Si Chen, Won Park, Z Morley Mao, Ming Jin, and Ruoxi Jia. Adversarial unlearning of backdoors via implicit hypergradient. arXiv preprint arXiv:2110.03735, 2021.
267
+ [36] Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In 2019 IEEE Symposium on Security and Privacy (SP), pages 707-723. IEEE, 2019.
268
+ [37] Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning, 2020.
269
+ [38] Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Chris Waites. Adaptive machine unlearning, 2021.
270
+ [39] Yinzhi Cao and Junfeng Yang. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, SP 2015, San Jose, CA, USA, May 17-21, 2015, pages 463-480. IEEE Computer Society, 2015.
271
+ [40] Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong. Fltrust: Byzantine-robust federated learning via trust bootstrapping. arXiv preprint arXiv:2012.13995, 2020.
272
+ [41] Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. Provably secure federated learning against malicious clients. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 6885-6893, 2021.
273
+ [42] Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, and Daniel Ramage. Back to the drawing board: A critical evaluation of poisoning attacks on production federated learning. In 2022 IEEE Symposium on Security and Privacy (SP), pages 1354–1371. IEEE, 2022.
274
+ [43] Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith, and Ameet Talwalkar. Leaf: A benchmark for federated settings, 2019.
275
+ [44] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
276
+ [45] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
277
+ [46] Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of gans. arXiv preprint arXiv:1705.07215, 2017.
278
+ [47] Siyuan Xu and Minghui Zhu. Efficient gradient approximation method for constrained bilevel optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10):12509-12517, 2023.
279
+
280
+ [48] Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In kdd, volume 96, pages 226-231, 1996.
281
+ [49] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
282
+
283
+ ![](images/4a1fc610379516d807a250e2f78313d90092a0bbcc3563ff8e1e00d052ca6c61.jpg)
284
+ (a) A3FL
285
+
286
+ ![](images/316ad17a057d1ff45e1b048814548036f39fad2d407f51975ad151b5b15b2a4a.jpg)
287
+ (b) F3BA
288
+
289
+ ![](images/6f1cf8975ec048403b443f52358ec3d949a3713eaa743c8b4b3d84a91b36bcf5.jpg)
290
+ (c) CerP
291
+ Figure 7: Trigger patterns of evaluated attacks on FedAvg, with $\mathrm{P} = 2$ compromised clients.
292
+
293
+ ![](images/afb75dbf9a9deee2db7467e33f2d4db6614c217252afdec6e6b84f276d4be095.jpg)
294
+ (d) Neurotoxin
295
+
296
+ ![](images/8f8c8bf073ea442743283fc217d6fdd505ff5b50c05ccfa83aa919803060e75a.jpg)
297
+ (e) DBA
298
+
299
+ # A Additional Experiment Details
300
+
301
+ # A.1 Experimental Setup in Figure 1
302
+
303
+ The preliminary experiment in Figure 1 has the same experimental setup as described in Section 4.1. In particular, we use FedAvg [2] as the server-side aggregation rule. We set the number of compromised clients $P = 1$ in the preliminary experiment. We denote the attack success rate on the global model as the global ASR, and the ASR on the local model after local training as the local ASR. When the compromised client is selected by the server, we calculate and update the local ASR after the compromised client optimizes the backdoor trigger and trains its local model on the poisoned local training dataset.
304
+
305
+ # A.2 Details of Attacks
306
+
307
+ A3FL: A3FL formulates the trigger optimization as a bi-level optimization problem. A3FL jointly optimizes the adversarial model $f_{\boldsymbol{\theta}_t'}$ with the trigger pattern $\Delta$ . A3FL optimizes the adversarial model using SGD with a learning rate of 0.01, a momentum of 0.9, and a weight decay of 0.0005. A3FL updates the trigger pattern using PGD with a step size of 0.01 until convergence. We show the trigger pattern of A3FL in Figure 7a.
308
+
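+ As an illustration of the trigger-optimization step, the snippet below gives a minimal, hypothetical sketch (not the authors' released implementation) of updating a patch trigger with signed-gradient (PGD-style) steps against a local model; the joint optimization of the adversarial model is omitted, and `model`, `loader`, `target_class`, and the top-left patch location are assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def optimize_trigger(model, loader, trigger, target_class, steps=100, alpha=0.01):
+     """PGD-style trigger optimization sketch: push poisoned inputs toward the
+     backdoor target class while keeping pixel values in a valid range."""
+     trigger = trigger.clone().requires_grad_(True)  # e.g., a small square patch
+     for _ in range(steps):
+         x, _ = next(iter(loader))
+         x_poisoned = x.clone()
+         # Paste the patch at an assumed top-left location.
+         x_poisoned[:, :, :trigger.shape[-2], :trigger.shape[-1]] = trigger
+         y_target = torch.full((x.size(0),), target_class, dtype=torch.long)
+         loss = F.cross_entropy(model(x_poisoned), y_target)
+         grad = torch.autograd.grad(loss, trigger)[0]
+         with torch.no_grad():
+             trigger -= alpha * grad.sign()  # signed-gradient step of size 0.01
+             trigger.clamp_(0.0, 1.0)        # keep pixel values valid
+     return trigger.detach()
+ ```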
309
+ F3BA [10]: F3BA directly manipulates a part of local model weights to inject the backdoor via sign flipping. F3BA further jointly optimizes the trigger pattern and the local model weights to maximize the difference between latent representations of clean and backdoored samples, thus achieving higher attack performance. The trigger of F3BA is a squared patch. We show the trigger pattern of F3BA in Figure 7b.
310
+
311
+ CerP [9]: CerP jointly optimizes the trigger pattern and the local model weights to improve the backdoor effectiveness. Furthermore, CerP aims to improve the backdoor stealthiness by adopting L2-norm regularization to limit the difference between local model weights and global model weights. Therefore CerP can tune the local model to fit the backdoor-poisoned data without inducing large biases in the local model weights. The trigger of CerP is shown in Figure 7c.
312
+
313
+ Neurotoxin [12]: Neurotoxin only updates unimportant model weights to avoid conflicts with other clean clients. The importance of model weights is determined by the magnitude of their gradients: weights with larger gradients in previous rounds are considered more important (i.e., frequently updated by other clients). Following the settings in [12], we only update the $95\%$ least important model weights. Neurotoxin uses a fixed trigger pattern, as shown in Figure 7d.
314
+
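+ A minimal sketch of this masking idea is shown below; it assumes the attacker keeps a running gradient-magnitude estimate (`historical_grad`) and zeroes its malicious update on the most important coordinates. The helper name and tensor layout are hypothetical.
+
+ ```python
+ import torch
+
+ def project_to_unimportant(update, historical_grad, keep_ratio=0.95):
+     """Neurotoxin-style masking sketch: zero the malicious update on the
+     coordinates that benign clients update most (largest historical gradient
+     magnitude) and keep only the remaining `keep_ratio` fraction."""
+     flat = update.flatten().clone()
+     importance = historical_grad.abs().flatten()
+     k = int((1.0 - keep_ratio) * importance.numel())  # "important" coordinates to drop
+     if k > 0:
+         top_idx = importance.topk(k).indices          # most strongly updated weights
+         flat[top_idx] = 0.0                           # avoid conflicting with benign clients there
+     return flat.reshape(update.shape)
+ ```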
315
+ DBA [11]: DBA is a distributed backdoor attack designed to utilize the distributed nature of FL. DBA splits the trigger into different clients. Each client uses a different trigger to attack the FL system during the training stage. In the inference stage, the attacker uses the joint trigger to activate the injected backdoor. The trigger in [11] was designed as several parallel white lines placed at the upper left corner of the input images. This trigger design is not compatible with our attack setting and we can hardly control the attack budget introduced by the trigger following [11]. Therefore in our implementation, we also use a squared patch as the trigger for DBA, as shown in Figure 7e. We randomly split the squared patch into four sub-triggers and these sub-triggers are iteratively used during the attack.
316
+
317
+ # A.3 FL defenses
318
+
319
+ Norm Clipping (NC) [17]: NC clips clients' updates that are larger than a pre-defined threshold. NC can effectively limit clients' behavior to prevent the global model from being overwhelmed by a few clients. By default, we set the threshold to 1.
320
+
321
+ (weak) Differential Privacy (DP) [17]: DP adds Gaussian noise $z \sim \mathcal{N}(0, \sigma^2 I)$ to clients' updates to perturb carefully crafted malicious updates. Note that this defense is not designed for privacy, so the Gaussian noise is relatively smaller than that adopted in differential privacy. By default, we set $\sigma = 0.002$ .
322
+
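+ The two defenses above can be pictured with a short sketch; the snippet below clips a client update to an L2 norm of 1 and then adds the weak Gaussian noise, assuming the update is given as a single flattened tensor.
+
+ ```python
+ import torch
+
+ def clip_and_noise(update, clip_threshold=1.0, sigma=0.002):
+     """Sketch of Norm Clipping followed by weak-DP noising of a client update."""
+     norm = update.norm(p=2)
+     if norm > clip_threshold:
+         update = update * (clip_threshold / norm)      # rescale to the norm bound (NC)
+     return update + sigma * torch.randn_like(update)   # add small Gaussian noise (weak DP)
+ ```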
323
+ Robust Learning Rate (RLR) [18]: RLR aims to maximize the agreement on the updating direction across clients to mitigate potential attacks. It is inspired by the observation that the behavior of a compromised client commonly differs from that of benign clients. For instance, a compromised client may want to enlarge some model parameters while most benign clients are trying to reduce them. When clients disagree on the updating direction of a parameter, RLR flips the learning rate on that parameter to maximize the loss instead.
324
+
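+ A minimal sketch of the sign-agreement rule is given below; the agreement threshold of 4 and the flattened-update representation are assumptions, not values taken from [18].
+
+ ```python
+ import torch
+
+ def robust_lr_aggregate(updates, lr=1.0, threshold=4.0):
+     """Robust Learning Rate sketch: where clients disagree on the update
+     direction, flip the sign of the server learning rate for that coordinate.
+     `updates` is a list of flattened client updates."""
+     stacked = torch.stack(updates)                      # [num_clients, num_params]
+     agreement = stacked.sign().sum(dim=0).abs()         # strength of per-coordinate agreement
+     per_coord_lr = torch.where(agreement >= threshold,
+                                torch.full_like(agreement, lr),
+                                torch.full_like(agreement, -lr))
+     return per_coord_lr * stacked.mean(dim=0)           # averaged update with signed learning rate
+ ```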
325
+ CRFL [19]: CRFL adopts three techniques to mitigate backdoor attacks on FL. CRFL first clips clients' updates as Norm Clipping does. In our experiments, we set the clipping threshold to 1. CRFL then adds Gaussian noise $z \sim \mathcal{N}(0, \sigma^2 I)$ to clients' updates as DP does. In our experiments, we set $\sigma = 0.002$ and we discuss the impact of $\sigma$ on CRFL in Appendix B.5. Finally, CRFL creates several perturbed models by adding independently sampled Gaussian noise to the global model and adopts majority voting for prediction. In our experiments, CRFL creates 5 different perturbed models for prediction at each FL communication round.
326
+
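+ The voting step can be sketched as follows; the snippet builds noise-perturbed copies of the global model and majority-votes their predictions, leaving out the clipping and update-noising stages described above. Function and argument names are hypothetical.
+
+ ```python
+ import copy
+ import torch
+
+ def crfl_predict(global_model, x, sigma=0.002, num_models=5):
+     """CRFL inference sketch: majority vote over several copies of the global
+     model, each perturbed with independently sampled Gaussian parameter noise."""
+     votes = []
+     for _ in range(num_models):
+         noisy = copy.deepcopy(global_model)
+         with torch.no_grad():
+             for p in noisy.parameters():
+                 p.add_(sigma * torch.randn_like(p))   # perturb every parameter
+         votes.append(noisy(x).argmax(dim=1))
+     return torch.stack(votes).mode(dim=0).values      # majority-voted class per sample
+ ```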
327
+ Median [22]: Median uses the coordinate-wise median value of updates from all clients to update the global model. Median can effectively exclude clients that upload overwhelming updates. However, Median tends to heavily degrade the model utility.
328
+
329
+ Deepsight [20]: Deepsight adopts three different distance matrices to measure pairwise distances between clients. Deepsight then clusters clients according to each distance matrix and only accepts clients that fall into the same cluster across the different matrices. The first distance is smaller when the last-layer updates from two clients are similar. The second distance is the L2 distance between the last layer's weights of two clients. The third distance is the L2 distance between the outputs of two local models given a batch of randomly generated input images. Deepsight adopts DBSCAN [48] to cluster selected clients. Finally, clusters containing potentially malicious clients, i.e., those with a larger distance from other clusters, are excluded. In our experiments, we set the batch size of randomly generated inputs to 256.
330
+
331
+ Bulyan [23]: Bulyan first excludes potentially malicious clients from all selected clients and then uses the coordinate-wise median value of updates from remaining clients to update the global model. In the first step, $2f$ clients with the highest pairwise Euclidean distances are excluded. In the second step, Bulyan picks $M - 4f$ clients from the remaining $M - 2f$ clients that are closest to the median by coordinate. In our experiments, we set $f = 2$ .
332
+
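+ A simplified sketch of this two-step selection is shown below; it selects whole client updates closest to the coordinate-wise median rather than performing the selection per coordinate, matching the per-client description above but only approximating the original algorithm.
+
+ ```python
+ import torch
+
+ def bulyan_aggregate(updates, f=2):
+     """Simplified Bulyan sketch: drop the 2f updates with the largest total
+     pairwise distance, then average the M - 4f surviving updates closest to
+     the coordinate-wise median of the survivors."""
+     stacked = torch.stack(updates)                         # [M, num_params]
+     m = stacked.size(0)
+     total_dist = torch.cdist(stacked, stacked).sum(dim=1)  # total pairwise L2 distance per client
+     survivors = stacked[total_dist.argsort()[: m - 2 * f]]
+     median = survivors.median(dim=0).values
+     closeness = (survivors - median).norm(dim=1)           # distance to the coordinate-wise median
+     chosen = survivors[closeness.argsort()[: m - 4 * f]]
+     return chosen.mean(dim=0)
+ ```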
333
+ FedDF [24]: FedDF uses the mean output of all client models as the supervisory signal to distill the next-round global model. In particular, FedDF first aggregates all selected clients (the same as FedAvg) to obtain a teacher model. Then the server trains the global model to minimize the Kullback–Leibler divergence between the logits of the global and teacher models on a set of unlabeled inputs. In our experiments, the learning rate for updating the global model is 0.002 and we train the global model for one epoch at each FL communication round.
334
+
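+ The distillation step can be sketched as below, assuming `teacher_logits_fn(x)` returns the teacher's logits on a batch of unlabeled inputs; the choice of plain SGD and the loop structure are assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def feddf_distill(global_model, teacher_logits_fn, unlabeled_loader, lr=0.002, epochs=1):
+     """FedDF-style distillation sketch: fit the global model to the teacher's
+     soft predictions on unlabeled data by minimizing the KL divergence."""
+     opt = torch.optim.SGD(global_model.parameters(), lr=lr)
+     for _ in range(epochs):
+         for x in unlabeled_loader:
+             student_logp = F.log_softmax(global_model(x), dim=1)
+             with torch.no_grad():
+                 teacher_p = F.softmax(teacher_logits_fn(x), dim=1)
+             loss = F.kl_div(student_logp, teacher_p, reduction="batchmean")
+             opt.zero_grad()
+             loss.backward()
+             opt.step()
+     return global_model
+ ```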
335
+ FedRAD [25]: FedRAD is an extension of FedDF, which assigns a weight to each client model based on their median scores. These scores indicate the frequency with which the prediction of the client model becomes the median value of predictions from all client models. FedRAD then utilizes weighted model aggregation to produce the next round global model. In our experiments, we also update the global model with a learning rate of 0.002 for one epoch at each FL communication round.
336
+
337
+ Krum [21]: Krum selects clients that have the smallest L2 distances to other clients. Only the clients selected by Krum will be used to update the global model. Since Krum drops most updates from clients, it can achieve strong robustness. However, Krum also affects the accuracy of the model.
338
+
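+ A minimal sketch of the selection rule is given below; it scores each client by the sum of squared distances to its closest neighbours and returns the single lowest-scoring update, with `f` assumed to be the number of tolerated malicious clients.
+
+ ```python
+ import torch
+
+ def krum_select(updates, f=2):
+     """Krum sketch: score each client by the sum of squared distances to its
+     M - f - 2 closest neighbours and return the lowest-scoring update."""
+     stacked = torch.stack(updates)                 # [M, num_params]
+     m = stacked.size(0)
+     sq_dist = torch.cdist(stacked, stacked) ** 2   # squared pairwise L2 distances
+     scores = []
+     for i in range(m):
+         nearest = sq_dist[i].sort().values[1 : m - f - 1]  # skip self, keep M - f - 2 closest
+         scores.append(nearest.sum())
+     return stacked[torch.stack(scores).argmin()]   # the single client update Krum accepts
+ ```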
339
+ Table 2: A3FL maintains the utility of global models on TinyImageNet.
340
+
341
+ <table><tr><td>Defense</td><td>FedAvg</td><td>NC</td><td>RLR</td><td>Median</td><td>DSight</td><td>Bulyan</td><td>Krum</td><td>SFed</td><td>CRFL</td><td>DP</td><td>FedDF</td><td>FedRAD</td></tr><tr><td>ACC(%)</td><td>55.45</td><td>55.31</td><td>55.34</td><td>17.12</td><td>53.71</td><td>11.19</td><td>42.87</td><td>57.39</td><td>53.58</td><td>53.38</td><td>25.31</td><td>23.12</td></tr><tr><td>BAC(%)</td><td>55.25</td><td>54.98</td><td>55.28</td><td>20.92</td><td>53.44</td><td>7.33</td><td>42.35</td><td>57.08</td><td>53.45</td><td>53.17</td><td>24.90</td><td>22.57</td></tr></table>
342
+
343
+ ![](images/58e164402a8acaf3b8d099d12d2c409fdfdbb9d81a49abb7fce800b8a8a47893.jpg)
344
+
345
+ ![](images/d67c2c3cd8976b8ce6b0eeab391e8a23b60d7923a85ffa00a7e240c1b3a73753.jpg)
346
+
347
+ ![](images/537aed287410a4fa1bc63691f0c8d7885e6a150a46fc65294a0a48eccf1e50b9.jpg)
348
+
349
+ ![](images/a2e0900ac8b1038e8810f53ff30a5f38e156db96202c937624e926f2d6615b65.jpg)
350
+ (d) FedDF
351
+
352
+ ![](images/be59bd1a5732b13d9c05ba08e524e16e5dcf09717db26bab9d0e82bc5d4647de.jpg)
353
+ (e) Bulyan
354
+
355
+ ![](images/f8b4628c268acdd17ad5796f7dbee10567d59dd188854ae7672eb6a19b100b30.jpg)
356
+ (f) SparseFed
357
+ Figure 8: Comparing performances of different attacks on CIFAR-10.
358
+
359
+ SparseFed [26]: SparseFed is proposed to mitigate model poisoning attacks in FL. SparseFed aggregates client updates normally but only updates the top-k highest-magnitude elements. It is inspired by the observation that attackers commonly move in directions distinct from the majority of clean clients, so the top-k highest-magnitude elements contain fewer poisoned contributions from attackers. In our experiments, we update the top-95% highest-magnitude elements.
360
+
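+ The top-k masking step can be sketched as follows, assuming the server has already averaged the client updates into a single flattened tensor; keeping the top 95% of coordinates mirrors the setting above.
+
+ ```python
+ import torch
+
+ def sparsefed_mask(aggregated_update, k_ratio=0.95):
+     """SparseFed sketch: keep only the top-k highest-magnitude coordinates of
+     the aggregated update and zero the rest before applying it to the global model."""
+     flat = aggregated_update.flatten().clone()
+     k = max(1, int(k_ratio * flat.numel()))
+     threshold = flat.abs().topk(k).values[-1]      # magnitude of the k-th largest coordinate
+     flat[flat.abs() < threshold] = 0.0
+     return flat.reshape(aggregated_update.shape)
+ ```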
361
+ FLAME [27]: FLAME adopts dynamic clustering, adaptive clipping, and adaptive noising to exclude potentially malicious clients. Following the settings for image classification [27], we set $\epsilon = 3705$ and $\delta = 0.001$ controlling the strength of adaptive noising.
362
+
363
+ # B Additional Experimental Results
364
+
365
+ # B.1 A3FL maintains the model utility
366
+
367
+ We show the accuracy of the global model on TinyImageNet with (BAC) and without (ACC) the attacker in Table 2. In particular, we record the accuracy on clean tasks when no attackers are involved to obtain the clean accuracy (ACC). We further record the accuracy on clean tasks when there are 20 compromised clients among all clients to obtain the backdoor accuracy (BAC). We set the number of compromised clients $P$ to 20 since more compromised clients are likely to result in a larger decrease in clean accuracy. Therefore, if A3FL maintains the model utility even with 20 compromised clients, we can conclude that A3FL is highly stealthy. Note that we use the mean values of ACC and BAC over the attack window (between the 1,900th communication round and the 2,000th communication round) to verify the utility of global models, since the server continuously updates the global model. Using the mean accuracy as the measurement standard therefore accurately reflects the impact of attacks on the model utility and reduces randomness.
368
+
369
+ As shown in Table 2, the accuracy of the global model does not degrade much when attackers are present. This indicates that A3FL preserves the accuracy of global models, so it is stealthy enough not to be discovered. The differences between ACCs and BACs are within $0.5\%$ in most cases. The highest drop in clean accuracy is observed when the defense mechanism is Bulyan. However, Bulyan itself significantly degrades the model's accuracy to only $11.19\%$ . The low accuracy indicates that the
370
+
371
+ ![](images/09c27cc401cc15d60172ed1217ec6314f04fbdf766328bfce87579fd6c861774.jpg)
372
+
373
+ ![](images/dd0e6adddac4aaa5334ad9dd12d731d8809e39220f9472332cf0d74883958f45.jpg)
374
+
375
+ ![](images/17b39b3e98b1fa1559f19cb7f7666cc1c37c44c9188407858dcef449ae8ef5a2.jpg)
376
+
377
+ ![](images/334d15cf1de0781d3159bf5a870a8bb79a2d048d95106d7c8da744ec72d34bfe.jpg)
378
+
379
+ ![](images/053dfa6c24a92a1bccdeba33f4f8da13064ba12ce5c64ad50514dc1ecae28a34.jpg)
380
+
381
+ ![](images/b7dded96aed7a98468d45e51edbdad2552a01df7f3efae72565a0645bb186b3e.jpg)
382
+
383
+ ![](images/6dc3f4a21cc888d146b6fb6ba42251a8e7574144df453fcaefd34c7e87988060.jpg)
384
+
385
+ ![](images/bb68e9d3b8bbc5990b84356a8bce1bf600c146242558df8a0213f1b0b158a511.jpg)
386
+
387
+ ![](images/c5ca67da6b1dd820c410278d3d1cbf24a4f116c675cca874b2396c67fb602f44.jpg)
388
+
389
+ ![](images/9cfffdfc82807513b4852e2007bda467b55679670a789940993ea550179840ec.jpg)
390
+ (j) FedRAD
391
+
392
+ ![](images/fd183537a40c1bbf4bf47829b32aa85e8efc6733269949420fb1927de3fcc145.jpg)
393
+ (k) Bulyan
394
+
395
+ ![](images/5d31b82ea27e750628062f3ec2b9d2da2a2450e58469e1336e65d47dd2a9e915.jpg)
396
+ (l) SparseFed
397
+ Figure 9: Comparing performances of different attacks on TinyImageNet.
398
+
399
+ model is highly random, so even though A3FL causes the model's accuracy to drop to $7.33\%$ , we cannot solely conclude that A3FL will reduce the model utility. In general, A3FL does not influence the global model utility. We also observe a similar phenomenon on CIFAR-10, as shown in Table 1.
400
+
401
+ # B.2 A3FL achieves higher ASRs
402
+
403
+ We compare the performance of attacks on CIFAR-10 against defenses that are not designed for backdoor attacks in Figure 8. Observe that A3FL achieves the highest ASR under most settings. When the defense is Median, A3FL is the only attack that can achieve a high ASR (over $80\%$ ). We further show the attack performance of A3FL on TinyImageNet in Figure 9, where we observe a similar phenomenon.
404
+
405
+ # B.3 A3FL has a longer lifespan
406
+
407
+ In Figure 10, we show that A3FL has a significantly longer lifespan than other baselines with different defenses applied. For instance, when the defense is RobustLR, A3FL can still achieve an ASR of
408
+
409
+ ![](images/ecc0ec0343d1c2ac528920a7c59dbde5c8d627a39f253b139b1040d32ef0e456.jpg)
410
+
411
+ ![](images/730e2cdab25687a75feaef6a342a531dfa045be14d09246458544a5666cec162.jpg)
412
+
413
+ ![](images/cf639ed49d541474a95e1dbcc75f782a22fd11be99f5ea001e0f950d6011e48c.jpg)
414
+ (a) FedAvg, $\mathrm{P} = 5$
415
+ (c) Deepsight, $\mathrm{P} = 5$
416
+
417
+ ![](images/00dba8cc4ddf280e25a299b2e6d457f4eb713f589b47ad6d963a8cb442b8194b.jpg)
418
+ (b) RobustLR, $\mathrm{P} = 5$
419
+ (d) CRFL, $\mathrm{P} = 20$
420
+
421
+ ![](images/01743c988cde173fa24c18cf9237819d608365763e88cf6c4e0ef4c613d24742.jpg)
422
+ (a) ACC
423
+ Figure 11: Attack performances against CRFL with different $\sigma$ .
424
+
425
+ ![](images/f75809f1ce345ad182ec5444cc0f81f157ae5b607c668a8fe3f6dec07b9180df.jpg)
426
+ Figure 10: A3FL has a longer lifespan.
427
+ (b) ASR
428
+
429
+ ![](images/0353d5b0e6cf58e40db34616476600b8692cc54bcf492e07b77bb33a5ab6c66c.jpg)
430
+ (a) Norm Clipping
431
+ Figure 12: Attack performances under different Dirichlet concentration parameters.
432
+
433
+ ![](images/13ea4886a1c00080f0739c2f3d6028367dce8cacf03b3ce5db644d17d24f3a01.jpg)
434
+ (b) CRFL
435
+
436
+ $62.37\%$ at 1000 rounds after the attack ends. In contrast, the attack success rates of other attacks drop below $50\%$ in less than 150 rounds. Note that when we use CRFL, we set the number of compromised clients $P = 20$ since when there are only 5 compromised clients, all attacks except A3FL failed to achieve high ASR (see Figure 2).
437
+
438
+ # B.4 Ablation study on component importance
439
+
440
+ We study A3FL with and without the adversarial adaptation loss to test the importance of this component, under FedAvg with $P = 20$ compromised clients among all clients. As shown in Table 3, the adversarial adaptation loss can effectively improve the durability of A3FL. Observe that A3FL achieves an ASR of $97.66\%$ at 500 communication rounds after the attack and $84.65\%$ at 1,000 communication rounds after the attack. In comparison, A3FL without the adversarial adaptation loss exhibits ASRs that are $4.31\%$ and $15.64\%$ lower than A3FL at these two points.
441
+
442
+ Table 3: Effect of different components in A3FL.
443
+
444
+ <table><tr><td>ASR(%) ↓ Rounds after attack →</td><td>0</td><td>500</td><td>1000</td></tr><tr><td>A3FL without adversarial adaptation</td><td>100.0</td><td>93.35</td><td>69.01</td></tr><tr><td>A3FL</td><td>100.0</td><td>97.66</td><td>84.65</td></tr></table>
445
+
446
+ ![](images/08b1cfb805dde83fc90eb4e544b2fd451408553aac2203a195363bdebf2ac30e.jpg)
447
+ (a) Norm Clipping
448
+
449
+ ![](images/b67aecf9b85634c6b1c2e196ae4dc5fd07eea09eabc7ebb4d6dbdf3ee632cbba.jpg)
450
+ (b) Krum
451
+
452
+ ![](images/e33deb7b9789b3b6abf539ba55399cb68f206139010d1f2c3d0f4bcceb7283c6.jpg)
453
+ (c) CRFL
454
+
455
+ ![](images/6948e9b74a0653362bf88a780f3434d15617706363519dbea986fa9133408b25.jpg)
456
+ Figure 13: Attack performances when the attack starts at the first communication round.
457
+ (a) A3FL
458
+
459
+ ![](images/d426b4743faee2964f0abd030d2b53691f644477aa354711ac7a685e27e9222a.jpg)
460
+ (b) F3BA
461
+
462
+ ![](images/8a5b7e4d89679c6079da6f50f6962e1ad7e14f50eb43d878fb1df173d630075c.jpg)
463
+ (c) CerP
464
+ Figure 14: ASRs against Krum.
465
+
466
+ ![](images/bf35db0ef667d6ed681cb1ac1b4ecbb5b00f2275e688aabdfd8d4d1012c8a337.jpg)
467
+ (d) DBA
468
+
469
+ ![](images/a3152e44147d95942d7e21086265fd7d16a571706ed6239c8d80a506e81b1417.jpg)
470
+ (e) Neurotoxin
471
+
472
+ # B.5 Impact of $\sigma$ on CRFL Effectiveness
473
+
474
+ Figure 11 shows the ACC and ASR when applying CRFL with different $\sigma$ . Observe that as $\sigma$ increases, CRFL achieves better robustness, indicated by a lower ASR. However, the ACC of the global model also drops rapidly from $90.25\%$ to $67.33\%$ as $\sigma$ increases from 0.001 to 0.01, which is unacceptable. Furthermore, when there are more compromised clients, A3FL can still achieve a high ASR even with a large $\sigma = 0.01$ . We can thus conclude that CRFL cannot sufficiently mitigate A3FL under different $\sigma$ .
475
+
476
+ # B.6 Impact of Data Heterogeneity
477
+
478
+ We adjust the Dirichlet concentration parameter $h = 0.09, 0.9, 9$ to study whether data heterogeneity influences the performance of A3FL. As shown in Figure 12, A3FL achieves a high ASR regardless of $h$ . When the defense is Norm Clipping and $h = 0.09$ , A3FL achieves a lower ASR. This can be explained by the fact that a smaller $h$ indicates a more non-i.i.d. data distribution; the local training set held by the attacker is therefore far from the global data distribution, which increases the difficulty of injecting the backdoor. However, the attack success rate is still high (over $60\%$ ) and quickly increases as the number of compromised clients increases.
479
+
480
+ # B.7 The impact of attack window
481
+
482
+ We evaluate A3FL against baseline attacks when the attack window starts at the first communication round and ends at the 100th communication round. As shown in Figure 13, A3FL still remarkably outperforms other baseline attacks. For instance, when the defense mechanism is Norm Clipping and there are 5 compromised clients, the ASR gap between A3FL and the other baseline attacks is at least $62.4\%$ , which is even larger than the gap under the default settings. However, we also observe that when the attack starts from the first communication round and there are only a few compromised clients (1 or 2), the ASRs of all attacks decrease in comparison to the default settings. This can be explained by the fact that at the beginning of the training process the global model changes substantially, so the backdoor is easily erased when there are only a few compromised clients. We also observe that when the attack starts from scratch, all attacks fail to have a satisfactory lifespan, since the model is far from convergence at the 100th communication round. Therefore, our evaluation of lifespan is conducted following the configuration provided by Neurotoxin [12].
483
+
484
+ ![](images/a46a80e773dd72f5463763db5024ae23c85c9340cc6ba41cc1896a0a52aca712.jpg)
485
+ (a) ASR
486
+
487
+ ![](images/5bb8b49ba3ce3f59835ae874b7340552f1bc9d551caef9b5e2e76a99c07ce0d4.jpg)
488
+ (b) Trigger Size
489
+ Figure 15: Attack performances of DBA using the original trigger design. (a) DBA-bar denotes the DBA attack with the original trigger design proposed in [11], in which the trigger consists of four white bars, while DBA denotes the DBA attack with the trigger designed as a red square. (b) Trigger size refers to the length of each white bar. (c) Trigger gap $\{\mathrm{Gap}_x, \mathrm{Gap}_y\}$ refers to the distance between the bars. (d) Trigger location $\{\mathrm{Shift}_x, \mathrm{Shift}_y\}$ represents the distance from the trigger to the edge of the image.
490
+
491
+ ![](images/caa14fbf21cfcffb1a2491e81930b699f778082249fbe48ccb682b73990833a6.jpg)
492
+ (c) Trigger Gap
493
+
494
+ ![](images/dfd144fa669075c95dffebf7a7342f7774248eab20eb44008c4e01e289120f57.jpg)
495
+ (d) Trigger Location
496
+
497
+ # B.8 Case study on Krum
498
+
499
+ We perform a case study on Krum to gain insight into why A3FL outperforms other baselines. In Figure 14 we record the ASRs and place a marker on the line whenever Krum selects an attacker-compromised client at that round. Recall that Krum selects one client at each round and only uses the selected client's update to update the global model. Therefore, the chance that a compromised client is selected by the server increases if the backdoor is more stealthy. We have the following observations: 1) fixed-trigger attacks are more frequently selected by the server, while trigger-optimization attacks are selected only twice; 2) fixed-trigger attacks achieve lower ASR even when selected by the server. However, observe that once selected, A3FL quickly achieves $100\%$ ASR, because A3FL maintains a higher ASR when transferred to the global model, as stated above. A3FL is also durable after being selected, leading to a higher ASR at the end of the attack. In comparison, F3BA is selected at the 26th round and achieves $\approx 80\%$ ASR, but its ASR quickly drops after that. CerP is also selected twice, yet it cannot achieve as high an ASR as A3FL and F3BA do, which is caused by the strict regularization on the local model bias. In addition, the ASR of CerP also drops quickly when the compromised clients are not selected by the server.
500
+
501
+ # B.9 The impact of DBA trigger pattern
502
+
503
+ In our experiments, we set the trigger pattern of DBA to be a red square at the upper left corner. However, in [11], the trigger is designed as four white lines. We, therefore, discuss the performance of DBA when using the original trigger design. The original trigger design of DBA is determined by three hyperparameters: trigger size (TS), trigger gap (TG), and trigger location (TL). In particular, the trigger gap consists of a horizontal gap $(\mathrm{Gap}_x)$ and a vertical gap $(\mathrm{Gap}_y)$ . The trigger location consists of a horizontal shift $(\mathrm{Shift}_x)$ and a vertical shift $(\mathrm{Shift}_y)$ . We explain these hyperparameters in Figure 15b, 15c, and 15d respectively. Following the default settings in [11], we set $\{\mathrm{TS}, \mathrm{TG}, \mathrm{TL}\} = \{4, (6, 6), (0, 0)\}$ .
504
+
505
+ We compare the attack performance of DBA and DBA-bar (DBA with the original trigger design) in Figure 15a. Observe that with the original trigger design, DBA-bar achieves an even lower ASR than DBA. This supports that the default trigger design in our experiments does not degrade the attack performance of DBA; on the contrary, DBA achieves an even higher ASR without the original trigger design.
506
+
507
+ # B.10 Transferability of A3FL under different settings
508
+
509
+ We further evaluate the attack performance of A3FL on more datasets, defenses, and model architectures. In particular, we record the performance of A3FL against FLAME [27] in Figure 16a. We also evaluate A3FL on other model architectures [49] and datasets [43] in Figures 16b and 16c. Observe that A3FL consistently outperforms baseline attacks under these different settings.
510
+
511
+ ![](images/e551bb067535438033e740910c0cb0cbd3c6f565e383ada1567329c5a1d8316c.jpg)
512
+ (a) FLAME
513
+
514
+ ![](images/90207167c70ae36ba96d8c218f77df1a60b7a3141fd7f2f758889945b70f9f3d.jpg)
515
+ (b) VGG-16
516
+
517
+ ![](images/51dafc36ecaf0eb1946b827daa71c62f42786435f97ea1e8d86e4ababffd2129.jpg)
518
+ (c) FEMNIST
519
+ Figure 16: Attack performances of A3FL under different settings.
520
+
521
+ # B.11 Discussion on Ethical Implications
522
+
523
+ We acknowledge that proposing a new backdoor attack on federated learning has potential ethical implications. A3FL mainly focuses on image classification, which can be deployed in security-sensitive applications such as human face recognition. However, discovering a new backdoor attack and mitigating its threats is necessary to improve the robustness of the federated learning paradigm. Safeguarding the integrity and ethical dimensions of federated learning is crucial to ensure the best interests of individuals and society. We believe that future work can eliminate the threat of the proposed attack, and it is important to focus research on FL defenses.
a3fladversariallyadaptivebackdoorattackstofederatedlearning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f3127f59c6528c37353fe493ae919fb341077be955aac57d94d637cab2efeeb
3
+ size 859899
a3fladversariallyadaptivebackdoorattackstofederatedlearning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2c225eb748957c6f4e6a45cf171e43094bc6db421b7d3f733044900cc54e628c
3
+ size 680998
abdiffuserfullatomgenerationofinvitrofunctioningantibodies/cb6c0a83-7567-4bb7-b689-e00aee5960f2_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0fb700f4963ae0bcdde3554c61889f28fb75995da632f2174d340c7c47e8eb56
3
+ size 201025
abdiffuserfullatomgenerationofinvitrofunctioningantibodies/cb6c0a83-7567-4bb7-b689-e00aee5960f2_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:031b8c5886ccc5dd1b8a2749016c99676722929097b23d277ad8196ea9b36f80
3
+ size 236860
abdiffuserfullatomgenerationofinvitrofunctioningantibodies/cb6c0a83-7567-4bb7-b689-e00aee5960f2_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1bbd5187ce3a2a484ea5bfc875979969344f89ebfa8d6df7e17f20e525894b90
3
+ size 13466495
abdiffuserfullatomgenerationofinvitrofunctioningantibodies/full.md ADDED
The diff for this file is too large to render. See raw diff
 
abdiffuserfullatomgenerationofinvitrofunctioningantibodies/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c467f80903838433a908f416a3cf6d94ea454eb4ddaaf1898c40d7053489bdcb
3
+ size 1156839
abdiffuserfullatomgenerationofinvitrofunctioningantibodies/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:31aad860e3472b7fb8006bf5782ab1771a56c746b433c4fde76e3c6b9e8e797e
3
+ size 972710
abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/7f9850ff-ded7-44ba-9f78-8655b5dbc4a3_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:829705de0fcaf95db7924ea60747fdec3c9b1ebac594b1f3bfd18a32a1abcbd0
3
+ size 91112
abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/7f9850ff-ded7-44ba-9f78-8655b5dbc4a3_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:30f75b50c2bc83e139ce685fe725587e267e2852051fe2feab99aa0eb80cab6a
3
+ size 120799
abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/7f9850ff-ded7-44ba-9f78-8655b5dbc4a3_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a70518dbdd30b1953db3233bf733c9018a19c36ba40720050a30cc2df7f1b83e
3
+ size 8980786
abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/full.md ADDED
@@ -0,0 +1,288 @@
1
+ # AbdomenAtlas-8K: Annotating 8,000 CT Volumes for Multi-Organ Segmentation in Three Weeks
2
+
3
+ Chongyu Qu $^{1}$ Tiezheng Zhang $^{1}$ Hualin Qiao $^{2}$ Jie Liu $^{3}$ Yucheng Tang $^{4}$ Alan L. Yuille $^{1}$ Zongwei Zhou $^{1,*}$
4
+
5
+ <sup>1</sup>Johns Hopkins University <sup>2</sup>Rutgers University <sup>3</sup>City University of Hong Kong <sup>4</sup>NVIDIA
6
+
7
+ Code & Data: https://github.com/MrGiovanni/AbdomenAtlas
8
+
9
+ # Abstract
10
+
11
+ Annotating medical images, particularly for organ segmentation, is laborious and time-consuming. For example, annotating an abdominal organ requires an estimated rate of 30–60 minutes per CT volume based on the expertise of an annotator and the size, visibility, and complexity of the organ. Therefore, publicly available datasets for multi-organ segmentation are often limited in data size and organ diversity. This paper proposes an active learning procedure to expedite the annotation process for organ segmentation and creates the largest multi-organ dataset (by far) with the spleen, liver, kidneys, stomach, gallbladder, pancreas, aorta, and IVC annotated in 8,448 CT volumes, equating to 3.2 million slices. The conventional annotation methods would take an experienced annotator up to 1,600 weeks (or roughly 30.8 years) to complete this task. In contrast, our annotation procedure has accomplished this task in three weeks (based on an 8-hour workday, five days a week) while maintaining a similar or even better annotation quality. This achievement is attributed to three unique properties of our method: (1) label bias reduction using multiple pre-trained segmentation models, (2) effective error detection in the model predictions, and (3) attention guidance for annotators to make corrections on the most salient errors. Furthermore, we summarize the taxonomy of common errors made by AI algorithms and annotators. This allows for continuous improvement of AI and annotations, significantly reducing the annotation costs required to create large-scale datasets for a wider variety of medical imaging tasks.
12
+
13
+ # 1 Introduction
14
+
15
+ Medical segmentation is a rapidly advancing task that plays a vital role in diagnosing, treating, and radiotherapy planning [37, 10, 32, 103, 40]. Building datasets of a substantial number of annotated medical images is critical for training and testing artificial intelligence (AI) models<sup>1</sup> [104, 52, 70]. However, medical datasets carefully annotated by an annotator are infeasible to create at a huge scale using conventional annotation methods [65] because performing per-voxel annotations is expensive and time-consuming [89, 55, 53]. As a result, publicly available datasets for multi-organ segmentation are often limited in data size (a few hundred) and organ diversity<sup>2</sup> as reviewed in Figure 1(a).
16
+
17
+ ![](images/11ce199b7f6a0f440b15dac04fea5cebf21bcf8c630df801150e1126fe4956e6.jpg)
18
+ (b)
19
+ Figure 1: (a) An overview of public datasets. AbdomenAtlas-8K stands out from other datasets due to its large number of annotated CT volumes. We have reviewed dataset names (and licenses). (b) Volume distribution of eight organs. The significant variations within and across organs presented in our AbdomenAtlas-8K present challenges for the multi-organ segmentation and the generalizability of models to different domains. More comparisons can be found in Appendix Table 3 and Figure 9.
20
+
21
+ There is a pressing need to expedite the annotating procedure. The latest endeavors to construct a fully annotated dataset remained to ask radiologists to manually annotate each and every missing label [58, 36, 41, 78, 72, 9, 85, 21, 55, 99]—the same annotating procedure used a decade ago [20] or even earlier [64, 17]. Such a procedure is extremely costly, particularly for medical images, and will be time-consuming to create a large-scale dataset for every medical specialty [69, 90, 13, 83, 65]. To improve the annotation efficiency, active learning has been widely explored by combining the radiologist's competence and the computer's capability [106, 107, 71, 8, 11]. However, most studies in the active learning literature have been retrospective in nature, in which the annotating procedure was simulated by simply retrieving labels for the data without physically involving the radiologists in the loop to create/revise the labels [105, 66, 87, 94]. In contrast, our study is proactive, not only proposing a novel active learning procedure but also implementing it to actually construct a large dataset of 8,000 fully annotated CT volumes within a very short span of time, leveraging the synergy between medical professionals and AI algorithms in practice.
22
+
23
+ To overcome the deficiency of the conventional annotation method—which involves manually annotating each volume, slice, and voxel—we propose an efficient method to enable rapid organ annotation across enormous CT volumes. The efficiency has been demonstrated on 15 datasets and 8,448 abdominal CT volumes (1.2 TB in total). Our method enables high-quality, per-voxel annotations for eight organs and anatomical structures in all the CT volumes in three weeks (5 days per week; 8 hours per day), justified in §3.3. The constructed dataset is named AbdomenAtlas-8K, and its detailed statistics can be found in Figure 1. As a comparison, the conventional annotation methods [45, 26, 29, 51, 6] would take up to 1,600 weeks (30.8 years) to complete such a task<sup>3</sup>.
24
+
25
+ Our key innovation is that rather than time-consuming annotating organs voxel by voxel, we leverage existing data and incomplete labels from an assembly of 16 public datasets. AI models are trained on the labeled part of data and generate predictions for a large number of unlabeled parts of data. We then actively revise the model predictions by only selecting the most salient part of the regions for annotators to correct. Moreover, our study provides a taxonomy of common errors made by AI algorithms and annotators. This taxonomy minimizes error duplication and augments data diversity, ensuring the sustainability of continuous revision of AI algorithms and organ annotations. Our novel active learning procedure involves only one trained annotator, three AI models, and commercial software $(\mathrm{Pair}^4)$ . After the procedure, annotators confirm the annotation of all the CT volumes by visual inspection. A large-scale, but private, dataset [92, 93] is used for external validation.
26
+
27
+ In summary, we made two major contributions. Firstly, AbdomenAtlas-8K was a composite dataset that unified datasets from 26 different hospitals worldwide. In total, over $60.6 \times 10^{9}$ voxels were annotated in comparison with $4.3 \times 10^{9}$ voxels annotated in the public datasets. We scaled up the organ annotation by a factor of 15 and released the masks for 5,195 of the 8,448 CT volumes. This large-scale, multi-center dataset can also impact downstream clinical tasks such as surgery, treatment, abdomen atlas, and anomaly detection. Secondly, the proposed active learning procedure can generate an attention map to highlight the regions to be revised by radiologists, reducing the annotation time from 30.8 years to three weeks and accelerating the annotation process by an impressive factor of 533. This strategy can quickly scale up annotations for creating many other medical imaging datasets.
28
+
29
+ # 2 Related Work
30
+
31
+ Large dataset construction. Kirillov et al. [43] created a huge natural image dataset of 1B masks and 11M images, but it lacks semantic information, and its efficacy is limited when applied to 3D volumetric medical images [52, 34]. Our AbdomenAtlas-8K, containing 8,448 CT volumes with per-voxel annotated eight abdominal organs, is the largest annotated CT dataset at the time this paper is written. We hereby review the existing public datasets that contained over 500 CT volumes with per-voxel annotated organs [23]. For example, AMOS [40], TotalSegmentator [89], and AbdomenCT-1K [55] provided 500, 1,024, and 1,112 annotated CT volumes, respectively. Both TotalSegmentator and AMOS derived their data from a single country, with the former reflecting the Central European population from Switzerland and the latter representing the East Asian population from China. In comparison, AbdomenAtlas-8K presented a greater data diversity because the CT volumes were collected and assembled from at least 26 different hospitals worldwide. While AbdomenCT-1K sourced data from 12 hospitals, AbdomenAtlas-8K contained approximately eight times the CT volumes (8,448 vs. 1,000) and twice the variety of annotated organs (8 vs. 4). Concurrently, we are actively expanding the classes covered by AbdomenAtlas-8K. Starting with the set of 104 classes found in TotalSegmentator, we aim to significantly diversify the range of classes covered.
32
+
33
+ Active learning for segmentation. Uncertainty and diversity are key criteria in active learning. Uncertainty-based criteria assess the value of annotating a data point based on the uncertainty (e.g., entropy) of AI predictions [18, 19, 56, 75, 5, 63, 11]. On the other hand, diversity-based criteria aim to select unannotated samples that differ from each other and from those already annotated [48, 25, 44, 59, 81, 82, 76]. For additional active learning methods, we refer the reader to comprehensive literature reviews [83, 62, 31, 71]; but these methods face computational complexity challenges with segmentation tasks and large unannotated data pools. To overcome this, we summarized typical errors made by humans and computers. Our active learning procedure considered the anatomical priors, uncertainty in AI prediction, and data diversity, and, importantly, pivoted a prospective application of active learning rather than retrospective studies. Moreover, the derived criteria can generate an attention map, pinpointing areas necessitating revision, thereby enabling precise detection of high-risk prediction errors (evidenced in Table 1). Consequently, our strategy markedly diminished the workload and annotation time for annotators by a factor of 533.
34
+
35
+ # 3 AbdomenAtlas-8K
36
+
37
+ Overview. We propose an active learning procedure comprising two components: error detection from AI predictions (§3.1) and manual revision performed by radiologists to review and edit the most significant errors detected (§3.2). By repeatedly implementing these two components, it is possible to expedite the creation of fully annotated datasets for multi-organ semantic segmentation. Finally, we describe the data construction strategies (§3.3). In this work, we have applied our approach to 8,448 CT volumes in portal (44%), arterial (37%), pre- (16%), and post-contrast (3%) phases.
38
+
39
+ # 3.1 Label Error Detection Revealed by Attention Maps
40
+
41
+ Figure 2 shows the process to generate attention maps for eight target organs. The attention maps can localize the potential error regions for human annotators to review and edit AI predictions.
42
+
43
+ (1) Inconsistency. To quantify inconsistency, we calculate the standard deviation of the soft predictions produced by multiple AI architectures, including Swin UNETR, nnU-Net, and U-Net. Regions
44
+
45
+ ![](images/a22da57c87a54170ad0f5629df987e58918ef7964057bd2abda7ccc881ef9775.jpg)
46
+ Figure 2: Attention map generation. The criteria inconsistency, uncertainty, and overlap refer to regions where model predictions diverge, exhibit high entropy values, and where multiple organ predictions overlap, respectively. The attention map visualizes a combination of these regions, drawing radiologists' attention to where AI predictions might falter. A standard color scheme helps in highlighting regions that merit closer review and revision. More examples are in Appendix Figure 11.
47
+
48
+ with high standard deviation indicate higher inconsistency and may require further revision.
49
+
50
+ $$
51
+ \text{Inconsistency}_{i,c} = \sqrt{\frac{\sum_{n=1}^{N} \left(p_{i,c}^{n} - \mu_{i,c}\right)^{2}}{N}}, \tag{1}
52
+ $$
53
+
54
+ where the subscript $c$ represents class $c$ of our eight target organs. For each voxel $i$ , $p_{i,c}^{n}$ represents the soft prediction value obtained from the $n$ -th AI architecture of class $c$ at that voxel's index $i$ , ranging from 0 to 1. $\mu_{i,c}$ represents the average prediction value obtained by combining the results of three AI architectures at the same voxel index. In this study, there are three AI architectures, so $N$ is equal to three. Then, the Inconsistency $_{i,c}$ value is determined by the standard deviation of the soft prediction values from the three AI architectures.
55
+
56
+ (2) Uncertainty. To estimate the degree of certainty linked with the AI prediction of eight target organs, we determine the entropy of the soft predictions for each organ. Regions of higher entropy values suggest diminished confidence and increased ambiguity, potentially escalating the chances of encountering prediction errors within that specific area [98], which may necessitate further revision.
57
+
58
+ $$
59
+ \text{Uncertainty}_{i,c} = -\frac{\sum_{n=1}^{N} p_{i,c}^{n} \times \log\left(p_{i,c}^{n}\right)}{N}. \tag{2}
60
+ $$
61
+
62
+ The $\text{Uncertainty}_{i,c}$ value is averaged over the different AI architectures $(N = 3)$ .
63
+
64
+ (3) Overlap. The overlap in organ prediction can indicate potential errors. If a voxel is predicted to be a part of both the liver and kidneys, even without ground truth, we can reasonably forecast a prediction mistake. We use the following measure to detect organ overlap in predictions.
65
+
66
+ $$
67
+ \text{Overlap}_{i,c} = \begin{cases} 1 & \text{if } p_{i,c}^{n} > 0.5 \text{ and } \exists\, c' \neq c: p_{i,c'}^{n} > 0.5 \\ 0 & \text{otherwise} \end{cases} \tag{3}
68
+ $$
69
+
70
+ We generate pseudo labels by applying a threshold of 0.5 to the probability values. Pseudo labels refer to organ labels predicted by AI models without any additional revision or validation by human annotators. The overlap value, denoted as $\text{Overlap}_{i,c}$ , is determined based on the following criteria: if the prediction value for class $c$ exceeds the threshold of 0.5 for at least one AI architecture and there exists a prediction value not belonging to class $c$ that exceeds 0.5 for the same voxel index $i$ , then the overlap value is set to 1; otherwise, it is set to 0.
71
+
72
+ As a result, an attention map is generated to help annotators quickly locate regions that require revision or confirmation. We combine the inconsistency, uncertainty, and overlapping regions to produce the attention map. Consequently, a higher $\text{Attention}_{i,c}$ value in the 3D attention map indicates a greater risk of a prediction error for that voxel.
73
+
74
+ $$
75
+ \text{Attention}_{i,c} = \text{Inconsistency}_{i,c} + \text{Uncertainty}_{i,c} + \text{Overlap}_{i,c} \tag{4}
76
+ $$
77
+
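+ A minimal NumPy sketch of Eqs. (1)-(4) is given below, assuming the soft predictions of the $N = 3$ architectures are stacked into an array of shape [models, classes, depth, height, width]; the small epsilon added inside the logarithm and the per-model evaluation of the overlap condition are implementation assumptions.
+
+ ```python
+ import numpy as np
+
+ def attention_map(p, eps=1e-8):
+     """Sketch of Eqs. (1)-(4). `p` holds soft predictions with shape
+     [N_models, C_classes, D, H, W]; returns a per-class attention map."""
+     mu = p.mean(axis=0)                                    # mean prediction per class/voxel
+     inconsistency = np.sqrt(((p - mu) ** 2).mean(axis=0))  # Eq. (1): std across architectures
+     uncertainty = (-(p * np.log(p + eps))).mean(axis=0)    # Eq. (2): entropy averaged over models
+     pseudo = p > 0.5                                       # thresholded pseudo labels
+     overlap = np.zeros_like(mu)
+     for c in range(p.shape[1]):
+         other = np.delete(pseudo, c, axis=1).any(axis=1)   # another class predicted at the voxel
+         overlap[c] = (pseudo[:, c] & other).any(axis=0)    # Eq. (3), evaluated per model
+     return inconsistency + uncertainty + overlap           # Eq. (4)
+ ```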
78
+ To assess the attention map, we identified error regions, defined as the union of all false positive (FP) and false negative (FN) areas between the ground truth annotations (available in JHH [92]) and the pseudo labels predicted by AI. We then compare the attention maps with these error regions and calculate the sensitivity and precision, reported in §4.1.
79
+
80
+ # 3.2 Active Learning Procedure
81
+
82
+ Algorithm. Our active learning procedure has eight steps. $①$ Train an AI model from scratch, denoted as $\mathcal{M}_0$ , using 2,100 CT volumes from 16 partially labeled public datasets. This takes approximately 40 hours. $②$ Direct test the current model (e.g., $\mathcal{M}_0$ ) on all 8,448 CT volumes to segment eight organs. This takes around 12 hours. $③$ Compute organ-wise attention maps for each CT volume using the criteria of inconsistency, uncertainty, and overlap ( $\S 3.1$ ), which highlight the regions that potentially have prediction errors and require human revision. $④$ Compile a priority list sorted by the sum of attention maps. The larger the attention map is, the more urgent the CT volume requires revision. $⑤$ Ask the annotators to review the top $5\%$ (analyzed from Figure 4) CT volumes from the list and revise the pseudo labels guided by the attention maps. $⑥$ Reassemble the annotation of each revised CT volume based on the label priority (detailed in the next paragraph). $⑦$ Fine-tune the current model $(\mathcal{M}_t)$ using the reassembled annotation to obtain $\mathcal{M}_{t+1}$ . $⑧$ Repeat steps $②-⑦$ until the annotators confirm that the CT volume on the top of the prioritized list in step $④$ does not need further revision, suggesting that AI predictions of the most important CT volumes have minimal errors.
83
+
84
+ Technical details in $⑥$ : label priority. In the assembly process, the utmost priority is given to the original annotations supplied by each public dataset. Subsequently, we assign secondary priority to the revised labels from our annotators. The pseudo labels, generated by AI models, are accorded the lowest priority. Based on this label priority, we reassemble the labels for each CT volume. This involves three different scenarios: inherited from the original labels of the public dataset if available, revised (if our annotator confirms that the pseudo labels have errors), and unchanged (if our annotator confirms that the pseudo labels predicted by the AI are correct). Figure 3 displays examples of the reassembled annotations from AbdomenAtlas-8K.
85
+
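+ The priority rule can be sketched as follows; the snippet assumes per-voxel class maps in which a value of -1 marks voxels without a revised or original label, which is an implementation convention for illustration rather than part of the released pipeline.
+
+ ```python
+ import numpy as np
+
+ def reassemble_label(pseudo, revised=None, original=None):
+     """Label-priority sketch: start from the AI pseudo label, overwrite with the
+     annotator's revision where available, then with the public dataset's original
+     annotation."""
+     label = pseudo.copy()
+     if revised is not None:
+         label = np.where(revised >= 0, revised, label)    # annotator revision beats AI prediction
+     if original is not None:
+         label = np.where(original >= 0, original, label)  # original public labels have top priority
+     return label
+ ```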
86
+ ![](images/eeae77599469d75aa26c8e6e92509cc2ec7a4e79fcef4836f50ef60c1517657b.jpg)
87
+ Figure 3: AbdomenAtlas-8K annotations. The annotations for our eight target organs (opaque) have been revised by our annotator or inherited from the original annotations of the partially labeled public datasets. The remaining organs and tumors (transparent) can be used for future research.
88
+
89
+ ![](images/d25ccebd1a9791c08ebec68f7c639519eb0d2d7d1b7648e73d5a6ab8d4515d64.jpg)
90
+ Figure 4: Attention size distribution. The y-axis denotes the attention size, the sum of Equation 4 over eight classes; each point corresponds to a distinct CT volume. A larger attention size implies a greater need for revision in various regions. While most CT volumes exhibit a small attention size, a few notable outliers (marked in red) stand out. These outliers are of high priority for revision by human experts. According to the figure, the ratio of outliers is about $5\%$ (highlighted in red). The $5\%$ is estimated by the plot and also related to the budget of human revision for each Step in the active learning procedure. It is essential to emphasize that roughly $5\%$ of CT volumes within each dataset are highly likely to contain predicted errors, requiring further revision by our annotator.
91
+
92
+ Figure 4 illustrates the attention size of CT volumes within each partially labeled public dataset. It is important to note that the BTCV [46] and AMOS22 [40] datasets already have original annotations for our eight target organs. By analyzing the curve depicting the decreasing value of the attention map, we observed that among the 5,195 CT volumes from the 16 partially labeled abdominal datasets, only
93
+
94
+ ![](images/56675d4145626412cda9e3616d0ec496552059961265dd52d6362982f2f61143.jpg)
95
+ Figure 5: (a) Error taxonomy. To minimize repetitive errors during human revision, we collate and analyze common mistakes made on the data by either AI or human annotators. (b) Errors in pseudo labels. For example, the cavity in the stomach is missing; the enlarged pancreas tail caused by a pancreatic tumor is missing; the enlarged gallbladder is missing. (c) Errors in human annotations. The high intra-annotator variability can be attributed to the ambiguity in defining organ or tumor boundaries, whereas the high inter-annotator variability is often a result of inconsistent annotation protocols and guidelines across different institutes.
96
+
97
+ ![](images/6eb5e07abe20ce942b54a1d3ea7f2625a6239c2faa2f90bdd3b118688f77f29c.jpg)
98
+
99
+ ![](images/f1c98e3e539e93b9a807bdfead9b92c63c0be3ae03c5bc46a55ec6e5bc071367.jpg)
100
+
101
+ $5\%$ of the volumes exhibited significantly large attention map values. These high values indicate a higher risk of prediction errors, suggesting the need for further revision. Extrapolating this finding to a larger dataset of 8,000 CT volumes, we can estimate that the annotator will need to confirm and revise approximately 400 CT volumes. Assuming a rate of 15 minutes per CT volume and an 8-hour workday, this process would take approximately 12.5 days to complete. The attention map values are expected to decrease after fine-tuning $\mathcal{M}_0$ with revised labels as a new benchmark.
102
+
103
+ Figure 5(a) summarizes typical errors encountered in the active learning procedure into two categories: pseudo label errors and ground truth errors. As illustrated in Figure 5(b), pseudo label errors refer to the errors in the AI predictions. These errors often arise from irregular organ shapes, such as the absence of a cavity in the stomach or the omission of an enlarged pancreas and gall bladder. Discrepancies in CT volumes between the training and testing data, such as variations in scanners, protocols, reconstruction methods, and contrast enhancement, can also contribute to these inaccuracies. These issues result in the model's predictions lacking accuracy in representing these specific anatomical structures. As examples in Figure 5(c), ground truth errors are errors in AI model predictions that result from inaccuracies in the human annotations used for training the model. These inaccuracies may arise due to unclear organ boundaries or inconsistency in labeling protocols across different institutions, introducing variations into the model's predictions.
104
+
105
+ # 3.3 Dataset Construction
106
+
107
+ Annotators. Our study recruited three annotators, comprising a senior radiologist with over 15 years of experience and two junior radiologists with three years of experience. The senior radiologist undertook the task of annotation revision in the active learning procedure. Before releasing AbdomenAtlas-8K, two junior radiologists looked through the masks in the entire AbdomenAtlas-8K and made revisions if needed (i.e., our method missed the error regions). In addition, the two junior radiologists conducted the inter-annotator variability analysis (Figure 8) and recorded the time for conventional methods when each organ must be annotated voxel by voxel (Appendix Table 4).
108
+
109
+ Efficiency. Why 30.8 years? We considered an 8-hour workday, five days a week. A trained annotator typically needs 60 minutes per organ per CT volume [65]. Our AbdomenAtlas-8K has a total of eight organs and around 8,000 CT volumes. Therefore, annotating the entire dataset requires $60 \times 8 \times 8000$ (minutes) / $60/8/5 = 1600$ (weeks) = 30.8 (years). Why three weeks? Using our
110
+
111
+ Table 1: Evaluation metrics on JHH. The sensitivity and precision are evaluated between our organ attention maps and organ error regions. Our attention map exhibits high average sensitivity and precision, indicating its effectiveness and accuracy in detecting false positives (FP) and false negatives (FN). This highlights its capability to precisely identify regions that need revision.
112
+
113
+ <table><tr><td>Metrics</td><td>Spl</td><td>RKid</td><td>LKid</td><td>Gall</td><td>Liv</td></tr><tr><td>Sensitivity</td><td>0.91 ± 0.25</td><td>0.93 ± 0.17</td><td>0.92 ± 0.18</td><td>0.99 ± 0.07</td><td>0.74 ± 0.33</td></tr><tr><td>Precision</td><td>0.68 ± 0.40</td><td>0.90 ± 0.22</td><td>0.85 ± 0.23</td><td>0.67 ± 0.33</td><td>0.88 ± 0.12</td></tr><tr><td>Metrics</td><td>Sto</td><td>Aor</td><td>IVC</td><td>Pan</td><td>Avg.</td></tr><tr><td>Sensitivity</td><td>0.98 ± 0.10</td><td>0.98 ± 0.11</td><td>0.98 ± 0.09</td><td>0.90 ± 0.21</td><td>0.93 ± 0.17</td></tr><tr><td>Precision</td><td>0.85 ± 0.16</td><td>0.82 ± 0.21</td><td>0.75 ± 0.22</td><td>0.91 ± 0.15</td><td>0.81 ± 0.22</td></tr></table>
114
+
115
+ ![](images/207cc644a55e7e75109bbb5d725fbf8a19e25295922f9075108342df962f2a34.jpg)
116
+ Figure 6: Error region vs. attention map. The discrepancy between the model's pseudo labels and the ground truth delineates the error regions in predictions, comprising both false positives (FP) and false negatives (FN). These error regions serve as the benchmark for evaluating the sensitivity and precision of our attention map ( $\S 4.1$ ).
117
+
118
+ active learning strategy, only 400 CT volumes require manual revision by the human annotator (15 minutes per volume). That is, we managed to accelerate the annotation process by a factor of $60 \times 8 / 15 = 32$ per CT volume. Therefore, we completed the entire annotation procedure within three weeks, as reported in the paper. Human effort: $400 \times 15$ minutes $/\ 60\ /\ 8 = 12.5$ days, plus approximately 8.5 days for training and testing the AI models.
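+
+ The arithmetic behind these estimates is simple enough to reproduce directly; the short script below is an illustrative sketch using only the constants quoted in the text, not part of any released codebase.
+
+ ```python
+ # Time-budget estimate for AbdomenAtlas-8K annotation (constants taken from the text).
+ MIN_PER_ORGAN   = 60      # minutes per organ with conventional voxel-wise annotation [65]
+ NUM_ORGANS      = 8
+ NUM_VOLUMES     = 8000    # approximate dataset size
+ REVISED_VOLUMES = 400     # volumes flagged by the attention maps for manual revision
+ MIN_PER_VOLUME  = 15      # minutes to revise one flagged volume
+ HOURS_PER_DAY, DAYS_PER_WEEK, WEEKS_PER_YEAR = 8, 5, 52
+
+ conventional_weeks = MIN_PER_ORGAN * NUM_ORGANS * NUM_VOLUMES / 60 / HOURS_PER_DAY / DAYS_PER_WEEK
+ print(conventional_weeks, conventional_weeks / WEEKS_PER_YEAR)   # 1600.0 weeks, ~30.8 years
+
+ print(MIN_PER_ORGAN * NUM_ORGANS / MIN_PER_VOLUME)               # 32.0x speed-up per revised volume
+
+ human_days = REVISED_VOLUMES * MIN_PER_VOLUME / 60 / HOURS_PER_DAY
+ print(human_days)                                                 # 12.5 days (+ ~8.5 days of AI training/testing)
+ ```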
119
+
120
+ Dataset splits. For the 8,448 CT volumes in AbdomenAtlas-8K, we split them into training (5,500 CT volumes), validation (500), and test (2,448) sets. Each volume contains per-voxel annotations of the spleen, liver, left and right kidneys, stomach, gallbladder, pancreas, aorta, and IVC. Note that the JHH dataset is a proprietary, multi-resolution, multi-phase dataset collected from Johns Hopkins Hospital [92, 42, 100, 47]. This dataset is used for external validation, including the assessment of the attention map quality (Table 1), AI generalizability (Table 2), the improvement of label quality in the active learning procedure (Appendix Table 5), and the performance of AI trained on a combination of public datasets and our AbdomenAtlas-8K (Appendix Table 6).
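+
+ For orientation, the split sizes and annotated structures can be summarized as a small configuration dictionary; the identifier names below are hypothetical placeholders and are not taken from the released dataset files.
+
+ ```python
+ # Hypothetical summary of the AbdomenAtlas-8K splits and annotated structures.
+ SPLITS = {"train": 5500, "val": 500, "test": 2448}
+ assert sum(SPLITS.values()) == 8448  # total number of CT volumes
+
+ STRUCTURES = [
+     "spleen", "liver", "kidney_left", "kidney_right",
+     "stomach", "gallbladder", "pancreas", "aorta", "postcava_ivc",
+ ]
+ ```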
121
+
122
+ # 4 Experiment & Result
123
+
124
+ # 4.1 Attention Map and Annotation Evaluation
125
+
126
+ Evaluation of attention map. Our attention map is evaluated on 1,000 CT volumes of the JHH dataset [92], which are not included in the training process. The evaluation is performed across the eight target organs. Once an error is detected, it counts as a hit; otherwise, as a miss. To evaluate the quality of the attention map, two metrics are used: sensitivity $= \mathrm{TP} / (\mathrm{TP} + \mathrm{FN})$ and precision $= \mathrm{TP} / (\mathrm{TP} + \mathrm{FP})$ , where a true positive (TP) means the attention map found real mistakes the AI made; a false negative (FN) means it missed some of the AI's mistakes; and a false positive (FP) means it found mistakes where the AI was actually right. Sensitivity and precision are both calculated at the volume (a group of voxels) level rather than the voxel level. We chose sensitivity and precision because this experiment is designed to evaluate an error detection task rather than a segmentation task (comparing the boundary of attention maps and error regions). They measure, respectively, how well the attention maps detect the real error regions and whether the detected errors are real errors. The results of the attention map evaluation are presented in Table 1. The mean values of sensitivity and precision for our attention maps, with respect to the eight target organs, are $0.93 \pm 0.17$ and $0.81 \pm 0.22$ . Figure 6 shows the error regions and our attention map. These results
127
+
128
+ ![](images/2fb9f2ca151528cc1aceb5f5468357b723b64fd4d14a60a7b0572d65e5fe7ada.jpg)
129
+ Figure 7: (a) Pseudo vs. revised labels. Pseudo labels are predicted by AI and then revised by annotators based on attention maps, resulting in revised labels. (b) AI predictions before vs. after fine-tuning. The post-fine-tuning results exhibit superior accuracy in segmenting the organ compared to the pre-fine-tuning result. Appendix Table 5 quantifies the DSC scores before and after fine-tuning.
130
+
131
+ ![](images/f2c46d620114ad86138328d0e7ef4e0d501f92ab36953b6861a396b743978be9.jpg)
132
+ (a) AI architecture variability
133
+
134
+ ![](images/0b5233053a8c028024a20920a7adbaf46e2b5732b6b164713cacc2bfe4cafd64.jpg)
135
+ (b) Inter-annotator variability
136
+ Figure 8: The DSC score in each cell is calculated between the corresponding row and column. (a) AI architecture variability. The segmentation predictions made by different AI architectures display minor variations for eight organs, as evidenced by the corresponding DSC scores. Overall, the AI predictions align closely with the annotations provided by human experts. (b) Inter-annotator variability. The DSC scores between AI predictions and human annotators consistently outperform the score between the two human annotators, suggesting a comparable annotation quality between AI predictions and human annotations in the segmentation of six organs.
137
+
138
+ suggest that our attention map effectively captures the false positives and false negatives in error regions, demonstrating a high degree of accuracy in identifying prediction errors.
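+
+ A minimal sketch of this volume-level evaluation is given below; it assumes pre-computed boolean flags per (volume, organ) pair and is illustrative only, not the authors' evaluation code.
+
+ ```python
+ import numpy as np
+
+ def detection_metrics(attention_flags, error_flags):
+     """Volume-level sensitivity and precision of the attention maps.
+
+     attention_flags: boolean array with one entry per (volume, organ) pair,
+                      True if the attention map flags that organ for revision.
+     error_flags:     boolean array of the same shape, True if the pseudo label
+                      of that organ actually contains an error.
+     """
+     a = np.asarray(attention_flags, dtype=bool)
+     e = np.asarray(error_flags, dtype=bool)
+     tp = np.sum(a & e)        # flagged real errors
+     fn = np.sum(~a & e)       # real errors that were missed
+     fp = np.sum(a & ~e)       # flagged regions where the AI was actually right
+     sensitivity = tp / (tp + fn) if (tp + fn) > 0 else float("nan")
+     precision = tp / (tp + fp) if (tp + fp) > 0 else float("nan")
+     return sensitivity, precision
+ ```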
139
+
140
+ Evaluation of human annotation quality. We provide a visualization of pseudo labels (generated by AI) and revised labels (annotated by humans) in Figure 7(a). The revised labels were used to fine-tune the AI; we then compared the pseudo labels predicted by the AI before and after fine-tuning for the same CT volume from the JHH dataset, as shown in Figure 7(b). After fine-tuning the AI using the revised labels, the AI was able to accurately segment the entire stomach on previously unseen CT volumes. This demonstrates the high quality of the revised labels and their efficacy in enhancing AI performance. We further quantified the human annotation quality, measured by Dice Similarity Coefficient (DSC) and normalized surface Dice (NSD), using 1,000 CT volumes from the JHH dataset. The results in Appendix Table 5 show continual improvements in label quality along the active learning procedure. For example, AI models exhibited a marked improvement in the (pseudo) labels of the aorta and postcava (IVC), jumping from $72.3\%$ to $83.7\%$ and $76.1\%$ to $78.6\%$ , respectively.
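+
+ For reference, the DSC used in these comparisons can be computed from two binary masks as in the sketch below; NSD additionally requires surface-distance computations, for which libraries such as MONAI provide implementations. This is an illustrative snippet, not the paper's evaluation code.
+
+ ```python
+ import numpy as np
+
+ def dice_coefficient(pred_mask, gt_mask, eps=1e-8):
+     """Dice Similarity Coefficient between two binary masks."""
+     pred = np.asarray(pred_mask, dtype=bool)
+     gt = np.asarray(gt_mask, dtype=bool)
+     intersection = np.logical_and(pred, gt).sum()
+     return 2.0 * intersection / (pred.sum() + gt.sum() + eps)
+ ```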
141
+
142
+ # 4.2 Label Bias and Segmentation Quality Evaluation
143
+
144
+ Evaluation of label bias. We first evaluate the predictions made by three AI architectures, i.e., Swin UNETR, nnU-Net, and U-Net, on CT volumes derived from the JHH dataset [92]. The segmentation predictions of these architectures are illustrated in Appendix Figure 10. We use the original annotations of the JHH dataset [92], which include our eight target organs. Consequently, for each CT volume, we have four predictions: three from the AI models and one from human experts. We then compute the DSC score between each pair across the eight organs. These results are presented in Figure 8(a) and demonstrate that while the three AI architectures produce slightly varied predictions, they closely resemble the annotations made by human experts. To prevent label bias toward a specific architecture, the final annotations in our AbdomenAtlas-8K are determined by averaging the three AI predictions. Our approach contrasts with the TotalSegmentator dataset [89], which relies exclusively on nnU-Net for segmentation and could therefore produce a biased dataset that impedes model generalization across different architectures or scenarios.
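+
+ One plausible reading of "averaging the three AI predictions" is to average the per-class probability maps of the three models before taking the argmax, as sketched below; the paper does not spell out the exact fusion rule, so this is an assumption.
+
+ ```python
+ import numpy as np
+
+ def ensemble_predictions(prob_maps):
+     """Fuse per-class probability maps from several architectures by averaging.
+
+     prob_maps: list of arrays of shape (C, D, H, W), one per model (e.g. the three
+     architectures above), each holding per-voxel class probabilities.
+     Returns the argmax label map of the averaged probabilities.
+     """
+     mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)  # (C, D, H, W)
+     return np.argmax(mean_prob, axis=0)                       # (D, H, W) label map
+ ```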
145
+
146
+ Evaluation of AI prediction quality. To assess the automated annotation quality in AbdomenAtlas-8K, we enlisted two additional human annotators to help modify the pseudo labels of our eight target organs. Due to time limitations, they were only able to revise six of the eight organs, specifically on the 17 CT volumes from the BTCV dataset [46]. We compute DSC scores comparing AI predictions with those annotated independently by the two human annotators, referred to as Dr1 and Dr2. Figure 8(b) presents the results for the six organs. The DSC scores between AI predictions and each human annotator (Swin UNETR vs. Dr1 or Swin UNETR vs. Dr2) consistently exceed the score between the two human annotators (Dr1 vs. Dr2). These findings suggest that the automated AI annotations fall within the range of inter-annotator variability, indicating that the quality of our automated annotations is comparable to that of human annotations.
147
+
148
+ # 4.3 Benchmarking
149
+
150
+ AbdomenAtlas-8K enables precision medicine for various downstream applications. We showcased one of the most pressing applications: early detection and localization of pancreatic cancer, an extremely deadly disease with a 5-year relative survival rate of only $12\%$ in the United States. The AI trained on a large, private dataset at Johns Hopkins Hospital (JHH) performed arguably better than typical radiologists [92, 47, 42], but this AI model and annotated dataset were inaccessible due to policy restrictions. Our paper now demonstrates that, using AbdomenAtlas-8K (composed entirely of publicly accessible CT volumes), AI can achieve similar performance when directly tested on the JHH dataset (see Table 2). This study is a concrete demonstration of how AbdomenAtlas-8K can be used to train AI models that generalize to CT volumes from novel hospitals and can be adapted to address a range of clinical problems. For a more in-depth analysis of the segmentation performance of AI models, we present the comprehensive category-wise score comparison across eight organs in Appendix Table 7.
151
+
152
+ Table 2: Benchmark results of AI models trained on AbdomenAtlas-8K. We directly apply four AI models, i.e., SwinUNETR, UNETR, U-Net, and SegResNet, trained on AbdomenAtlas-8K (public) to JHH (unseen, private) and compare their performance with the same models trained on JHH. The AI models trained on AbdomenAtlas-8K exhibit comparable average mDSC and mNSD across the eight organs to AI models trained on the JHH dataset, demonstrating the high generalization capacity of AI models trained on AbdomenAtlas-8K.
153
+
154
+ <table><tr><td rowspan="2">AI Models</td><td colspan="2">Trained on JHH (private)</td><td colspan="2">Trained on AbdomenAtlas-8K (public)</td></tr><tr><td>mDSC (%)</td><td>mNSD (%)</td><td>mDSC (%)</td><td>mNSD (%)</td></tr><tr><td>SwinUNETR [84]</td><td>84.8 ± 12.6</td><td>66.5 ± 14.8</td><td>86.5 ± 7.5</td><td>60.8 ± 10.9</td></tr><tr><td>UNETR [28]</td><td>78.6 ± 13.0</td><td>53.9 ± 14.1</td><td>86.6 ± 6.5</td><td>59.5 ± 10.4</td></tr><tr><td>U-Net [73]</td><td>84.8 ± 11.2</td><td>64.2 ± 14.8</td><td>87.3 ± 6.0</td><td>61.0 ± 10.2</td></tr><tr><td>SegResNet [12]</td><td>87.6 ± 6.60</td><td>64.9 ± 10.9</td><td>87.0 ± 6.1</td><td>60.4 ± 10.1</td></tr></table>
155
+
156
+ # 5 Discussion
157
+
158
+ Impact. The scientific community has generally agreed that large volumes of annotated data are required for developing effective AI algorithms [39, 2, 60, 24, 27, 101]. For example, developing
159
+
160
+ Foundation Models for healthcare has recently attracted much attention. A Foundation Model is an AI model that is trained on a large dataset and can be adapted to many specific downstream applications. This requires a large-scale, fully-annotated dataset. The currently available medical datasets are too small to represent the real data distribution in clinics [80, 13, 68]. The availability of large-scale, multi-center, fully-labeled data (summarized in Appendix Table 3) is one of the most significant cornerstones for both the development and evaluation of CADe systems [90, 103]. In recent years, the rise of imaging data archives [74, 57, 38, 3, 14, 16, 58, 97, 49, 99] and international competitions [45, 86, 78, 79, 30, 29, 1, 15, 4, 40] produced several publicly available datasets for benchmarking AI algorithms, but these datasets are usually small, contain partial labels, come from various scanners and protocols, and are therefore often limited in scope [22, 50, 102, 95, 96, 77, 67, 88, 42]. We anticipate that our AbdomenAtlas-8K can play an important role in enabling models to capture complex organ patterns, variations, and features across different imaging phases, modalities, and a wide range of populations. This has been partially evidenced in Appendix Table 6, wherein we generalize AI to CT volumes taken from different hospitals.
161
+
162
+ Limitation. Pseudo-labels have the potential to expedite tumor annotation procedures as well, but the risk of producing a large number of false positives is a significant concern. The presence of false positives could significantly increase the time annotators need to accept or reject a detection. In total, our AI models generate 51,852 tumor masks, including tumors in the colon, liver, hepatic vessels, pancreas, kidneys, and lung. However, using AbdomenAtlas-8K alone, we are not able to assess the AI's tumor detection performance due to the lack of comprehensive pathology reports or expert annotations describing the tumors in the majority of these publicly available CT volumes. To address this problem, we collected 392 tumor-free (private) CT volumes from Johns Hopkins Hospital to evaluate the false positives in the pseudo labels. Using kidney tumor detection as an example, 37 out of 392 CT volumes contain false positives $(\mathrm{FPR} = 9.4\%)$ , with a total of 161 false positives across these 37 CT volumes. Furthermore, we stratified the dataset based on the type of blood vessels, identifying average false positive rates of $11.83\%$ and $4.6\%$ for the venous and arterial vasculature, respectively. Therefore, while pseudo labels can be helpful, we anticipate that a more effective false-positive reduction method is needed to enable clinically practical tumor annotation, given the complexity of tumors compared with organs. As an extension, we plan to add tumor annotations to AbdomenAtlas-8K in three possible directions. First, we plan to recruit more experienced radiologists to revise tumor annotations. Second, we will incorporate pathology reports (based on biopsy results) into the human revision. These actions can reduce potential label biases and label errors from human annotators. Third, we will exploit synthetic data (tumors) that can produce enormous numbers of tumor examples and their precise masks for AI training and validation [32, 47, 33].
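+
+ The quoted false-positive statistics follow directly from the counts above; the snippet below simply reproduces that arithmetic and is illustrative only.
+
+ ```python
+ # Kidney-tumor false-positive check on 392 tumor-free JHH volumes (counts from the text).
+ volumes_total, volumes_with_fp, fp_lesions = 392, 37, 161
+ print(f"volume-level FPR = {volumes_with_fp / volumes_total:.1%}")             # ~9.4%
+ print(f"FP lesions per affected volume = {fp_lesions / volumes_with_fp:.1f}")  # ~4.4
+ ```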
163
+
164
+ In TotalSegmentator, the labels were largely generated by a single nnU-Net re-trained continually. Depending solely on nnU-Net could introduce a potential label bias favoring the nnU-Net architecture. This means that whenever TotalSegmentator is employed for benchmarking, nnU-Net would always outperform other segmentation architectures (e.g., UNETR, TransUNet, SwinUNETR, etc.). This observation has been made in several publications such as Huang et al. [35]. In contrast, our AbdomenAtlas-8K incorporates predictions from three different AI architectures, preventing bias towards one specific architecture [89]. However, such a solution comes with increased computational costs, and the performance of the AI architectures can vary. Taking an average of the predictions may result in the final outcome being pulled down by poorly performing AI architectures.
165
+
166
+ # 6 Conclusion
167
+
168
+ Our study shows the effectiveness of an active learning procedure that combines the expertise of an annotator with the capabilities of trained AI models. It not only deploys multiple AI models to detect prediction errors but also prompts annotators to revise these potential failures. Using this approach, we successfully annotated eight organs in 8,448 abdominal CT volumes within three weeks, expediting the annotation process by a staggering factor of 533. Furthermore, our experiments demonstrate that the interaction between humans and AI models can produce results comparable or even superior to those of a human annotator alone. This indicates that our approach can leverage the strengths of both humans and AI models to achieve accurate and efficient annotation. Leveraging our efficient annotation framework, we anticipate that larger-scale medical datasets of various modalities, organs, and abnormalities can be curated at a significantly reduced cost, ultimately contributing to the development of Foundation Models in the medical domain [91, 52, 61].
169
+
170
+ # Acknowledgments and Disclosure of Funding
171
+
172
+ This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research and the Patrick J. McGovern Foundation Award. We appreciate the effort of the MONAI Team to provide open-source code for the community. This work has partially utilized the GPUs provided by ASU Research Computing. We thank Elliot K. Fishman, Linda Chu, and Satomi Kawamoto for providing the JHH dataset for external validation; Yuxiang Lai for generating the 3D renderings of the segmentations; Xiaoxi Chen for reviewing and revising AI predictions; Seth Zonies and Andrew Wichmann for providing legal advice on the release of AbdomenAtlas-8K; and Yu-Cheng Chou, Jieneng Chen, Junfei Xiao, Wenxuan Li, and Xiaoding Yuan for their constructive suggestions at several stages of the project. The content and dataset of this paper are covered by pending patents.
173
+
174
+ # References
175
+
176
+ [1] Michela Antonelli, Annika Reinke, Spyridon Bakas, Keyvan Farahani, Bennett A Landman, Geert Litjens, Bjoern Menze, Olaf Ronneberger, Ronald M Summers, Bram van Ginneken, et al. The medical segmentation decathlon. arXiv preprint arXiv:2106.05735, 2021.
177
+ [2] Diego Ardila, Atilla P Kiraly, Sujeeth Bharadwaj, Bokyung Choi, Joshua J Reicher, Lily Peng, Daniel Tse, Mozziyar Etemadi, Wenxing Ye, Greg Corrado, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature medicine, 25(6):954-961, 2019.
178
+ [3] Samuel G Armato III, Geoffrey McLennan, Luc Bidaut, Michael F McNitt-Gray, Charles R Meyer, Anthony P Reeves, Binsheng Zhao, Denise R Aberle, Claudia I Henschke, Eric A Hoffman, et al. The lung image database consortium (lidc) and image database resource initiative (idri): a completed reference database of lung nodules on ct scans. Medical physics, 38(2):915–931, 2011.
179
+ [4] Ujjwal Baid, Satyam Ghodasara, Michel Bilello, Suyash Mohan, Evan Calabrese, Errol Colak, Keyvan Farahani, Jayashree Kalpathy-Cramer, Felipe C Kitamura, Sarthak Pati, et al. The rsna-asnr-miccai brats 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv preprint arXiv:2107.02314, 2021.
180
+ [5] Maria-Florina Balcan, Andrei Broder, and Tong Zhang. Margin based active learning. In International Conference on Computational Learning Theory, pages 35–50. Springer, 2007.
181
+ [6] Patrick Bilic, Patrick Christ, Hongwei Bran Li, Eugene Vorontsov, Avi Ben-Cohen, Georgios Kaissis, Adi Szeskin, Colin Jacobs, Gabriel Efrain Humpire Mamani, Gabriel Chartrand, et al. The liver tumor segmentation benchmark (lits). Medical Image Analysis, 84:102680, 2023.
182
+ [7] Patrick Bilic, Patrick Ferdinand Christ, Eugene Vorontsov, Grzegorz Chlebus, Hao Chen, Qi Dou, Chi-Wing Fu, Xiao Han, Pheng-Ann Heng, Jürgen Hesser, et al. The liver tumor segmentation benchmark (lits). arXiv preprint arXiv:1901.04056, 2019.
183
+ [8] Samuel Budd, Emma C Robinson, and Bernhard Kainz. A survey on active learning and human-in-the-loop deep learning for medical image analysis. Medical Image Analysis, page 102062, 2021.
184
+ [9] Aurelia Bustos, Antonio Pertusa, Jose-Maria Salinas, and Maria de la Iglesia-Vayá. Padchest: A large chest x-ray image dataset with multi-label annotated reports. Medical image analysis, 66:101797, 2020.
185
+ [10] Jieneng Chen, Yingda Xia, Jiawen Yao, Ke Yan, Jianpeng Zhang, Le Lu, Fakai Wang, Bo Zhou, Mingyan Qiu, Qihang Yu, et al. Towards a single unified model for effective detection, segmentation, and diagnosis of eight major cancers using a large collection of ct scans. arXiv preprint arXiv:2301.12291, 2023.
186
+ [11] Liangyu Chen, Yutong Bai, Siyu Huang, Yongyi Lu, Bihan Wen, Alan L Yuille, and Zongwei Zhou. Making your first choice: To address cold start problem in vision active learning. In Medical Imaging with Deep Learning. 2023.
187
+ [12] Xinze Chen, Guangliang Cheng, Yinghao Cai, Dayong Wen, and Heping Li. Semantic segmentation with modified deep residual networks. In Pattern Recognition: 7th Chinese Conference, CCPR 2016, Chengdu, China, November 5-7, 2016, Proceedings, Part II 7, pages 42-54. Springer, 2016.
188
+ [13] Linda C Chu and Elliot K Fishman. Deep learning for pancreatic cancer detection: current challenges and future strategies. The Lancet Digital Health, 2(6):e271-e272, 2020.
189
+
190
+ [14] Kenneth Clark, Bruce Vendt, Kirk Smith, John Freymann, Justin Kirby, Paul Koppel, Stephen Moore, Stanley Phillips, David Maffitt, Michael Pringle, et al. The cancer imaging archive (tcia): maintaining and operating a public information repository. Journal of digital imaging, 26(6):1045-1057, 2013.
191
+ [15] Errol Colak, Felipe C Kitamura, Stephen B Hobbs, Carol C Wu, Matthew P Lungren, Luciano M Prevedello, Jayashree Kalpathy-Cramer, Robyn L Ball, George Shih, Anouk Stein, et al. The rsna pulmonary embolism ct dataset. Radiology: Artificial Intelligence, 3(2):e200254, 2021.
192
+ [16] Karen L Crawford, Scott C Neu, and Arthur W Toga. The image and data archive at the laboratory of neuro imaging. Neuroimage, 124:1080-1083, 2016.
193
+ [17] A Criminisi. Microsoft research cambridge (msrc) object recognition image database (version 2.0), 2004.
194
+ [18] Aron Culotta and Andrew McCallum. Reducing labeling effort for structured prediction tasks. In AAAI, volume 5, pages 746-751, 2005.
195
+ [19] Ido Dagan and Sean P Engelson. Committee-based sampling for training probabilistic classifiers. In Machine Learning Proceedings 1995, pages 150-157. Elsevier, 1995.
196
+ [20] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
197
+ [21] Yang Deng, Ce Wang, Yuan Hui, Qian Li, Jun Li, Shiwei Luo, Mengke Sun, Quan Quan, Shuxin Yang, You Hao, et al. Ctspine1k: A large-scale dataset for spinal vertebrae segmentation in computed tomography. arXiv preprint arXiv:2105.14711, 2021.
198
+ [22] Nanqing Dong, Michael Kampffmeyer, Xiaodan Liang, Min Xu, Irina Voiculescu, and Eric P Xing. Towards robust medical image segmentation on small-scale data with incomplete labels. arXiv preprint arXiv:2011.14164, 2020.
199
+ [23] Matthias Eisenmann, Annika Reinke, Vivienne Weru, Minu Dietlinde Tizabi, Fabian Isensee, Tim J Adler, Sharib Ali, Vincent Andrearczyk, Marc Aubreville, Ujjwal Baid, et al. Why is the winner the best? arXiv preprint arXiv:2303.17719, 2023.
200
+ [24] Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115, 2017.
201
+ [25] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In International Conference on Machine Learning, pages 1183-1192. PMLR, 2017.
202
+ [26] Eli Gibson, Francesco Giganti, Yipeng Hu, Ester Bonmati, Steve Bandula, Kurinchi Gurusamy, Brian Davidson, Stephen P Pereira, Matthew J Clarkson, and Dean C Barratt. Automatic multi-organ segmentation on abdominal ct with dense v-networks. IEEE transactions on medical imaging, 37(8):1822–1834, 2018.
203
+ [27] Varun Gulshan, Lily Peng, Marc Coram, Martin C Stumpe, Derek Wu, Arunachalam Narayanaswamy, Subhashini Venugopalan, Kasumi Widner, Tom Madams, Jorge Cuadros, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama, 316(22):2402-2410, 2016.
204
+ [28] Ali Hatamizadeh, Yucheng Tang, Vishwesh Nath, Dong Yang, Andriy Myronenko, Bennett Landman, Holger R Roth, and Daguang Xu. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 574-584, 2022.
205
+ [29] Nicholas Heller, Fabian Isensee, Klaus H Maier-Hein, Xiaoshuai Hou, Chunmei Xie, Fengyi Li, Yang Nan, Guangrui Mu, Zhiyong Lin, Miofei Han, et al. The state of the art in kidney and kidney tumor segmentation in contrast-enhanced ct imaging: Results of the kits19 challenge. Medical image analysis, 67:101821, 2021.
206
+ [30] Nicholas Heller, Sean McSweeney, Matthew Thomas Peterson, Sarah Peterson, Jack Rickman, Bethany Stai, Resha TejPaul, Makinna Oestreich, Paul Blake, Joel Rosenberg, et al. An international challenge to use artificial intelligence to define the state-of-the-art in kidney and kidney tumor segmentation in ct imaging., 2020.
207
+ [31] Hideitsu Hino. Active learning: Problem settings and recent developments. arXiv preprint arXiv:2012.04225, 2020.
208
+
209
+ [32] Qixin Hu, Yixiong Chen, Junfei Xiao, Shuwen Sun, Jieneng Chen, Alan L Yuille, and Zongwei Zhou. Label-free liver tumor segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7422-7432, 2023.
210
+ [33] Qixin Hu, Junfei Xiao, Yixiong Chen, Shuwen Sun, Jie-Neng Chen, Alan Yuille, and Zongwei Zhou. Synthetic tumors make ai segment tumors better. NeurIPS Workshop on Medical Imaging meets NeurIPS, 2022.
211
+ [34] Yuhao Huang, Xin Yang, Lian Liu, Han Zhou, Ao Chang, Xinrui Zhou, Rusi Chen, Junxuan Yu, Jiongquan Chen, Chaoyu Chen, et al. Segment anything model for medical images? arXiv preprint arXiv:2304.14660, 2023.
212
+ [35] Ziyan Huang, Haoyu Wang, Zhongying Deng, Jin Ye, Yanzhou Su, Hui Sun, Junjun He, Yun Gu, Lixu Gu, Shaoting Zhang, et al. Stu-net: Scalable and transferable medical image segmentation models empowered by large-scale supervised pre-training. arXiv preprint arXiv:2304.06716, 2023.
213
+ [36] Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 590-597, 2019.
214
+ [37] Fabian Isensee, Paul F Jaeger, Simon AA Kohl, Jens Petersen, and Klaus H Maier-Hein. nnu-net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2):203–211, 2021.
215
+ [38] Clifford R Jack Jr, Matt A Bernstein, Nick C Fox, Paul Thompson, Gene Alexander, Danielle Harvey, Bret Borowski, Paula J Britson, Jennifer L. Whitwell, Chadwick Ward, et al. The alzheimer's disease neuroimaging initiative (adni): Mri methods. Journal of Magnetic Resonance Imaging: An Official Journal of the International Society for Magnetic Resonance in Medicine, 27(4):685-691, 2008.
216
+ [39] Saurabh Jha and Eric J Topol. Adapting to artificial intelligence: radiologists and pathologists as information specialists. Jama, 316(22):2353-2354, 2016.
217
+ [40] Yuanfeng Ji, Haotian Bai, Jie Yang, Chongjian Ge, Ye Zhu, Ruimao Zhang, Zhen Li, Lingyan Zhang, Wanling Ma, Xiang Wan, et al. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. arXiv preprint arXiv:2206.08023, 2022.
218
+ [41] Alistair EW Johnson, Tom J Pollard, Seth J Berkowitz, Nathaniel R Greenbaum, Matthew P Lungren, Chih-ying Deng, Roger G Mark, and Steven Horng. Mimic-cxr, a de-identified publicly available database of chest radiographs with free-text reports. Scientific data, 6(1):1-8, 2019.
219
+ [42] Mintong Kang, Bowen Li, Zengle Zhu, Yongyi Lu, Elliot K Fishman, Alan L Yuille, and Zongwei Zhou. Label-assemble: Leveraging multiple datasets with partial labels. In IEEE 20th International Symposium on Biomedical Imaging (ISBI). IEEE, 2023.
220
+ [43] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023.
221
+ [44] Johannes Kulick, Robert Lieck, Marc Toussaint, et al. Active learning of hyperparameters: An expected cross entropy criterion for active model selection. ArXiv e-prints, 2014.
222
+ [45] Bennett Landman, Zhoubing Xu, Juan Eugenio Igelsias, Martin Styner, Thomas Robin Langerak, and Arno Klein. Multi-atlas labeling beyond the cranial vault-workshop and challenge. 2017.
223
+ [46] Bennett Landman, Zhoubing Xu, J Igelsias, Martin Styner, T Langerak, and Arno Klein. Miccai multi-Atlas labeling beyond the cranial vault-workshop and challenge. In Proc. MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge, volume 5, page 12, 2015.
224
+ [47] Bowen Li, Yu-Cheng Chou, Shuwen Sun, Hualin Qiao, Alan Yuille, and Zongwei Zhou. Early detection and localization of pancreatic cancer by label-free tumor synthesis. MICCAI Workshop on Big Task Small Data, 1001-AI, 2023.
225
+ [48] Xin Li and Yuhong Guo. Adaptive active learning for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 859-866, 2013.
226
+ [49] Yuexiang Li and Linlin Shen. Skin lesion analysis towards melanoma detection using deep learning network. Sensors, 18(2):556, 2018.
227
+
228
+ [50] Pengbo Liu, Li Xiao, and S Kevin Zhou. Incremental learning for multi-organ segmentation with partially labeled datasets. arXiv preprint arXiv:2103.04526, 2021.
229
+ [51] Xiangde Luo, Wenjun Liao, Jianghong Xiao, Tao Song, Xiaofan Zhang, Kang Li, Guotai Wang, and Shaoting Zhang. Word: Revisiting organs segmentation in the whole abdominal region. arXiv preprint arXiv:2111.02403, 2021.
230
+ [52] Jun Ma and Bo Wang. Segment anything in medical images. arXiv preprint arXiv:2304.12306, 2023.
231
+ [53] Jun Ma, Yao Zhang, Song Gu, Xingle An, Zhihe Wang, Cheng Ge, Congcong Wang, Fan Zhang, Yu Wang, Yinan Xu, et al. Fast and low-gpu-memory abdomen ct organ segmentation: the flare challenge. Medical Image Analysis, 82:102616, 2022.
232
+ [54] Jun Ma, Yao Zhang, Song Gu, Cheng Ge, Shihao Ma, Adamo Young, Cheng Zhu, Kangkang Meng, Xin Yang, Ziyan Huang, et al. Unleashing the strengths of unlabeled data in pan-cancer abdominal organ quantification: the flare22 challenge. arXiv preprint arXiv:2308.05862, 2023.
233
+ [55] Jun Ma, Yao Zhang, Song Gu, Cheng Zhu, Cheng Ge, Yichi Zhang, Xingle An, Congcong Wang, Qiyuan Wang, Xin Liu, et al. Abdomenct-1k: Is abdominal organ segmentation a solved problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
234
+ [56] Dwarikanath Mahapatra, Behzad Bozortabar, Jean-Philippe Thiran, and Mauricio Reyes. Efficient active learning for image classification and segmentation using a sample selection and conditional generative adversarial network. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 580–588. Springer, 2018.
235
+ [57] Daniel S Marcus, Tracy H Wang, Jamie Parker, John G Csernansky, John C Morris, and Randy L Buckner. Open access series of imaging studies (oasis): cross-sectional mri data in young, middle aged, nondemented, and demented older adults. Journal of cognitive neuroscience, 19(9):1498-1507, 2007.
236
+ [58] Mojtaba Masoudi, Hamid-Reza Pourreza, Mahdi Saadatmand-Tarzjan, Noushin Eftekhari, Fateme Shafiee Zargar, and Masoud Pezeshki Rad. A new dataset of computed-tomography angiography images for computer-aided detection of pulmonary embolism. Scientific data, 5(1):1-9, 2018.
237
+ [59] Andrew Kachites McCallumzy and Kamal Nigamy. Employing em and pool-based active learning for text classification. In Proc. International Conference on Machine Learning, pages 359-367. CiteSeer, 1998.
238
+ [60] Scott Mayer McKinney, Marcin Sieniek, Varun Godbole, Jonathan Godwin, Natasha Antropova, Hutan Ashrafian, Trevor Back, Mary Chesus, Greg S Corrado, Ara Darzi, et al. International evaluation of an ai system for breast cancer screening. Nature, 577(7788):89-94, 2020.
239
+ [61] Michael Moor, Oishi Banerjee, Zahra Shakeri Hossein Abad, Harlan M Krumholz, Jure Leskovec, Eric J Topol, and Pranav Rajpurkar. Foundation models for generalist medical artificial intelligence. Nature, 616(7956):259-265, 2023.
240
+ [62] Prateek Munjal, Nasir Hayat, Munawar Hayat, Jamshid Sourati, and Shadab Khan. Towards robust and reproducible active learning using neural networks. ArXiv, abs/2002.09564, 2020.
241
+ [63] Vishwesh Nath, Dong Yang, Holger R Roth, and Daguang Xu. Warm start active learning with proxy labels and selection via semi-supervised fine-tuning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 297-308. Springer, 2022.
242
+ [64] Aude Oliva and Antonio Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International journal of computer vision, 42(3):145-175, 2001.
243
+ [65] S Park, LC Chu, EK Fishman, AL Yuille, B Vogelstein, KW Kinzler, KM Horton, RH Hruban, ES Zinreich, D Fadaei Fouladi, et al. Annotated normal ct data of the abdomen for deep learning: Challenges and strategies for implementation. Diagnostic and interventional imaging, 101(1):35-44, 2020.
244
+ [66] Amin Parvaneh, Ehsan Abbasnejad, Damien Teney, Gholamreza Reza Haffari, Anton Van Den Hengel, and Javen Qinfeng Shi. Active learning by feature mixing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12237-12246, 2022.
245
+ [67] Olivier Petit, Nicolas Thome, and Luc Soler. Iterative confidence relabeling with deep convnets for organ segmentation with partial labels. Computerized Medical Imaging and Graphics, page 101938, 2021.
246
+ [68] Deborah Plana, Dennis L Shung, Alyssa A Grimshaw, Anurag Saraf, Joseph JY Sung, and Benjamin H Kann. Randomized clinical trials of machine learning interventions in health care: a systematic review. JAMA Network Open, 5(9):e2233946-e2233946, 2022.
247
+
248
+ [69] Luciano M Prevedello, Safwan S Halabi, George Shih, Carol C Wu, Marc D Kohli, Falgun H Chokshi, Bradley J Erickson, Jayashree Kalpathy-Cramer, Katherine P Andriole, and Adam E Flanders. Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiology: Artificial Intelligence, 1(1):e180031, 2019.
249
+ [70] Pranav Rajpurkar and Matthew P Lungren. The current and future state of ai interpretation of medical images. New England Journal of Medicine, 388(21):1981-1990, 2023.
250
+ [71] Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang. A survey of deep active learning. arXiv preprint arXiv:2009.00236, 2020.
251
+ [72] Blaine Rister, Darwin Yi, Kaushik Shivakumar, Tomomi Nobashi, and Daniel L Rubin. Ct-org, a new dataset for multiple organ segmentation in computed tomography. Scientific Data, 7(1):1-9, 2020.
252
+ [73] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234-241. Springer, 2015.
253
+ [74] Holger R Roth, Le Lu, Amal Farag, Hoo-Chang Shin, Jiamin Liu, Evrim B Turkbey, and Ronald M Summers. Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In International conference on medical image computing and computer-assisted intervention, pages 556-564. Springer, 2015.
254
+ [75] Tobias Scheffer, Christian Decomain, and Stefan Wrobel. Active hidden markov models for information extraction. In International Symposium on Intelligent Data Analysis, pages 309-318. Springer, 2001.
255
+ [76] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489, 2017.
256
+ [77] Gonglei Shi, Li Xiao, Yang Chen, and S Kevin Zhou. Marginal loss and exclusion loss for partially supervised multi-organ segmentation. Medical Image Analysis, 70:101979, 2021.
257
+ [78] George Shih, Carol C Wu, Safwan S Halabi, Marc D Kohli, Luciano M Prevedello, Tessa S Cook, Arjun Sharma, Judith K Amorosa, Veronica Arteaga, Maya Galperin-Aizenberg, et al. Augmenting the national institutes of health chest radiograph dataset with expert annotations of possible pneumonia. *Radiology: Artificial Intelligence*, 1(1):e180041, 2019.
258
+ [79] Amber L Simpson, Michela Antonelli, Spyridon Bakas, Michel Bilello, Keyvan Farahani, Bram Van Ginneken, Annette Kopp-Schneider, Bennett A Landman, Geert Litjens, Bjoern Menze, et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063, 2019.
259
+ [80] Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, and Eyal Klang. Convolutional neural networks for radiologic images: a radiologist's guide. Radiology, 290(3):590-606, 2019.
260
+ [81] Jamshid Sourati, Ali Gholipour, Jennifer G Dy, Sila Kurugol, and Simon K Warfield. Active deep learning with fisher information for patch-wise semantic segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 83-91. Springer, 2018.
261
+ [82] Jamshid Sourati, Ali Gholipour, Jennifer G Dy, Xavier Tomas-Fernandez, Sila Kurugol, and Simon K Warfield. Intelligent labeling based on fisher information for medical image segmentation using deep learning. IEEE transactions on medical imaging, 38(11):2642-2653, 2019.
262
+ [83] Nima Tajbakhsh, Laura Jeyaseelan, Qian Li, Jeffrey N Chiang, Zhihao Wu, and Xiaowei Ding. Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis, page 101693, 2020.
263
+ [84] Yucheng Tang, Dong Yang, Wenqi Li, Holger R Roth, Bennett Landman, Daguang Xu, Vishwesh Nath, and Ali Hatamizadeh. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20730-20740, 2022.
264
+ [85] Emily B Tsai, Scott Simpson, Matthew P Lungren, Michelle Hershman, Leonid Roshkovan, Errol Colak, Bradley J Erickson, George Shih, Anouk Stein, Jayashree Kalpathy-Cramer, et al. The rsna international Covid-19 open radiology database (ricord). Radiology, 299(1):E204-E213, 2021.
265
+
266
+ [86] Vanya V Valindria, Nick Pawlowski, Martin Rajchl, Ioannis Lavdas, Eric O Aboagye, Andrea G Rockall, Daniel Rueckert, and Ben Glocker. Multi-modal learning from unpaired images: Application to multi-organ segmentation in ct and mri. In 2018 IEEE winter conference on applications of computer vision (WACV), pages 547-556. IEEE, 2018.
267
+ [87] Jingwen Wang, Yuguang Yan, Yubing Zhang, Guiping Cao, Ming Yang, and Michael K Ng. Deep reinforcement active learning for medical image classification. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2020: 23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part I 23, pages 33-42. Springer, 2020.
268
+ [88] Li Wang, Dong Li, Yousong Zhu, Lu Tian, and Yi Shan. Cross-dataset collaborative learning for semantic segmentation. arXiv preprint arXiv:2103.11351, 2021.
269
+ [89] Jakob Wasserthal, Manfred Meyer, Hanns-Christian Breit, Joshy Cyriac, Shan Yang, and Martin Segeroth. Totalsegmentator: robust segmentation of 104 anatomical structures in ct images. arXiv preprint arXiv:2208.05868, 2022.
270
+ [90] Martin J Willemink, Wojciech A Koszek, Cailin Hardell, Jie Wu, Dominik Fleischmann, Hugh Harvey, Les R Folio, Ronald M Summers, Daniel L Rubin, and Matthew P Lungren. Preparing medical imaging data for machine learning. Radiology, 295(1):4-15, 2020.
271
+ [91] Martin J Willemink, Holger R Roth, and Veit Sandfort. Toward foundational deep learning models for medical imaging in the new era of transformer networks. Radiology: Artificial Intelligence, 4(6):e210284, 2022.
272
+ [92] Yingda Xia, Qihang Yu, Linda Chu, Satomi Kawamoto, Seyoun Park, Fengze Liu, Jieneng Chen, Zhuotun Zhu, Bowen Li, Zongwei Zhou, et al. The felix project: Deep networks to detect pancreatic neoplasms. medRxiv, 2022.
273
+ [93] Yingda Xia, Qihang Yu, Linda Chu, Satomi Kawamoto, Seyoun Park, Fengze Liu, Jieneng Chen, Zhuotun Zhu, Bowen Li, Zongwei Zhou, Yongyi Lu, Yan Wang, Wei Shen, Lingxi Xie, Yuyin Zhou, Daniel Fouladi, Shahab Shayesteh, Scott Jefferson Graves, Alejandra Blanco, Eva Zinreich, Ken Kinzler, Ralph Hruban, Bert Vogelstein, Elliot Fishman, and Alan Yuille. Ai algorithms can assist radiologists in early detection of pancreatic neoplasms through venous and arterial ct imaging. In Radiological Society of North America (RSNA), 2022.
274
+ [94] Binhui Xie, Longhui Yuan, Shuang Li, Chi Harold Liu, and Xinjing Cheng. Towards fewer annotations: Active learning via region impurity and prediction uncertainty for domain adaptive semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8068-8078, 2022.
275
+ [95] Ke Yan, Jinzheng Cai, Adam P Harrison, Dakai Jin, Jing Xiao, and Le Lu. Universal lesion detection by learning from multiple heterogeneously labeled datasets. arXiv preprint arXiv:2005.13753, 2020.
276
+ [96] Ke Yan, Jinzheng Cai, Youjing Zheng, Adam P Harrison, Dakai Jin, You-bao Tang, Yu-Xing Tang, Lingyun Huang, Jing Xiao, and Le Lu. Learning from multiple datasets with heterogeneous and partial labels for universal lesion detection in ct. IEEE Transactions on Medical Imaging, 2020.
277
+ [97] Ke Yan, Xiaosong Wang, Le Lu, and Ronald M Summers. Deeplesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning. Journal of medical imaging, 5(3):036501, 2018.
278
+ [98] Lin Yang, Yizhe Zhang, Jianxu Chen, Siyuan Zhang, and Danny Z Chen. Suggestive annotation: A deep active learning framework for biomedical image segmentation. In International conference on medical image computing and computer-assisted intervention, pages 399-407. Springer, 2017.
279
+ [99] Yang Yang, Xueyan Mei, Philip Robson, Brett Marinelli, Mingqian Huang, Amish Doshi, Adam Jacobi, Katherine Link, Thomas Yang, Chendi Cao, Ying Wang, Hayit Greenspan, Timothy Deyer, and Zahi Fayad. Radimagenet: A large-scale radiologic dataset for enhancing deep learning transfer learning research, 2021.
280
+ [100] Yuan Yao, Fengze Liu, Zongwei Zhou, Yan Wang, Wei Shen, Alan Yuille, and Yongyi Lu. Unsupervised domain adaptation through shape modeling for medical image segmentation. arXiv preprint arXiv:2207.02529, 2022.
281
+ [101] Alan L Yuille and Chenxi Liu. Deep nets: What have they ever done for vision? International Journal of Computer Vision, 129(3):781-802, 2021.
282
+
283
+ [102] Jianpeng Zhang, Yutong Xie, Yong Xia, and Chunhua Shen. Dodnet: Learning to segment multi-organ and tumors from multiple partially labeled datasets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1195–1204, 2021.
284
+ [103] Zongwei Zhou. Towards Annotation-Efficient Deep Learning for Computer-Aided Diagnosis. PhD thesis, Arizona State University, 2021.
285
+ [104] Zongwei Zhou, Michael B Gotway, and Jianming Liang. Interpreting medical images. In Intelligent Systems in Medicine and Health, pages 343-371. Springer, 2022.
286
+ [105] Zongwei Zhou, Jae Shin, Ruibin Feng, R Todd Hurst, Christopher B Kendall, and Jianming Liang. Integrating active learning and transfer learning for carotid intima-media thickness video interpretation. Journal of digital imaging, 32(2):290-299, 2019.
287
+ [106] Zongwei Zhou, Jae Shin, Lei Zhang, Suryakanth Gurudu, Michael Gotway, and Jianming Liang. Finetuning convolutional neural networks for biomedical image analysis: actively and incrementally. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7340-7351, 2017.
288
+ [107] Zongwei Zhou, Jae Y Shin, Suryakanth R Gurudu, Michael B Gotway, and Jianming Liang. Active, continual fine tuning of convolutional neural networks for reducing annotation efforts. Medical image analysis, 71:101997, 2021.
abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a3aac494812a3bc359a085d84393c23cf9b6b8f22119903a050f245e680b426f
3
+ size 478777
abdomenatlas8kannotating8000ctvolumesformultiorgansegmentationinthreeweeks/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:55b7787ff24322c663b688787ed527060c7d16ace7fda49bd6a164d6a64454c1
3
+ size 424518
abidebythelawandfollowtheflowconservationlawsforgradientflows/d4cb9e69-f570-490a-b038-de3edb695979_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:703b2b98702c63219262d0164b96c3eecd5df5a061653d0c18a7d1894d700419
3
+ size 90753
abidebythelawandfollowtheflowconservationlawsforgradientflows/d4cb9e69-f570-490a-b038-de3edb695979_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b62332250a3fb6d063163c30228a31a02d04bfd45d292f7b8c7b5385dea44484
3
+ size 107306
abidebythelawandfollowtheflowconservationlawsforgradientflows/d4cb9e69-f570-490a-b038-de3edb695979_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b2115b5a5f7c0a83f85f406c02d57b53e3fa9e894134b3514d60a6ffac85438c
3
+ size 625549
abidebythelawandfollowtheflowconservationlawsforgradientflows/full.md ADDED
@@ -0,0 +1,380 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Abide by the Law and Follow the Flow: Conservation Laws for Gradient Flows
2
+
3
+ Sibylle Marcotte
4
+
5
+ ENS - PSL Univ.
6
+
7
+ sibylle.marcotte@ens.fr
8
+
9
+ Rémi Gribonval
10
+
11
+ Univ Lyon, EnsL, UCBL, CNRS, Inria, LIP,
12
+
13
+ remi.gribonval@inria.fr
14
+
15
+ Gabriel Peyré
16
+
17
+ CNRS, ENS - PSL Univ.
18
+
19
+ gabriel.peyre@ens.fr
20
+
21
+ # Abstract
22
+
23
+ Understanding the geometric properties of gradient descent dynamics is a key ingredient in deciphering the recent success of very large machine learning models. A striking observation is that trained over-parameterized models retain some properties of the optimization initialization. This "implicit bias" is believed to be responsible for some favorable properties of the trained models and could explain their good generalization properties. The purpose of this article is threefold. First, we rigorously expose the definition and basic properties of "conservation laws", that define quantities conserved during gradient flows of a given model (e.g. of a ReLU network with a given architecture) with any training data and any loss. Then we explain how to find the maximal number of independent conservation laws by performing finite-dimensional algebraic manipulations on the Lie algebra generated by the Jacobian of the model. Finally, we provide algorithms to: a) compute a family of polynomial laws; b) compute the maximal number of (not necessarily polynomial) independent conservation laws. We provide showcase examples that we fully work out theoretically. Besides, applying the two algorithms confirms for a number of ReLU network architectures that all known laws are recovered by the algorithm, and that there are no other independent laws. Such computational tools pave the way to understanding desirable properties of optimization initialization in large machine learning models.
24
+
25
+ # 1 Introduction
26
+
27
+ State-of-the-art approaches in machine learning rely on the conjunction of gradient-based optimization with vastly "over-parameterized" architectures. A large body of empirical [30] and theoretical [5] works suggest that, despite the ability of these models to almost interpolate the input data, they are still able to generalize well. Analyzing the training dynamics of these models is thus crucial to gain a better understanding of this phenomenon. Of particular interest is to understand what properties of the initialization are preserved during the dynamics, which is often loosely referred to as being an "implicit bias" of the training algorithm. The goal of this article is to make this statement precise, by properly defining maximal sets of such "conservation laws", by linking these quantities to algebraic computations (namely a Lie algebra) associated with the model parameterization (in our framework, this parameterization is embodied by a re-parameterization mapping $\phi$ ), and finally by exhibiting algorithms to implement these computations in SageMath [29].
28
+
29
+ Over-parameterized model Modern machine learning practitioners and researchers have found that over-parameterized neural networks (with more parameters than training data points), which are often trained until perfect interpolation, have impressive generalization properties [30,5]. This performance seemingly contradicts classical learning theory [25], and a large part of the theoretical deep learning literature aims at explaining this puzzle. The choice of the optimization algorithm is crucial to the model generalization performance [10, 21, 14], thus inducing an implicit bias.
30
+
31
+ Implicit bias The terminology "implicit bias" informally refers to properties of trained models which are induced by the optimization procedure, typically some form of regularization [22]. For gradient descent, in simple cases such as scalar linear neural networks or two-layer networks with a single neuron, it is actually possible to compute the implicit bias in closed form, and it induces some approximate or exact sparsity regularization [10]. Another interesting case is logistic classification on separable data, where the implicit bias selects the max-margin classifier both for linear models [26] and for two-layer neural networks in the mean-field limit [8]. A key hypothesis for making the implicit bias explicit is often that the Riemannian metric associated to the over-parameterization is either of Hessian type [10, 3], or can be somehow converted to be of Hessian type [3], which is seemingly always a very strong constraint. For example, even for simple two-layer linear models (i.e., matrix factorization) with more than a single hidden neuron, the Hessian-type assumption does not hold, and no closed form is known for the implicit bias [11]. The work of [17] gives conditions on the over-parameterization for this to be possible (for instance, certain Lie brackets should vanish). These conditions are (as could be expected) stronger than those required to apply Frobenius theory, as we do in the present work to retrieve conservation laws.
32
+
33
+ Conservation laws Finding functions conserved during gradient flow optimization of neural networks (a continuous limit of gradient descent often used to model the optimization dynamics) is particularly useful to better understand the flow behavior. One can see conservation laws as a "weak" form of implicit bias: they explain, among a possibly infinite set of minimizers, which properties (e.g. in terms of sparsity, low rank, etc.) are being favored by the dynamics. If there are enough conservation laws, one has an exact description of the dynamics (see Section 3.4), and in some cases, one can even determine explicitly the implicit bias. Otherwise, one can still predict what properties of the initialization are retained at convergence, and possibly leverage this knowledge. For example, in the case of linear neural networks, certain balancedness properties are satisfied and provide a class of conserved functions [24, 9, 11, 2, 15, 28, 19]. These conservation laws enable for instance to prove the global convergence of the gradient flow under some assumptions. We detail these laws in Proposition 4.1. A subset of these "balancedness" laws still holds in the case of a ReLU activation [9], which reflects the rescaling invariance of these networks (see Section 4 for more details). More generally, such conservation laws bear connections with the invariances of the model [16]: to each 1-parameter group of transformations preserving the loss, one can associate a conserved quantity, which is in some sense analogous to Noether's theorem [23, 12]. Similar reasoning is used by [31] to show the influence of initialization on convergence and generalization performance of the neural network. Our work is somehow complementary to this line of research: instead of assuming a priori known symmetries, we directly analyze the model and give access to conservation laws using algebraic computations. For matrix factorization as well as for certain ReLU network architectures, this allows us to show that the conservation laws reported in the literature are complete (there are no other independent quantities that would be preserved by all gradient flows).
34
+
35
+ # Contributions
36
+
37
+ We formalize the notion of a conservation law, a quantity preserved through all gradient flows given a model architecture (e.g. a ReLU neural network with prescribed layers). Our main contributions are:
38
+
39
+ - to show that for several classical losses, characterizing conservation laws for deep linear (resp. shallow ReLU) networks boils down to analyzing a finite dimensional space of vector fields;
40
+ - to propose an algorithm (coded in SageMath) identifying polynomial conservation laws on linear / ReLU network architectures; it identifies all known laws on selected examples;
41
+ - to formally define the maximum number of (not necessarily polynomial) independent conservation laws and characterize it a) theoretically via Lie algebra computations; and b) practically via an algorithm (coded in SageMath) computing this number on worked examples;
42
+ - to illustrate that in certain settings these findings allow to rewrite an over-parameterized flow as an "intrinsic" low-dimensional flow;
43
+ - to highlight that the cost function associated to the training of linear and ReLU networks, shallow or deep, with various losses (quadratic and more) fully fits the proposed framework.
44
+
45
+ A consequence of our results is to show for the first time that conservation laws commonly reported in the literature are maximal: there is no other independent preserved quantity (see Propositions 4.2 and 4.3, Corollary 4.4, and Section 4.2).
46
+
47
+ # 2 Conservation Laws for Gradient Flows
48
+
49
+ After some reminders on gradient flows, we formalize the notion of conservation laws.
50
+
51
+ # 2.1 Gradient dynamics
52
+
53
+ We consider learning problems, where we denote $x_{i} \in \mathbb{R}^{m}$ the features and $y_{i} \in \mathcal{Y}$ the targets (for regression, typically with $\mathcal{Y} = \mathbb{R}^n$ ) or labels (for classification) in the case of supervised learning, while $y_{i}$ can be considered constant for unsupervised/self-supervised learning. We denote $X := (x_{i})_{i}$ and $Y := (y_{i})_{i}$ . Prediction is performed by a parametric mapping $g(\theta, \cdot): \mathbb{R}^m \to \mathbb{R}^n$ (for instance a neural network) which is trained by empirically minimizing over parameters $\theta \in \Theta \subseteq \mathbb{R}^D$ the cost
54
+
55
+ $$
56
+ \mathcal{E}_{X,Y}(\theta) := \sum_{i} \ell\left(g(\theta, x_{i}), y_{i}\right), \tag{1}
57
+ $$
58
+
59
+ where $\ell$ is the loss function. In practical examples with linear or ReLU networks, $\Theta$ is either $\mathbb{R}^D$ or an open set of "non-degenerate" parameters. The goal of this paper is to analyze what functions $h(\theta)$ are preserved during the gradient flow (the continuous time limit of gradient descent) of $\mathcal{E}_{X,Y}$ :
60
+
61
+ $$
62
+ \dot{\theta}(t) = -\nabla \mathcal{E}_{X,Y}(\theta(t)), \quad \text{with } \theta(0) = \theta_{\text{init}}. \tag{2}
63
+ $$
64
+
65
+ A priori, one can consider different "levels" of conservation, depending on whether $h$ is conserved: during the optimization of $\mathcal{E}_{X,Y}$ for a given loss $\ell$ and a given data set $(x_{i},y_{i})_{i}$; or, given a loss $\ell$, during the optimization of $\mathcal{E}_{X,Y}$ for any data set $(x_{i},y_{i})_{i}$. Note that using stochastic optimization methods and discrete gradients would break the exact preservation of the conservation laws, and only approximate conservation would hold, as remarked in [16].
66
+
67
+ # 2.2 Conserved functions
68
+
69
+ As they are based on gradient flows, conserved functions are first defined locally.
70
+
71
+ Definition 2.1 (Conservation through a flow). Consider an open subset $\Omega \subseteq \Theta$ and a vector field $\chi \in \mathcal{C}^1 (\Omega ,\mathbb{R}^D)$ . By the Cauchy-Lipschitz theorem, for each initial condition $\theta_{\mathrm{init}}$ , there exists a unique maximal solution $t\in [0,T_{\theta_{\mathrm{init}}})\mapsto \theta (t,\theta_{\mathrm{init}})$ of the ODE $\dot{\theta} (t) = \chi (\theta (t))$ with $\theta (0) = \theta_{\mathrm{init}}$ . A function $h:\Omega \subseteq \mathbb{R}^{D}\to \mathbb{R}$ is conserved on $\Omega$ through the vector field $\chi$ if $h(\theta (t,\theta_{\mathrm{init}})) = h(\theta_{\mathrm{init}})$ for each choice of $\theta_{\mathrm{init}}\in \Omega$ and every $t\in [0,T_{\theta_{\mathrm{init}}})$ . It is conserved on $\Omega$ through a subset $W\subset \mathcal{C}^1 (\Omega ,\mathbb{R}^D)$ if $h$ is conserved on $\Omega$ during all flows induced by all $\chi \in W$ .
72
+
73
+ In particular, one can adapt this definition to the flow induced by the cost (1).
74
+
75
+ Definition 2.2 (Conservation during the flow (2) with a given dataset). Consider an open subset $\Omega \subseteq \Theta$ and a dataset $(X,Y)$ such that $\mathcal{E}_{X,Y} \in \mathcal{C}^2(\Omega, \mathbb{R})$. A function $h: \Omega \subseteq \mathbb{R}^D \to \mathbb{R}$ is conserved on $\Omega$ during the flow (2) if it is conserved through the vector field $\chi(\cdot) \coloneqq \nabla \mathcal{E}_{X,Y}(\cdot)$.
76
+
77
+ Our goal is to study which functions are conserved during "all" flows defined by the ODE (2). This in turn leads to the following definition.
78
+
79
+ Definition 2.3 (Conservation during the flow (2) with "any" dataset). Consider an open subset $\Omega \subset \Theta$ and a loss $\ell(z, y)$ such that $\ell(\cdot, y)$ is $\mathcal{C}^2$-differentiable for all $y \in \mathcal{Y}$. A function $h: \Omega \subseteq \mathbb{R}^D \to \mathbb{R}$ is conserved on $\Omega$ for any data set if, for each data set $(X, Y)$ such that $g(\cdot, x_i) \in \mathcal{C}^2(\Omega, \mathbb{R})$ for each $i$, the function $h$ is conserved on $\Omega$ during the flow (2). This leads us to introduce the family of vector fields:
80
+
81
+ $$
82
+ W_{\Omega}^{g} := \left\{ \chi(\cdot) : \exists X, Y, \ \forall i, \ g(\cdot, x_{i}) \in \mathcal{C}^{2}(\Omega, \mathbb{R}), \ \chi = \nabla \mathcal{E}_{X,Y} \right\} \subseteq \mathcal{C}^{1}(\Omega, \mathbb{R}^{D}) \tag{3}
83
+ $$
84
+
85
+ so that being conserved on $\Omega$ for any dataset is the same as being conserved on $\Omega$ through $W_{\Omega}^{g}$ .
86
+
87
+ The above definitions are local and conditioned on a choice of open set of parameters $\Omega \subset \Theta$ . We are rather interested in functions defined on the whole parameter space $\Theta$ , hence the following definition.
88
+
89
+ Definition 2.4. A function $h: \Theta \mapsto \mathbb{R}$ is locally conserved on $\Theta$ for any data set if for each open subset $\Omega \subseteq \Theta$ , $h$ is conserved on $\Omega$ for any data set.
90
+
91
+ A basic property of $\mathcal{C}^1$ conserved functions (whose proof can be found in Appendix A) corresponds to an "orthogonality" between their gradient and the considered vector fields.
92
+
93
+ Proposition 2.5. Given a subset $W \subset \mathcal{C}^1(\Omega, \mathbb{R}^D)$ , its trace at $\theta \in \Omega$ is defined as the linear space
94
+
95
+ $$
96
+ W(\theta) := \operatorname{span}\left\{ \chi(\theta) : \chi \in W \right\} \subseteq \mathbb{R}^{D}. \tag{4}
97
+ $$
98
+
99
+ A function $h\in \mathcal{C}^1 (\Omega ,\mathbb{R})$ is conserved on $\Omega$ through $W$ if, and only if, $\nabla h(\theta)\perp W(\theta)$ for all $\theta \in \Omega$.
100
+
101
+ Therefore, combining Proposition 2.5 and Definition 2.4, the object of interest to study locally conserved functions is the union of the traces
102
+
103
+ $$
104
+ W_{\theta}^{g} := \bigcup \left\{ W_{\Omega}^{g}(\theta) : \Omega \subseteq \Theta \text{ with } \Omega \text{ a neighborhood of } \theta \right\}. \tag{5}
105
+ $$
106
+
107
+ Corollary 2.6. A function $h: \Theta \mapsto \mathbb{R}$ is locally conserved on $\Theta$ for any data set if and only if $\nabla h(\theta) \perp W_{\theta}^{g}$ for all $\theta \in \Theta$ .
108
+
109
+ It will soon be shown (cf. Theorem 2.14) that $W_{\theta}^{g}$ can be rewritten as the trace $W(\theta)$ of a simple finite-dimensional functional space $W$. Meanwhile, we keep the specific notation. For the moment, this set is explicitly characterized via the following proposition (whose proof can be found in Appendix B).
110
+
111
+ Proposition 2.7. Assume that for each $y \in \mathcal{Y}$ the loss $\ell(z, y)$ is $\mathcal{C}^2$-differentiable with respect to $z \in \mathbb{R}^n$. For each $\theta \in \Theta$ we have:
112
+
113
+ $$
114
+ W_{\theta}^{g} = \operatorname*{span}_{(x, y) \in \mathcal{X}_{\theta} \times \mathcal{Y}} \left\{ \left[\partial_{\theta} g(\theta, x)\right]^{\top} \nabla_{z} \ell(g(\theta, x), y) \right\}
115
+ $$
116
+
117
+ where $\mathcal{X}_{\theta}$ is the set of data points $x$ such that $g(\cdot ,x)$ is $\mathcal{C}^2$ -differentiable in the neighborhood of $\theta$ .
118
+
119
+ Example 2.8. As a first simple example, consider a two-layer linear neural network in dimension 1 (both for the input and output), with a single neuron. For such an - admittedly trivial - architecture, the parameter is $\theta = (u,v)\in \mathbb{R}^2$ and the model reads $g(\theta ,x) = uvx$. One can directly check that the function $h(u,v) = u^{2} - v^{2}$ is locally conserved on $\mathbb{R}^2$ for any data set. Indeed, in that case $\nabla h(u,v) = (2u, - 2v)^{\top}\perp W_{\theta}^{g} = \operatorname*{span}_{(x,y)\in \mathbb{R}\times \mathcal{Y}}\{(vx,ux)^{\top}\nabla_{z}\ell (g(\theta ,x),y)\} = \mathbb{R}\cdot (v,u)^{\top}$, given that the gradient $\nabla_z\ell (g(\theta ,x),y)$ is an arbitrary scalar.
120
+
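+ The short numerical check below is ours (it is not part of the paper or of the code [18]); it discretizes the flow (2) for this toy model with an arbitrary quadratic loss and data set, and monitors $h(u,v) = u^2 - v^2$. The residual drift comes from the Euler discretization, in line with the remark above that discrete gradient steps only preserve conservation laws approximately.
+
+ ```python
+ import numpy as np
+
+ # Quick numerical check (ours; arbitrary data, quadratic loss): discretize the flow (2)
+ # for g((u, v), x) = u*v*x with a small Euler step and monitor h(u, v) = u^2 - v^2.
+ rng = np.random.default_rng(0)
+ xs = rng.normal(size=20)
+ ys = 3.0 * xs                          # targets from a "teacher" slope of 3
+ u, v = 0.5, -1.2                       # theta_init
+ h0 = u ** 2 - v ** 2
+ dt = 1e-4                              # Euler discretization of the continuous-time flow
+
+ for _ in range(200_000):
+     r = u * v * xs - ys                # residuals g(theta, x_i) - y_i
+     grad_u, grad_v = np.sum(r * v * xs), np.sum(r * u * xs)
+     u, v = u - dt * grad_u, v - dt * grad_v
+
+ print("u * v ->", u * v)                           # approaches the teacher slope
+ print("drift of h:", abs(u ** 2 - v ** 2 - h0))    # tiny: h is conserved by the flow
+ ```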
121
+ In this example we obtain a simple expression of $W_{\theta}^{g}$; however, in general it is not possible to obtain such a simple expression from Proposition 2.7. We will show that in some cases, it is possible to express $W_{\theta}^{g}$ as the trace $W(\theta)$ of a simple finite-dimensional space $W$ (cf. Theorem 2.14).
122
+
123
+ # 2.3 Reparametrization
124
+
125
+ To make the mathematical analysis tractable and provide an algorithmic procedure to determine these functions, our fundamental hypothesis is that the model $g(\theta, x)$ can be (locally) factored via a reparametrization $\phi$ as $f(\phi(\theta), x)$ . We require that the model $g(\theta, x)$ satisfies the following central assumption.
126
+
127
+ Assumption 2.9 (Local reparameterization). There exists $d$ and $\phi \in \mathcal{C}^2 (\Theta ,\mathbb{R}^d)$ such that: for each parameter $\theta_0$ in the open set $\Theta \subseteq \mathbb{R}^{D}$, for each $x\in \mathcal{X}$ such that $\theta \mapsto g(\theta ,x)$ is $\mathcal{C}^2$ in a neighborhood of $\theta_0$, there is a neighborhood $\Omega$ of $\theta_0$ and $f(\cdot ,x)\in \mathcal{C}^2 (\phi (\Omega),\mathbb{R}^n)$ such that
128
+
129
+ $$
130
+ \forall \theta \in \Omega , \quad g (\theta , x) = f (\phi (\theta), x). \tag {6}
131
+ $$
132
+
133
+ Note that if the model $g(\cdot, x)$ is smooth on $\Omega$ then (6) is always satisfied with $\phi := \mathrm{id}$ and $f(\cdot, x) := g(\cdot, x)$ , yet this trivial factorization fails to capture the existence and number of conservation laws as studied in this paper. This suggests that, among all factorizations shaped as (6), there may be a notion of an optimal one.
134
+
135
+ Example 2.10. (Factorization for linear neural networks) In the two-layer case, with $r$ neurons, denoting $\theta = (U,V) \in \mathbb{R}^{n \times r} \times \mathbb{R}^{m \times r}$ (so that $D = (n + m)r$ ), we can factorize $g(\theta,x) := UV^{\top}x$ by the reparametrization $\phi(\theta) := UV^{\top} \in \mathbb{R}^{n \times m}$ using $f(\phi,x) = \phi \cdot x$ . More generally for $q$ layers, with $\theta = (U_1,\dots ,U_q)$ , we can still factorize $g(\theta,x) := U_1 \dots U_qx$ using $\phi(\theta) := U_1 \dots U_q$ and the same $f$ . This factorization is globally valid on $\Omega = \Theta = \mathbb{R}^D$ since $f(\cdot,x)$ does not depend on $\theta_0$ .
136
+
137
+ The notion of locality of the factorization (6) is illustrated by the next example.
138
+
139
+ Example 2.11 (Factorization for two-layer ReLU networks). Consider $g(\theta, x) = \left( \sum_{j=1}^{r} u_{k,j} \sigma(\langle v_j, x \rangle + b_j) + c_k \right)_{k=1}^n$, with $\sigma(t) := \max(t,0)$ the ReLU activation function and $v_j \in \mathbb{R}^m$, $u_{k,j} \in \mathbb{R}$, $b_j, c_k \in \mathbb{R}$. Then, denoting $\theta = (U, V, b, c)$ with $U = (u_{k,j})_{k,j} := (u_1, \dots, u_r) \in \mathbb{R}^{n \times r}$, $V = (v_1, \dots, v_r) \in \mathbb{R}^{m \times r}$, $b = (b_1, \dots, b_r)^\top \in \mathbb{R}^r$ and $c = (c_1, \dots, c_n) \in \mathbb{R}^n$ (so that $D = (n + m + 1)r + n$), we rewrite $g(\theta, x) = \sum_{j=1}^{r} u_j \varepsilon_{j,x} (v_j^\top x + b_j) + c$ where, given $x$, $\varepsilon_{j,x} = \mathbb{1}(v_j^\top x + b_j > 0)$ is piecewise constant with respect to $\theta$. Consider $\theta^0 = (U^0, V^0, b^0, c^0) \in \mathbb{R}^D$ where $V^0 = (v_1^0, \dots, v_r^0)$ and $b^0 = (b_1^0, \dots, b_r^0)^\top$. Then the set $\mathcal{X}_{\theta^0}$ introduced in Proposition 2.7 is $\mathcal{X}_{\theta^0} = \mathbb{R}^m \setminus \cup_j \{x : (v_j^0)^\top x + b_j^0 = 0\}$. Let $x \in \mathcal{X}_{\theta^0}$. Then on any domain $\Omega \subset \mathbb{R}^D$ such that $\theta^0 \in \Omega$ and $\varepsilon_{j,x}(\theta) := \mathbb{1}(v_j^\top x + b_j > 0)$ is constant over $\theta \in \Omega$, the model $g(\theta, x)$ can be factorized by the reparametrization $\phi(\theta) = ((u_j v_j^\top, u_j b_j)_{j=1}^r, c)$. In particular, in the case without bias ($(b,c) = (0,0)$), the reparametrization is defined by $\phi(\theta) = (\phi_j)_{j=1}^r$ where $\phi_j = \phi_j(\theta) := u_j v_j^\top \in \mathbb{R}^{n \times m}$ (here $d = rmn$) using $f(\phi,x) = \sum_j \varepsilon_{j,x} \phi_j x$: the reparametrization $\phi(\theta)$ contains $r$ matrices of size $n \times m$ (each of rank at most one) associated with a "local" $f(\cdot, x)$ valid in a neighborhood of $\theta$. A similar factorization is possible for deeper ReLU networks [27], as further discussed in the proof of Theorem 2.14 in Appendix C.
140
+
141
+ Combining Proposition 2.7 with the chain rule, we get a new characterization of $W_{\theta}^{g}$:
142
+
143
+ Proposition 2.12. Assume that the loss $\ell(z, y)$ is $\mathcal{C}^2$-differentiable with respect to $z$. We recall (cf. (5)) that $W_{\theta}^{g} \coloneqq \cup_{\Omega \subseteq \Theta : \Omega \text{ is open and } \Omega \ni \theta} W_{\Omega}^{g}(\theta)$. Under Assumption 2.9, for all $\theta \in \Theta$:
144
+
145
+ $$
146
+ W _ {\theta} ^ {g} = \partial \phi (\theta) ^ {\top} W _ {\phi (\theta)} ^ {f} \tag {7}
147
+ $$
148
+
149
+ with $\partial \phi (\theta)\in \mathbb{R}^{d\times D}$ the Jacobian of $\phi$ and $W_{\phi (\theta)}^f\coloneqq \operatorname *{span}_{(x,y)\in \mathcal{X}_\theta \times \mathcal{Y}}\{\partial f^x (\phi (\theta))^{\top}\nabla_z\ell (g(\theta ,x),y)\}$ where $f^{x}(\cdot)\coloneqq f(\cdot ,x)$
150
+
151
+ We show in Section 2.4 that, under mild assumptions on the loss $\ell$ , $W_{\phi(\theta)}^f = \mathbb{R}^d$ , so that Proposition 2.12 yields $W_\theta^g = \mathrm{range}(\partial \phi(\theta)^\top)$ . Then by Corollary 2.6, a function $h$ that is locally conserved on $\Theta$ for any data set is entirely characterized via the kernel of $\partial \phi(\theta)^\top$ : $\partial \phi(\theta)^\top \nabla h(\theta) = 0$ for all $\theta \in \Theta$ . The core of our analysis is then to analyze the (Lie algebraic) structure of $\mathrm{range}(\partial \phi(\cdot)^\top)$ .
152
+
153
+ # 2.4 From conserved functions to conservation laws
154
+
155
+ For linear and ReLU networks we show in Theorem 2.14 and Proposition 2.16 that under (mild) assumptions on the loss $\ell(\cdot, \cdot)$ , being locally conserved on $\Theta$ for any data set (according to Definition 2.4) is the same as being conserved (according to Definition 2.1) on $\Theta$ through the finite-dimensional subspace
156
+
157
+ $$
158
+ W_{\phi} := \operatorname{span}\left\{ \nabla \phi_{1}(\cdot), \dots, \nabla \phi_{d}(\cdot) \right\} = \left\{ \theta \mapsto \sum_{i} a_{i} \nabla \phi_{i}(\theta) : (a_{1}, \dots, a_{d}) \in \mathbb{R}^{d} \right\} \tag{8}
159
+ $$
160
+
161
+ where we write $\partial \phi (\theta)^{\top} = (\nabla \phi_{1}(\theta),\dots ,\nabla \phi_{d}(\theta))\in \mathbb{R}^{D\times d}$ with $\nabla \phi_{i}\in \mathcal{C}^{1}(\Theta ,\mathbb{R}^{D})$.
162
+
163
+ The following results (whose proofs can be found in Appendix C) establish that in some cases, the functions locally conserved for any data set are exactly the functions conserved through $W_{\phi}$.
164
+
165
+ Lemma 2.13. Assume that the loss $(z,y)\mapsto \ell (z,y)$ is $\mathcal{C}^2$ -differentiable with respect to $z\in \mathbb{R}^n$ and satisfies the condition:
166
+
167
+ $$
168
+ \underset{y \in \mathcal{Y}}{\operatorname{span}} \left\{ \nabla_{z} \ell(z, y) \right\} = \mathbb{R}^{n}, \quad \forall z \in \mathbb{R}^{n}. \tag{9}
169
+ $$
170
+
171
+ Then for linear neural networks (resp. for two-layer ReLU networks) and all $\theta \in \Theta$ we have $W_{\phi (\theta)}^{f} = \mathbb{R}^{d}$, with the reparametrization $\phi$ from Example 2.10 and $\Theta \coloneqq \mathbb{R}^D$ (resp. with $\phi$ from Example 2.11 and $\Theta$ consisting of all parameters $\theta$ of the network such that hidden neurons are associated with pairwise distinct "hyperplanes", cf Appendix C for details).
172
+
173
+ Condition (9) holds for classical losses $\ell$ (e.g. quadratic/logistic losses), as shown in Lemma C.2 in Appendix C. Note that the additional hypothesis of pairwise distinct hyperplanes for the two-layer ReLU case is a generic hypothesis and is usual (see e.g. the notion of twin neurons in [27]). The tools from Appendix C extend Theorem 2.14 beyond (deep) linear and shallow ReLU networks. An open problem is whether the conclusions of Lemma 2.13 still hold for deep ReLU networks.
174
+
175
+ Theorem 2.14. Under the same assumptions as in Lemma 2.13, we have that for linear neural networks, for all $\theta \in \Theta \coloneqq \mathbb{R}^D$:
176
+
177
+ $$
178
+ W _ {\theta} ^ {g} = W _ {\phi} (\theta). \tag {10}
179
+ $$
180
+
181
+ The same result holds for two-layer ReLU networks with $\phi$ from Example 2.11 and $\Theta$ the (open) set of all parameters $\theta$ such that hidden neurons are associated to pairwise distinct "hyperplanes".
182
+
183
+ This means, as claimed, that for linear and two-layer ReLU networks, being locally conserved on $\Theta$ for any data set exactly means being conserved on $\Theta$ through the finite-dimensional functional space $W_{\phi} \subseteq \mathcal{C}^{1}(\Theta, \mathbb{R}^{D})$. This motivates the following definition.
184
+
185
+ Definition 2.15. A real-valued function $h$ is a conservation law of $\phi$ if it is conserved through $W_{\phi}$ .
186
+
187
+ Proposition 2.5 yields the following intermediate result.
188
+
189
+ Proposition 2.16. $h\in \mathcal{C}^1 (\Omega ,\mathbb{R})$ is a conservation law for $\phi$ if and only if
190
+
191
+ $$
192
+ \nabla h (\theta) \perp \nabla \phi_ {j} (\theta), \forall \theta \in \Omega , \forall j \in \{1, \dots , d \}.
193
+ $$
194
+
195
+ Thanks to Theorem 2.14, the space $W_{\phi}$ defined in (8) introduces a much simpler proxy to express $W_{\theta}^{g}$ as a trace of a subset of $\mathcal{C}^1(\Theta, \mathbb{R}^D)$ . Moreover, when $\phi$ is $\mathcal{C}^\infty$ , $W_{\phi}$ is a finite-dimensional space of infinitely smooth functions on $\Theta$ , and this will be crucial in Section 4.1 to provide a tractable scheme (i.e. operating in finite dimension) to compute the maximum number of independent conservation laws, using the Lie algebra computations that will be described in Section 3.
196
+
197
+ Example 2.17. Revisiting Example 2.8, the function to minimize is factorized by the reparametrization $\phi : (u \in \mathbb{R}, v \in \mathbb{R}) \mapsto uv \in \mathbb{R}$ with $\theta := (u, v)$. We saw that $h((u, v)) := u^2 - v^2$ is conserved; indeed $\langle \nabla h(u, v), \nabla \phi(u, v) \rangle = 2uv - 2vu = 0$, $\forall (u, v)$.
198
+
199
+ In this simple example, the characterization of Proposition 2.16 gives a constructive way to find such a conserved function: we only need to find a function $h$ such that $\langle \nabla h(u,v), \nabla \phi(u,v) \rangle = \langle \nabla h(u,v), (v,u)^{\top} \rangle = 0$ . The situation becomes more complex in higher dimensions, since one needs to understand the interplay between the different vector fields in $W_{\phi}$ .
200
+
201
+ # 2.5 Constructibility of some conservation laws
202
+
203
+ Observe that in Example 2.17 both the reparametrization $\phi$ and the conservation law $h$ are polynomials, a property that, surprisingly, holds systematically in all examples of interest in this paper, making it possible to algorithmically construct some conservation laws, as detailed now.
204
+
205
+ By Proposition 2.16, a function $h$ is a conservation law if it is in the kernel of the linear operator $h \in \mathcal{C}^1(\Omega, \mathbb{R}) \mapsto (\theta \in \Omega \mapsto (\langle \nabla h(\theta), \nabla \phi_i(\theta) \rangle)_{i=1,\dots,d})$. Thus, one could look for conservation laws in a prescribed finite-dimensional space by projecting these equations onto a basis (as in finite-element methods for PDEs). Choosing the finite-dimensional subspace can be tricky in general, but in the linear and ReLU cases all known conservation laws are actually polynomial "balancedness-type conditions" [1, 2, 9], see Section 4. In these cases, the vector fields in $W_\phi$ are also polynomial (because $\phi$ is polynomial, see Theorem C.4 and Theorem C.5 in Appendix C), hence $\theta \mapsto \langle \nabla h(\theta), \nabla \phi_i(\theta) \rangle$ is a polynomial too. This allows us to compute a basis of independent polynomial conservation laws of a given degree (to be freely chosen) for these cases, by simply focusing on the corresponding subspace of polynomials. We coded the resulting equations in SageMath, and on selected examples (see Appendix I) we recovered all previously known conservation laws, both for ReLU and linear networks. Open-source code is available at [18].
206
+
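+ The snippet below is a minimal SymPy sketch of this search (the actual implementation [18] uses SageMath); the reparametrization, the degree bound and the variable names are our own illustrative choices.
+
+ ```python
+ import itertools
+ import sympy as sp
+
+ # Minimal sketch: find all polynomial h of degree <= 2 with <grad h, grad phi_i> = 0
+ # identically (Proposition 2.16), for phi(u1, u2, v1, v2) = u1*v1 + u2*v2
+ # (two-layer linear case with n = m = 1, r = 2).
+ theta = sp.symbols("u1 u2 v1 v2")
+ phi = [theta[0] * theta[2] + theta[1] * theta[3]]
+
+ # Generic polynomial of total degree <= 2 with unknown coefficients a_k.
+ monomials = [sp.Mul(*c) for k in range(3)
+              for c in itertools.combinations_with_replacement(theta, k)]
+ coeffs = sp.symbols(f"a0:{len(monomials)}")
+ h = sum(a * m for a, m in zip(coeffs, monomials))
+
+ # Impose <grad h, grad phi_i> = 0 as a polynomial identity in theta.
+ equations = []
+ for p in phi:
+     dot = sum(sp.diff(h, t) * sp.diff(p, t) for t in theta)
+     equations += sp.Poly(sp.expand(dot), *theta).coeffs()
+
+ solution = sp.solve(equations, coeffs, dict=True)[0]   # linear system in the a_k
+ print(sp.expand(h.subs(solution)))   # free a_k parametrize the degree <= 2 conservation laws
+ ```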
207
+ # 2.6 Independent conserved functions
208
+
209
+ Having an algorithm to build conservation laws is nice, yet how can we know if we have built "all" laws? This requires first defining a notion of a "maximal" set of functions, which would in some sense be independent. This does not correspond to linear independence of the functions themselves (for instance, if $h$ is a conservation law, then so is $h^k$ for each $k \in \mathbb{N}$ but this does not add any other constraint), but rather to pointwise linear independence of their gradients. This notion of independence is closely related to the notion of "functional independence" studied in [7, 20]. For instance, it is shown in [20] that smooth functionally dependent functions are characterized by having dependent gradients everywhere. This motivates the following definition.
210
+
211
+ Definition 2.18. A family of $N$ functions $(h_1, \dots, h_N)$ conserved through $W \subset \mathcal{C}^1(\Omega, \mathbb{R}^D)$ is said to be independent if the vectors $(\nabla h_1(\theta), \dots, \nabla h_N(\theta))$ are linearly independent for all $\theta \in \Omega$ .
212
+
213
+ An immediate upper bound holds on the largest possible number $N$ of functionally independent functions $h_1, \ldots, h_N$ conserved through $W$ : for $\theta \in \Omega \subseteq \mathbb{R}^D$ , the space $\operatorname{span}\{\nabla h_1(\theta), \ldots, \nabla h_N(\theta)\} \subseteq \mathbb{R}^D$ is of dimension $N$ (by independence) and (by Proposition 2.5) orthogonal to $W(\theta)$ . Thus, it is necessary to have $N \leq D - \dim W(\theta)$ . As we will now see, this bound can be tight under additional assumptions on $W$ related to Lie brackets (corresponding to the so-called Frobenius theorem). This will in turn lead to a characterization of the maximum possible $N$ .
214
+
215
+ # 3 Conservation Laws using Lie Algebra
216
+
217
+ The study of hyper-surfaces trapping the solution of ODEs is a recurring theme in control theory, since the existence of such surfaces is the basic obstruction to the controllability of such systems [6]. The basic result to study these surfaces is the so-called Frobenius theorem from differential calculus (see Section 1.4 of [13] for a good reference for this theorem). It relates the existence of such surfaces, and their dimensions, to some differential condition involving so-called "Lie brackets" $[u,v]$ between pairs of vector fields (see Section 3.1 below for a more detailed exposition of this operation). However, in most cases of practical interest (such as for instance matrix factorization), the Frobenius theorem is not suitable for a direct application to the space $W_{\phi}$ because its Lie bracket condition is not satisfied. To identify the number of independent conservation laws, one needs to consider the algebraic closure of $W_{\phi}$ under Lie brackets. The fundamental object of interest is thus the Lie algebra generated by the Jacobian vector fields, that we recall next. While this is only defined for vector fields with stronger smoothness assumptions, the only consequence is that $\phi$ is required to be infinitely smooth, unlike the loss $\ell (\cdot ,y)$ and the model $g(\cdot ,x)$ that can be less smooth. All concrete examples of $\phi$ in this paper are polynomial hence indeed infinitely smooth.
218
+
219
+ Notations Given a vector subspace of infinitely smooth vector fields $W \subseteq \mathcal{X}(\Theta) \coloneqq \mathcal{C}^{\infty}(\Theta, \mathbb{R}^{D})$ , where $\Theta$ is an open subset of $\mathbb{R}^{D}$ , we recall (cf Proposition 2.5) that its trace at some $\theta$ is the subspace
220
+
221
+ $$
222
+ W(\theta) := \operatorname{span}\left\{ \chi(\theta) : \chi \in W \right\} \subseteq \mathbb{R}^{D}. \tag{11}
223
+ $$
224
+
225
+ For each open subset $\Omega \subseteq \Theta$ , we introduce the subspace of $\mathcal{X}(\Omega)$ : $W_{|\Omega} \coloneqq \{\chi_{|\Omega} : \chi \in W\}$ .
226
+
227
+ # 3.1 Background on Lie algebra
228
+
229
+ A Lie algebra $A$ is a vector space endowed with a bilinear map $[\cdot, \cdot]$ , called a Lie bracket, that verifies for all $X, Y, Z \in A$ : $[X, X] = 0$ and the Jacobi identity: $[X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0$ .
230
+
231
+ For the purpose of this article, the Lie algebra of interest is the set of infinitely smooth vector fields $\mathcal{X}(\Theta)$ , endowed with the Lie bracket $[\cdot, \cdot]$ defined by
232
+
233
+ $$
234
+ [ \chi_ {1}, \chi_ {2} ]: \quad \theta \in \Theta \mapsto [ \chi_ {1}, \chi_ {2} ] (\theta) := \partial \chi_ {1} (\theta) \chi_ {2} (\theta) - \partial \chi_ {2} (\theta) \chi_ {1} (\theta), \tag {12}
235
+ $$
236
+
237
+ with $\partial \chi (\theta)\in \mathbb{R}^{D\times D}$ the jacobian of $\chi$ at $\theta$ . The space $\mathbb{R}^{n\times n}$ of matrices is also a Lie algebra endowed with the Lie bracket $[A,B]\coloneqq AB - BA$ . This can be seen as a special case of (12) in the case of linear vector fields, i.e. $\chi (\theta) = A\theta$ .
238
+
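+ As a small symbolic illustration (ours, with two arbitrary vector fields), the bracket (12) can be computed directly from the Jacobians of the fields:
+
+ ```python
+ import sympy as sp
+
+ # Lie bracket (12) on R^2, computed from the Jacobians of the fields.
+ x1, x2 = sp.symbols("x1 x2")
+ theta = sp.Matrix([x1, x2])
+
+ def bracket(chi1, chi2):
+     # [chi1, chi2](theta) = d(chi1)(theta) chi2(theta) - d(chi2)(theta) chi1(theta)
+     return sp.simplify(chi1.jacobian(theta) * chi2 - chi2.jacobian(theta) * chi1)
+
+ chi1 = sp.Matrix([x2, x1])              # a linear vector field
+ chi2 = sp.Matrix([x1 * x2, x1 ** 2])    # a quadratic vector field
+
+ print(bracket(chi1, chi2).T)                                     # another (quadratic) field
+ print(sp.simplify(bracket(chi1, chi2) + bracket(chi2, chi1)).T)  # antisymmetry: zero
+ ```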
239
+ Generated Lie algebra Let $A$ be a Lie algebra and let $W \subset A$ be a vector subspace of $A$ . There exists a smallest Lie algebra that contains $W$ . It is denoted $\operatorname{Lie}(W)$ and called the generated Lie algebra of $W$ . The following proposition [6, Definition 20] constructively characterizes $\operatorname{Lie}(W)$ , where for vector subspaces $[W, W'] := \{[\chi_1, \chi_2] : \chi_1 \in W, \chi_2 \in W'\}$ , and $W + W' = \{\chi_1 + \chi_2 : \chi_1 \in W, \chi_2 \in W'\}$ .
240
+
241
+ Proposition 3.1. Given any vector subspace $W \subseteq A$ we have $\operatorname{Lie}(W) = \bigcup_{k} W_{k}$ where:
242
+
243
+ $$
244
+ \left\{ \begin{array}{ll} W_{0} & := W \\ W_{k} & := W_{k-1} + [W_{0}, W_{k-1}] \quad \text{for } k \geq 1. \end{array} \right.
245
+ $$
246
+
247
+ We will see in Section 3.2 that the number of conservation laws is characterized by the dimension of the trace $\operatorname{Lie}(W_{\phi})(\theta)$ defined in (11). The following lemma (proved in Appendix D) gives a stopping criterion to algorithmically determine this dimension (see Section 3.3 for the algorithm).
248
+
249
+ Lemma 3.2. Given $\theta \in \Theta$, if for a given $i$, $\dim W_{i+1}(\theta') = \dim W_i(\theta)$ for every $\theta'$ in a neighborhood of $\theta$, then there exists a neighborhood $\Omega$ of $\theta$ such that $W_k(\theta') = W_i(\theta')$ for all $\theta' \in \Omega$ and $k \geq i$, where the $W_i$ are defined by Proposition 3.1. Thus $\operatorname{Lie}(W)(\theta') = W_i(\theta')$ for all $\theta' \in \Omega$. In particular, the dimension of the trace of $\operatorname{Lie}(W)$ is locally constant and equal to the dimension of $W_i(\theta)$.
250
+
251
+ # 3.2 Number of conservation laws
252
+
253
+ The following theorem uses the Lie algebra generated by $W_{\phi}$ to characterize the number of conservation laws. The proof of this result is based on two successive uses of the Frobenius theorem and can be found in Appendix E (where we also recall Frobenius theorem for the sake of completeness).
254
+
255
+ Theorem 3.3. If $\dim(\operatorname{Lie}(W_{\phi})(\theta))$ is locally constant then each $\theta \in \Omega \subseteq \mathbb{R}^D$ admits a neighborhood $\Omega'$ such that there are $D - \dim(\operatorname{Lie}(W_{\phi})(\theta))$ (and no more) independent conserved functions through $W_{\phi|_{\Omega'}}$ , i.e., there are $D - \dim(\operatorname{Lie}(W_{\phi})(\theta))$ independent conservation laws of $\phi$ on $\Omega'$ .
256
+
257
+ Remark 3.4. The proof of the Frobenius theorem (and therefore of our generalization Theorem 3.3) is actually constructive. From a given $\phi$ , conservation laws are obtained in the proof by integrating in time (i.e. solving an advection equation) the vector fields belonging to $W_{\phi}$ . Unfortunately, this cannot be achieved in closed form in general, but in small dimensions, this could be carried out numerically (to compute approximate discretized laws on a grid or approximate them using parametric functions such as Fourier expansions or neural networks).
258
+
259
+ A fundamental aspect of Theorem 3.3 is to rely only on the dimension of the trace of the Lie algebra associated with the finite-dimensional vector space $W_{\phi}$. Yet, even if $W_{\phi}$ is finite-dimensional, it might be the case that $\operatorname{Lie}(W_{\phi})$ itself remains infinite-dimensional. Nevertheless, what matters is not the dimension of $\operatorname{Lie}(W_{\phi})$, but that of its trace $\operatorname{Lie}(W_{\phi})(\theta)$, which is always finite (and potentially much smaller than $\dim \operatorname{Lie}(W_{\phi})$ even when the latter is finite) and computationally tractable thanks to Lemma 3.2, as detailed in Section 3.3. In Section 4.1 we work out the example of matrix factorization, a non-trivial case where the full Lie algebra $\operatorname{Lie}(W_{\phi})$ itself remains finite-dimensional.
260
+
261
+ Theorem 3.3 requires that the dimension of the trace at $\theta$ of the Lie algebra is locally constant. This is a technical assumption, which typically holds outside a set of pathological points. A good example is once again matrix factorization, where we show in Section 4.1 that this condition holds generically.
262
+
263
+ # 3.3 Method and algorithm, with examples
264
+
265
+ Given a reparametrization $\phi$ for the architectures to train, to determine the number of independent conservation laws of $\phi$, we leverage the characterization of Proposition 3.1 to algorithmically compute $\dim(\operatorname{Lie}(W_{\phi})(\theta))$ using an iterative construction of bases for the subspaces $W_{k}$ starting from $W_{0} \coloneqq W_{\phi}$, and stopping as soon as the dimension stagnates thanks to Lemma 3.2. Our open-source code is available at [18] and uses SageMath. As we now show, this algorithmic principle allows us to fully work out certain settings where the stopping criterion of Lemma 3.2 is reached at the first step ($i = 0$) or the second one ($i = 1$). Section 4.2 also discusses its numerical use for an empirical investigation of broader settings.
266
+
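+ The sketch below (ours, in plain SymPy/NumPy rather than the SageMath code of [18]) mimics this procedure on the matrix factorization reparametrization with $n = m = r = 2$, evaluating the trace at a single randomly drawn $\theta$, which is enough generically. The reported dimension can be compared with Proposition 4.3 below, which for these sizes predicts $\dim \operatorname{Lie}(W_{\phi})(\theta) = 5$, i.e. $D - 5 = 3$ conservation laws.
+
+ ```python
+ import itertools
+ import numpy as np
+ import sympy as sp
+
+ # Estimate dim Lie(W_phi)(theta) for phi(U, V) = U V^T with n = m = r = 2, iterating
+ # W_k = W_{k-1} + [W_0, W_{k-1}] (Proposition 3.1) and evaluating the trace at one
+ # random theta, stopping when the dimension stagnates (Lemma 3.2).
+ n, m, r = 2, 2, 2
+ U = sp.Matrix(n, r, lambda i, j: sp.Symbol(f"u{i}{j}"))
+ V = sp.Matrix(m, r, lambda i, j: sp.Symbol(f"v{i}{j}"))
+ theta = sp.Matrix(list(U) + list(V))                    # D = (n + m) r = 8
+ phi = sp.Matrix(list(U * V.T))                          # d = n m = 4
+
+ def bracket(a, b):                                      # Lie bracket (12) on R^D
+     return a.jacobian(theta) * b - b.jacobian(theta) * a
+
+ J = phi.jacobian(theta)
+ W0 = [J.T[:, j] for j in range(phi.shape[0])]           # the fields grad phi_i
+ rng = np.random.default_rng(0)
+ point = dict(zip(list(theta), [float(x) for x in rng.normal(size=len(theta))]))
+
+ def trace_dim(fields):                                  # dim span{chi(theta): chi in fields}
+     M = np.array([[float(chi[i].subs(point)) for i in range(len(theta))] for chi in fields])
+     return np.linalg.matrix_rank(M)
+
+ Wk, dim = list(W0), trace_dim(W0)
+ while True:
+     Wk = Wk + [bracket(a, b) for a, b in itertools.product(W0, Wk)]
+     new_dim = trace_dim(Wk)
+     if new_dim == dim:                                  # stopping rule of Lemma 3.2
+         break
+     dim = new_dim
+
+ print("dim Lie(W_phi)(theta):", dim, "| conservation laws:", len(theta) - dim)
+ ```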
267
+ Example where the iterations of Lemma 3.2 stop at the first step. This corresponds to the case where $\operatorname{Lie}W_{\phi}(\theta) = W_1(\theta) = W_0(\theta) \coloneqq W_{\phi}(\theta)$ on $\Omega$ . This is the case if and only if $W_{\phi}$ satisfies that
268
+
269
+ $$
270
+ [\chi_{1}, \chi_{2}](\theta) := \partial \chi_{1}(\theta) \chi_{2}(\theta) - \partial \chi_{2}(\theta) \chi_{1}(\theta) \in W_{\phi}(\theta), \quad \text{for all } \chi_{1}, \chi_{2} \in W_{\phi} \text{ and all } \theta \in \Omega. \tag{13}
271
+ $$
272
+
273
+ i.e., when the Frobenius theorem (see Theorem E.1 in Appendix E) applies directly. The first example is a follow-up to Example 2.11.
274
+
275
+ Example 3.5 (two-layer ReLU networks without bias). Consider $\theta = (U,V)$ with $U\in \mathbb{R}^{n\times r}$, $V\in \mathbb{R}^{m\times r}$, $n,m,r\geq 1$ (so that $D = (n + m)r$), and the reparametrization $\phi (\theta)\coloneqq (u_i v_i^\top)_{i = 1,\dots ,r}\in \mathbb{R}^{n\times m\times r}$, where $U = (u_{1};\dots ;u_{r})$ and $V = (v_{1};\dots ;v_{r})$. As detailed in Appendix F.1, since $\phi (\theta)$ is a collection of $r$ rank-one $n\times m$ matrices, $\dim (W_{\phi}(\theta)) = \mathrm{rank}\,\partial \phi (\theta) = (n + m - 1)r$ is constant on the domain $\Omega$ such that $u_{i},v_{i}\neq 0$ for all $i$, and $W_{\phi}$ satisfies (13), hence by Theorem 3.3 each $\theta$ has a neighborhood $\Omega^{\prime}$ such that there exist $r$ (and no more) independent conserved functions through $W_{\phi | \Omega '}$. The $r$ known conserved functions [9] given by $h_i:(U,V)\mapsto \| u_i\| ^2 -\| v_i\| ^2, i = 1,\dots ,r,$ are independent, hence they are complete.
276
+
277
+ Example where the iterations of Lemma 3.2 stop at the second step (but not the first one). Our primary example is matrix factorization, as a follow-up to Example 2.10.
278
+
279
+ Example 3.6 (two-layer linear neural networks). With $\theta = (U,V)$, where $U\in \mathbb{R}^{n\times r}$ and $V\in \mathbb{R}^{m\times r}$, the reparametrization $\phi (\theta)\coloneqq UV^{\top}\in \mathbb{R}^{n\times m}$ (here $d = nm$) factorizes the functions minimized during the training of linear two-layer neural networks (see Example 2.10). As shown in Appendix I, condition (13) is not satisfied when $r > 1$ and $\max (n,m) > 1$. Thus, the stopping criterion of Lemma 3.2 is not satisfied at the first step. However, as detailed in Proposition H.3 in Appendix H.1, $(W_{\phi})_{1} = (W_{\phi})_{2} = \mathrm{Lie}(W_{\phi})$, hence the iterations of Lemma 3.2 stop at the second step.
280
+
281
+ We complete this example in the next section by showing (Corollary 4.4) that known conservation laws are indeed complete. Whether known conservation laws remain valid and/or complete in this setting and extended ones is further studied in Section 4 and Appendix 5.
282
+
283
+ # 3.4 Application: recasting over-parameterized flows as low-dimensional Riemannian flows
284
+
285
+ As we now show, one striking application of Theorem 3.3 (in simple cases where $\dim(W_{\phi}(\theta)) = \dim(\operatorname{Lie}W_{\phi}(\theta))$ is constant on $\Omega$, i.e., $\operatorname{rank}(\partial \phi(\theta))$ is constant on $\Omega$ and $W_{\phi}$ satisfies (13)) is to fully rewrite the high-dimensional flow $\theta(t) \in \mathbb{R}^D$ as a low-dimensional flow on $z(t) := \phi(\theta(t)) \in \mathbb{R}^d$, where this flow is associated with a Riemannian metric tensor $M$ that is induced by $\phi$ and depends on the initialization $\theta_{\mathrm{init}}$. We insist on the fact that this is only possible in very specific cases, but this phenomenon underlies many existing works that aim at writing in closed form the implicit bias associated with some training dynamics (see Section 1 for some relevant literature). Our analysis sheds some light on cases where this is possible, as shown in the next proposition.
286
+
287
+ Proposition 3.7. Assume that $\operatorname{rank}(\partial \phi(\theta))$ is constant on $\Omega$ and that $W_{\phi}$ satisfies (13). If $\theta(t) \in \mathbb{R}^D$ satisfies the ODE (2) where $\theta_{init} \in \Omega$ , then there is $0 < T_{\theta_{init}}^{\star} \leq T_{\theta_{init}}$ such that $z(t) \coloneqq \phi(\theta(t)) \in \mathbb{R}^d$ satisfies the ODE
288
+
289
+ $$
290
+ \dot{z}(t) = -M\left(z(t), \theta_{\text{init}}\right) \nabla f(z(t)) \quad \text{for all } t \in \left[0, T_{\theta_{\text{init}}}^{\star}\right), \text{ with } z(0) = \phi\left(\theta_{\text{init}}\right), \tag{14}
291
+ $$
292
+
293
+ where $M(z(t),\theta_{init})\in \mathbb{R}^{d\times d}$ is a symmetric positive semi-definite matrix.
294
+
295
+ See Appendix G for a proof. Revisiting Example 3.5 leads to the following analytic example.
296
+
297
+ Example 3.8. Given the reparametrization $\phi : (u \in \mathbb{R}^*, v \in \mathbb{R}^d) \mapsto uv \in \mathbb{R}^d$ , the variable $z := uv$ satisfies (14) with $M(z, \theta_{\mathrm{init}}) = \|z\|_{\delta} \mathrm{I}_d + \|z\|_{\delta}^{-1} z z^\top$ , with $\|z\|_{\delta} := \delta + \sqrt{\delta^2 + \|z\|^2}$ , $\delta := 1/2(u_{\mathrm{init}}^2 - \|v_{\mathrm{init}}\|^2)$ .
298
+
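+ A quick numerical sanity check of this example (ours, with an arbitrary quadratic $f$ and a random initialization) consists in integrating the parameter flow and the reduced flow (14) side by side with the same small Euler step and comparing $\phi(\theta(t))$ with $z(t)$:
+
+ ```python
+ import numpy as np
+
+ # Compare the parameter flow for phi(u, v) = u*v with the reduced flow (14) driven
+ # by the metric of Example 3.8 (f is an arbitrary quadratic of our choosing).
+ rng = np.random.default_rng(1)
+ d = 3
+ z_star = rng.normal(size=d)
+
+ def grad_f(z):                     # f(z) = 0.5 * ||z - z_star||^2
+     return z - z_star
+
+ u, v = 1.3, rng.normal(size=d)     # theta_init with u != 0
+ delta = 0.5 * (u ** 2 - v @ v)     # the conserved quantity fixing the metric
+ z = u * v                          # reduced variable, starts at phi(theta_init)
+
+ def metric(z):
+     nz = delta + np.sqrt(delta ** 2 + z @ z)        # ||z||_delta
+     return nz * np.eye(d) + np.outer(z, z) / nz
+
+ dt = 1e-4
+ for _ in range(100_000):
+     g = grad_f(u * v)
+     u, v = u - dt * (g @ v), v - dt * u * g         # gradient flow on theta = (u, v)
+     z = z - dt * metric(z) @ grad_f(z)              # reduced Riemannian flow (14)
+
+ print("max |phi(theta(t)) - z(t)|:", np.max(np.abs(u * v - z)))   # O(dt), i.e. small
+ ```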
299
+ Another analytic example is discussed in Appendix G. In light of these results, an interesting perspective is to better understand the dependence of the Riemannian metric with respect to initialization, to possibly guide the choice of initialization for better convergence dynamics.
300
+
301
+ Note that the metric $M(z, \theta_{\mathrm{init}})$ can have a kernel. Indeed, in practice, while $\phi$ is a function from $\mathbb{R}^D$ to $\mathbb{R}^d$ , the dimensions often satisfy $\operatorname{rank} \partial \phi(\theta) < \min(d, D)$ , i.e., $\phi(\theta)$ lives in a manifold of lower dimension. The evolution (14) should then be understood as a flow on this manifold. The kernel of $M(z, \theta_{\mathrm{init}})$ is orthogonal to the tangent space at $z$ of this manifold.
302
+
303
+ # 4 Conservation Laws for Linear and ReLU Neural Networks
304
+
305
+ To showcase the impact of our results, we show how they can be used to determine whether known conservation laws for linear (resp. ReLU) neural networks are complete, and to recover these laws algorithmically using reparametrizations $\phi$ adapted to these two settings. Concretely, we study the conservation laws for neural networks with $q$ layers, and either a linear or ReLU activation, with an emphasis on $q = 2$. We write $\theta = (U_1, \dots, U_q)$ with $U_i \in \mathbb{R}^{n_{i-1} \times n_i}$ the weight matrices and we assume that $\theta$ satisfies the gradient flow (2). In the linear case the reparametrization is $\phi_{\mathrm{Lin}}(\theta) := U_1 \dots U_q$. For ReLU networks, we use the (polynomial) reparametrization $\phi_{\mathrm{ReLu}}$ of [27, Definition 6], which is defined for any (deep) feedforward ReLU network, with or without bias. In the simplified setting of networks without biases it reads explicitly as:
306
+
307
+ $$
308
+ \phi_{\mathrm{ReLu}}\left(U_{1}, \dots, U_{q}\right) := \left(U_{1}[:, j_{1}]\, U_{2}[j_{1}, j_{2}] \dots U_{q-1}[j_{q-2}, j_{q-1}]\, U_{q}[j_{q-1}, :]\right)_{j_{1}, \dots, j_{q-1}} \tag{15}
309
+ $$
310
+
311
+ with $U[i,j]$ the $(i,j)$ -th entry of $U$ . This covers $\phi(\theta) := (u_j v_j^\top)_{j=1}^r \in \mathbb{R}^{n \times m \times r}$ from Example 2.11. Some conservation laws are known for the linear case $\phi_{\mathrm{Lin}}$ [1, 2] and for the ReLu case $\phi_{\mathrm{ReLu}}$ [9].
312
+
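+ A minimal illustration of (15) for three layers (ours, with arbitrary widths and no claim beyond reading off the formula): each choice of hidden indices $(j_1, j_2)$ contributes one $n_0 \times n_3$ block, the outer product of a column of $U_1$ and a row of $U_3$ scaled by an entry of $U_2$.
+
+ ```python
+ import numpy as np
+
+ # Reparametrization (15) for a 3-layer network without biases:
+ # one n0 x n3 block per path (j1, j2) through the hidden neurons.
+ rng = np.random.default_rng(0)
+ n0, n1, n2, n3 = 4, 3, 2, 5
+ U1, U2, U3 = (rng.normal(size=s) for s in [(n0, n1), (n1, n2), (n2, n3)])
+
+ phi = np.empty((n1, n2, n0, n3))
+ for j1 in range(n1):
+     for j2 in range(n2):
+         phi[j1, j2] = U2[j1, j2] * np.outer(U1[:, j1], U3[j2, :])
+ print(phi.shape)          # (n1, n2) path indices, each entry an n0 x n3 matrix
+ ```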
313
+ Proposition 4.1 ([1, 2, 9]). If $\theta \coloneqq (U_1, \dots, U_q)$ satisfies the gradient flow (2), then for each $i = 1, \dots, q-1$ the function $\theta \mapsto U_i^\top U_i - U_{i+1} U_{i+1}^\top$ (resp. the function $\theta \mapsto \operatorname{diag}(U_i^\top U_i - U_{i+1} U_{i+1}^\top)$) defines $n_i(n_i + 1)/2$ conservation laws for $\phi_{\mathrm{Lin}}$ (resp. $n_i$ conservation laws for $\phi_{\mathrm{ReLu}}$).
314
+
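+ These laws are easy to probe numerically; the sketch below (ours, with arbitrary random data and a least-squares loss) trains a three-layer linear network with small-step gradient descent, which approximates the flow (2), and monitors the drift of the balancedness matrices.
+
+ ```python
+ import numpy as np
+
+ # The balancedness matrices U_i^T U_i - U_{i+1} U_{i+1}^T of Proposition 4.1 barely
+ # move under small-step gradient descent (they are exactly conserved by the flow (2)).
+ rng = np.random.default_rng(0)
+ n0, n1, n2, n3, N = 4, 3, 3, 5, 20
+ U1, U2, U3 = (0.3 * rng.normal(size=s) for s in [(n0, n1), (n1, n2), (n2, n3)])
+ X, Y = rng.normal(size=(n3, N)), rng.normal(size=(n0, N))
+
+ def balance():
+     return [U1.T @ U1 - U2 @ U2.T, U2.T @ U2 - U3 @ U3.T]
+
+ Q0, dt = balance(), 1e-4
+ for _ in range(20_000):
+     R = U1 @ U2 @ U3 @ X - Y                    # residuals of g(theta, x) = U1 U2 U3 x
+     G1 = R @ (U2 @ U3 @ X).T                    # dE/dU1
+     G2 = U1.T @ R @ (U3 @ X).T                  # dE/dU2
+     G3 = (U1 @ U2).T @ R @ X.T                  # dE/dU3
+     U1, U2, U3 = U1 - dt * G1, U2 - dt * G2, U3 - dt * G3
+
+ print([np.max(np.abs(q - q0)) for q, q0 in zip(balance(), Q0)])   # drift shrinks with dt
+ ```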
315
+ Proposition 4.1 defines $\sum_{i=1}^{q-1} n_i(n_i + 1)/2$ conserved functions for the linear case. In general, they are not independent, and we give below in Proposition 4.2, for the case $q = 2$, the exact
316
+
317
+ number of independent conservation laws among these particular laws. Establishing whether there are other (previously unknown) conservation laws is an open problem for $q > 2$. We already answered this question negatively in the two-layer ReLU case without bias (see Example 3.5). In the following section (Corollary 4.4), we show the same result in the linear case $q = 2$. Numerical computations suggest this is still the case for deeper linear and ReLU networks, as detailed in Section 4.2.
318
+
319
+ # 4.1 The matrix factorization case $(q = 2)$
320
+
321
+ To simplify the analysis when $q = 2$ , we rewrite $\theta = (U,V)$ as a vertical matrix concatenation denoted $(U;V)\in \mathbb{R}^{(n + m)\times r}$ , and $\phi (\theta) = \phi_{\mathrm{Lin}}(\theta) = UV^{\top}\in \mathbb{R}^{n\times m}$ .
322
+
323
+ How many independent conserved functions are already known? The following proposition refines Proposition 4.1 for $q = 2$ by detailing how many independent conservation laws are already known. See Appendix H.1 for a proof.
324
+
325
+ Proposition 4.2. Consider $\Psi : \theta = (U; V) \mapsto U^\top U - V^\top V \in \mathbb{R}^{r \times r}$ and assume that $(U; V)$ has full rank, denoted $\mathbf{rk}$. Then the function $\Psi$ gives $\mathbf{rk} \cdot (2r + 1 - \mathbf{rk}) / 2$ independent conserved functions.
326
+
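+ This count is easy to check numerically: stacking the gradients of the entries of $\Psi$ at a random point and computing their rank (the notion of independence of Definition 2.18) should give exactly $\mathbf{rk}(2r+1-\mathbf{rk})/2$. The sketch below is ours, with arbitrary sizes chosen so that $\mathbf{rk} < r$.
+
+ ```python
+ import numpy as np
+
+ # Rank of the stacked gradients of the entries of Psi(U, V) = U^T U - V^T V at a
+ # random point, compared with the count rk (2r + 1 - rk) / 2 of Proposition 4.2.
+ rng = np.random.default_rng(0)
+ n, m, r = 2, 1, 4
+ U, V = rng.normal(size=(n, r)), rng.normal(size=(m, r))
+ rk = np.linalg.matrix_rank(np.vstack([U, V]))     # full rank: rk = min(n + m, r) = 3
+
+ grads = []
+ for i in range(r):
+     for j in range(i, r):                         # upper triangle of the symmetric Psi
+         gU, gV = np.zeros((n, r)), np.zeros((m, r))   # gradient of Psi_ij w.r.t. (U, V)
+         gU[:, i] += U[:, j]; gU[:, j] += U[:, i]
+         gV[:, i] -= V[:, j]; gV[:, j] -= V[:, i]
+         grads.append(np.concatenate([gU.ravel(), gV.ravel()]))
+
+ print("independent conserved functions:", np.linalg.matrix_rank(np.array(grads)))
+ print("Proposition 4.2 predicts:       ", rk * (2 * r + 1 - rk) // 2)
+ ```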
327
+ There exist no more independent conserved functions. We now come to the core of the analysis, which consists in actually computing $\mathrm{Lie}(W_{\phi})$ as well as its traces $\mathrm{Lie}(W_{\phi})(\theta)$ in the matrix factorization case. The crux of the analysis, which enables us to fully work out the case $q = 2$ theoretically, is that $W_{\phi}$ is composed of linear vector fields (that are explicitly characterized in Proposition H.2 in Appendix H), the Lie bracket between two linear fields being itself linear and explicitly characterized with skew matrices, see Proposition H.3 in Appendix H. Ultimately, what we need to compute is the dimension of the trace $\mathrm{Lie}(W_{\phi})(U,V)$ for any $(U,V)$. We prove the following in Appendix H.
328
+
329
+ Proposition 4.3. If $(U;V) \in \mathbb{R}^{(n + m)\times r}$ has full rank, denoted $\mathbf{rk}$, then: $\dim (\operatorname{Lie}(W_{\phi})(U;V)) = (n + m)r - \mathbf{rk}\,(2r + 1 - \mathbf{rk}) / 2$.
330
+
331
+ With this explicit characterization of the trace of the generated Lie algebra and Proposition 4.2, we conclude that Proposition 4.1 has indeed exhausted the list of independent conservation laws.
332
+
333
+ Corollary 4.4. If $(U;V)$ has full rank, then all conserved functions are given by $\Psi : (U,V) \mapsto U^{\top}U - V^{\top}V$ . In particular, there exist no more independent conserved functions.
334
+
335
+ # 4.2 Numerical guarantees in the general case
336
+
337
+ The expressions derived in the previous section are specific to the linear case $q = 2$ . For deeper linear networks and for ReLU networks, the vector fields in $W_{\phi}$ are non-linear polynomials, and computing Lie brackets of such fields can increase the degree, which could potentially make the generated Lie algebra infinite-dimensional. One can however use Lemma 3.2 and stop as soon as $\dim((W_{\phi})_k(\theta))$ stagnates. Numerically comparing this dimension with the number $N$ of independent conserved functions known in the literature (predicted by Proposition 4.1) on a sample of depths/widths of small size, we empirically confirmed that there are no more conservation laws than the ones already known for deeper linear networks and for ReLU networks too (see Appendix J for details). Our code is open-sourced and is available at [18]. It is worth mentioning again that in all tested cases $\phi$ is polynomial, and there is a maximum set of conservation laws that are also polynomial, which are found algorithmically (as detailed in Section 2.5).
338
+
339
+ # Conclusion
340
+
341
+ In this article, we proposed a constructive program for determining the number of conservation laws. An important avenue for future work is the consideration of more general classes of architectures, such as deep convolutional networks, normalization, and attention layers. Note that while we focus in this article on gradient flows, our theory can be applied to any space of displacements in place of $W_{\phi}$ . This could be used to study conservation laws for flows with higher order time derivatives, for instance gradient descent with momentum, by lifting the flow to a higher dimensional phase space. A limitation that warrants further study is that our theory is restricted to continuous time gradient flow. Gradient descent with finite step size, as opposed to continuous flows, disrupts exact conservation. The study of approximate conservation presents an interesting avenue for future work.
342
+
343
+ # Acknowledgement
344
+
345
+ The work of G. Peyré was supported by the European Research Council (ERC project NORIA) and the French government under management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute). The work of R. Gribonval was partially supported by the AllegroAssai ANR project ANR-19-CHIA-0009. We thank Thomas Bouchet for introducing us to SageMath, as well as Léo Grinsztajn for helpful feedback regarding the numerics. We thank Pierre Ablin and Raphael Barboni for comments on a draft of this paper. We also thank the anonymous reviewers for their fruitful feedback.
346
+
347
+ # References
348
+
349
+ [1] S. ARORA, N. COHEN, N. GOLOWICH, AND W. HU, A convergence analysis of gradient descent for deep linear neural networks, arXiv preprint arXiv:1810.02281, (2018).
350
+ [2] S. ARORA, N. COHEN, AND E. HAZAN, On the optimization of deep networks: Implicit acceleration by overparameterization, in Int. Conf. on Machine Learning, PMLR, 2018, pp. 244-253.
351
+ [3] S. AZULAY, E. MOROSHKO, M. S. NACSON, B. E. WOODWORTH, N. SREBRO, A. GLOBERSON, AND D. SOUDRY, On the implicit bias of initialization shape: Beyond infinitesimal mirror descent, in Proceedings of the 38th International Conference on Machine Learning, M. Meila and T. Zhang, eds., vol. 139 of Proceedings of Machine Learning Research, PMLR, 18–24 Jul 2021, pp. 468–477.
352
+ [4] B. BAH, H. RAUHUT, U. TERSTIEGE, AND M. WESTDICKENBERG, Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers, Information and Inference: A Journal of the IMA, 11 (2022), pp. 307-353.
353
+ [5] M. BELKIN, D. HSU, S. MA, AND S. MANDAL, Reconciling modern machine-learning practice and the classical bias-variance trade-off, Proc. of the National Academy of Sciences, 116 (2019), pp. 15849-15854.
354
+ [6] B. BONNARD, M. CHYBA, AND J. ROUOT, Geometric and Numerical Optimal Control - Application to Swimming at Low Reynolds Number and Magnetic Resonance Imaging, SpringerBriefs in Mathematics, Springer Int. Publishing, 2018.
355
+ [7] A. B. BROWN, Functional dependence, Transactions of the American Mathematical Society, 38 (1935), pp. 379-394.
356
+ [8] L. CHIZAT AND F. BACH, Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss, in Conf. on Learning Theory, PMLR, 2020, pp. 1305-1338.
357
+ [9] S. S. DU, W. HU, AND J. D. LEE, Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced, Adv. in Neural Inf. Proc. Systems, 31 (2018).
358
+ [10] S. GUNASEKAR, J. LEE, D. SOUDRY, AND N. SREBRO, Characterizing implicit bias in terms of optimization geometry, in Int. Conf. on Machine Learning, PMLR, 2018, pp. 1832-1841.
359
+ [11] S. GUNASEKAR, B. E. WOODWORTH, S. BHOJANAPALLI, B. NEYSHABUR, AND N. SREBRO, Implicit regularization in matrix factorization, Adv. in Neural Inf. Proc. Systems, 30 (2017).
360
+ [12] G. GLUCH AND R. URBANKE, Noether: The more things change, the more stay the same, 2021.
361
+ [13] A. ISIDORI, Nonlinear system control, New York: Springer Verlag, 61 (1995), pp. 225-236.
362
+ [14] Z. Ji, M. DUDÍK, R. E. SCHAPIRE, AND M. TELGARSKY, Gradient descent follows the regularization path for general losses, in Conf. on Learning Theory, PMLR, 2020, pp. 2109-2136.
363
+ [15] Z. Ji AND M. TELGARSKY, Gradient descent aligns the layers of deep linear networks, arXiv preprint arXiv:1810.02032, (2018).
364
+
365
+ [16] D. KUNIN, J. SAGASTUY-BRENA, S. GANGULI, D. L. YAMINS, AND H. TANAKA, Neural mechanics: Symmetry and broken conservation laws in deep learning dynamics, arXiv preprint arXiv:2012.04728, (2020).
366
+ [17] Z. LI, T. WANG, J. D. LEE, AND S. ARORA, Implicit bias of gradient descent on reparametrized models: On equivalence to mirror descent, in Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, eds., vol. 35, Curran Associates, Inc., 2022, pp. 34626-34640.
367
+ [18] S. MARCOTTE, R. GRIBONVAL, AND G. PEYRE, Code for reproducible research. Abide by the Law and Follow the Flow: Conservation Laws for Gradient Flows, Oct. 2023.
368
+ [19] H. MIN, S. TARMOUN, R. VIDAL, AND E. MALLADA, On the explicit role of initialization on the convergence and implicit bias of overparametrized linear networks, in Int. Conf. on Machine Learning, PMLR, 2021, pp. 7760-7768.
369
+ [20] W. F. Newns, Functional dependence, The American Mathematical Monthly, 74 (1967), pp. 911-920.
370
+ [21] B. NEYSHABUR, Implicit regularization in deep learning, arXiv preprint arXiv:1709.01953, (2017).
371
+ [22] B. NEYSHABUR, R. TOMIOKA, AND N. SREBRO, In search of the real inductive bias: On the role of implicit regularization in deep learning, arXiv preprint arXiv:1412.6614, (2014).
372
+ [23] E. NOETHER, Invariante variationsprobleme, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, 1918 (1918), pp. 235-257.
373
+ [24] A. M. SAXE, J. L. McCLELLAND, AND S. GANGULI, Exact solutions to the nonlinear dynamics of learning in deep linear neural networks, arXiv preprint arXiv:1312.6120, (2013).
374
+ [25] S. SHALEV-SHWARTZ AND S. BEN-DAVID, Understanding machine learning: From theory to algorithms, Cambridge university press, 2014.
375
+ [26] D. SOUDRY, E. HOFFER, M. S. NACSON, S. GUNASEKAR, AND N. SREBRO, The implicit bias of gradient descent on separable data, The Journal of Machine Learning Research, 19 (2018), pp. 2822-2878.
376
+ [27] P. STOCK AND R. GRIBONVAL, An Embedding of ReLU Networks and an Analysis of their Identifiability, Constructive Approximation, (2022). Publisher: Springer Verlag.
377
+ [28] S. TARMOUN, G. FRANCA, B. D. HAEFFELE, AND R. VIDAL, Understanding the dynamics of gradient flow in overparameterized linear models, in Int. Conf. on Machine Learning, PMLR, 2021, pp. 10153-10161.
378
+ [29] THE SAGE DEVELOPERS, SageMath, the Sage Mathematics Software System (Version 9.7), 2022. https://www.sagemath.org.
379
+ [30] C. ZHANG, S. BENGIO, M. HARDT, B. RECHT, AND O. VINYALS, Understanding deep learning requires rethinking generalization, in Int. Conf. on Learning Representations, 2017.
380
+ [31] B. ZHAO, I. GANEV, R. WALTERS, R. YU, AND N. DEHMAMY, Symmetries, flat minima, and the conserved quantities of gradient flow, arXiv preprint arXiv:2210.17216, (2022).
abidebythelawandfollowtheflowconservationlawsforgradientflows/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:97c05f878733585aa5366f60ec66e264ca97675a532fed071a69976b4a38a71a
3
+ size 119583
abidebythelawandfollowtheflowconservationlawsforgradientflows/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:390d671d9f6076eb7afeb069e819470be06d30c6d1ecf3b04adf2f93b61e403b
3
+ size 705000
acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/593c13dd-0a60-4ada-ac05-8c0e10b2ddd1_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6d03df7368fe9ecaed272cfbae60b1713570d9b9ea1638523d598f6c788d2960
3
+ size 168242
acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/593c13dd-0a60-4ada-ac05-8c0e10b2ddd1_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e98cff72c1b502d478c5b68b19971563b74bd544d3c675fb164694fffcd12716
3
+ size 197623
acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/593c13dd-0a60-4ada-ac05-8c0e10b2ddd1_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fdebade76b9dad88581af6338b884a1127c6995e3e00eeb957e5d0cb9a62e221
3
+ size 1220638
acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/full.md ADDED
@@ -0,0 +1,856 @@
1
+ # Accelerated On-Device Forward Neural Network Training with Module-Wise Descending Asynchronism
2
+
3
+ Xiaohan Zhao<sup>1</sup>
4
+
5
+ xiaotuzxh@gmail.com
6
+
7
+ Hualin Zhang
8
+
9
+ zhanghualin98@gmail.com
10
+
11
+ Zhouyuan Huo*
12
+
13
+ huozhouyuan@gmail.com
14
+
15
+ Bin Gu2,3†
16
+
17
+ jsbugin@gmail.com
18
+
19
+ <sup>1</sup> Nanjing University of Information Science and Technology, China
20
+
21
+ $^{2}$ Mohamed bin Zayed University of Artificial Intelligence, UAE
22
+
23
+ <sup>3</sup> School of Artificial Intelligence, Jilin University, China
24
+
25
+ # Abstract
26
+
27
+ On-device learning faces memory constraints when optimizing or fine-tuning on edge devices with limited resources. Current techniques for training deep models on edge devices rely heavily on backpropagation. However, its high memory usage calls for a reassessment of its dominance. In this paper, we propose forward gradient descent (FGD) as a potential solution to overcome the memory capacity limitation in on-device learning. However, FGD's dependencies across layers hinder parallel computation and can lead to inefficient resource utilization. To mitigate this limitation, we propose AsyncFGD, an asynchronous framework that decouples dependencies, utilizes module-wise stale parameters, and maximizes parallel computation. We demonstrate its convergence to critical points through rigorous theoretical analysis. Empirical evaluations conducted on NVIDIA's AGX Orin, a popular embedded device, show that AsyncFGD reduces memory consumption and enhances hardware efficiency, offering a novel approach to on-device learning.
28
+
29
+ # 1 Introduction
30
+
31
+ Deep learning models have increasingly gained traction in a multitude of applications, showcasing exceptional predictive capabilities. Nevertheless, their rapidly expanding size [19] poses a formidable challenge for resource-limited edge devices, such as mobile phones and embedded systems. These devices are pervasive in our society and continuously generate new data. To attain model customization, user privacy, and low latency, these devices necessitate on-device learning, involving training and fine-tuning models on freshly gathered data [46]. However, the restricted memory capacity of these devices emerges as a significant hindrance. For example, the Raspberry Pi Model A, introduced in 2013, only featured $256\mathrm{MB}$ of memory [17], while the more recent Raspberry Pi 400, released in 2020, modestly increased this capacity to a mere 4 GB.
32
+
33
+ Various techniques have been proposed to address this issue, encompassing quantized training, efficient transfer learning, and rematerialization. Quantized training curtails memory consumption by utilizing low-precision network representation [22, 15, 42, 43, 8]. Efficient transfer learning diminishes training costs by updating merely a subset of the model [20, 5, 2, 44]. Lastly, rematerialization
34
+
35
+ ![](images/38df47a0ab2e054df8b39b7e8eeb9bd310df1523187932b186f46894d360df8d.jpg)
36
+ Figure 1: Comparison of Vanilla FGD and AsyncFGD, where A, B, and C signify workers in the system. Through the allocation of tasks from varying iterations, AsyncFGD breaks forward locking in FGD, thereby maximizing worker utilization.
37
+
38
+ conserves memory at the expense of time by discarding and recomputing intermediate variables [3, 7, 16, 35].
39
+
40
+ Although most of these techniques can be applied irrespective of the optimization strategy, they are frequently employed in tandem with backpropagation in deep learning. Due to the reciprocal nature of backpropagation (i.e., activations from the forward pass are preserved for the subsequent backward pass), the training of deep models utilizing backpropagation typically commands a memory footprint that is $3 - 4 \times$ larger than the number of parameters involved. Consequently, reexamining backpropagation within the context of surpassing the memory capacity barrier is crucial due to its elevated memory consumption.
41
+
42
+ The recent revival of interest in the forward-forward algorithm proposed by [10], along with other algorithms predicated on forward computation [39, 28, 1, 33], has prompted a reevaluation of backpropagation. The question persists: is it essential to store intermediate variables and pause to propagate gradients? An alternative, forward gradient descent (FGD), leverages the Jacobian-vector product (JVP) under automatic differentiation [37, 1, 33] to assess the effect of a stochastic perturbation in conjunction with the forward pass, yielding an unbiased approximation of the true gradient. FGD offers benefits in terms of memory consumption and biological plausibility[31], as it solely employs forward passes and substantially reduces the storage of intermediate variables [31]. Moreover, FGD can be combined with other existing techniques, such as quantized training and efficient transfer learning, potentially further diminishing memory costs.
43
+
44
+ Nonetheless, the intrinsic locking mechanism within FGD, more specifically, the layer-by-layer sequential computation during the forward pass, poses an impediment to parallel computation across disparate layers, as shown in Fig. 1. In response to these challenges, this paper aims to train deep models on edge devices with memory constraints from an optimization perspective, while improving resource utilization by breaking the lock inside the forward pass with provable convergence. Thus, we propose AsyncFGD, an asynchronous version of Forward Gradient Descent with module-wise asynchronism in the forward pass to disentangle its dependencies, allowing simultaneous computation on different workers. The induced module-wise, decreasing, and bounded staleness in the parameters not only accelerates prediction and training but also admits theoretical convergence guarantees. We empirically validate our method across multiple architectures and devices, including CPUs, GPUs, and embedded devices. Our results indicate that AsyncFGD reduces memory consumption and improves hardware efficiency, achieving efficient training on edge devices.
45
+
46
+ # 2 Related Works
47
+
48
+ # 2.1 Forward Gradient with Reinforcement Learning
49
+
50
+ This research builds upon the concept of forward-mode automatic differentiation (AD) initially introduced by [37]. It has since been applied to learning recurrent neural networks [39] and calculating Hessian vector products [28]. However, exact gradient computation using forward-mode AD requires the complete and computationally expensive Jacobian matrix. Recently, Baydin et al. [1] and Silver et al. [33] proposed an innovative technique for weight updating based on directional gradients along randomly perturbed directions. These algorithms have connections to both reinforcement learning (RL) and evolution strategies (ES), since the network receives global rewards in each instance. RL and ES have been successfully utilized in specific continuous control and decision-making tasks in neural networks [38, 34, 32, 4]. Clark et al. [4] observed that global credit assignment performs well in vector neural networks with weights between vectorized neuron groups. However, in forward-mode AD methods, much like in evolution strategies, the random sampling approach can lead to high variance in the estimated gradient, particularly when optimizing over a large-dimensional parameter space [31].
53
+
54
+ # 2.2 Parallel Strategies and Asynchronous Approaches
55
+
56
+ Various parallel frameworks have been developed to accelerate computations by utilizing multiple workers. These frameworks include data parallelism [21], pipeline parallelism [12, 30, 23, 14], and tensor parallelism [29, 24]. Each approach leverages different dimensions of data - batch dimension for data parallelism, layer dimension for pipeline parallelism, and feature dimension for tensor parallelism - to improve computational efficiency.
57
+
58
+ However, in edge computing environments, data is often collected and processed frame-by-frame rather than in batches. This is particularly true for scenarios requiring real-time adaptation and streaming data learning. In such contexts, edge models must quickly adapt to newly acquired data in real time, leaving limited scope for collecting, fetching, and batching historical data. Algorithms that do not depend on batch size are therefore preferable, and the ability to align with this pipeline and train the model on the fly with incoming data is crucial. For this reason, our research specifically focuses on pipeline parallelism.
59
+
60
+ Pipeline parallelism can be categorized into synchronous and asynchronous pipelines, which divide computations into sequential stages and align well with the data stream on edge devices. However, in edge devices with limited computational power, potential idle workers due to synchronization in synchronous pipelines like [12] are not optimal.
61
+
62
+ Asynchronous parallelism, which allows non-simultaneous task execution to enhance resource utilization, is particularly relevant to our method. [23, 14] parallelize tasks by executing them from different iterations. However, the additional memory overhead of storing replicated copies to handle staleness in [23] and the use of multiple activations for backpropagation make the algorithm memory-inefficient. This memory overhead is mitigated in strategies such as "rematerialization" [40, 13] and weight estimation [41]. However, these strategies were originally designed for backpropagation, while our focus is on FGD, which has distinct computational characteristics, rendering the existing "1B1F" work-scheduling strategy in [40, 13, 41, 23] unsuitable.
63
+
64
+ # 3 Preliminaries
65
+
66
+ Gradient-based Method. We commence with a succinct introduction to gradient-based methods deployed in neural networks. Envision the training of an $L$ -layer feedforward neural network, where each layer $l \in \{1,2,\ldots,L\}$ accepts $h_{l-1}$ as an input, generating an activation $h_l = F_l(h_{l-1};w_l)$ with weight $w_l \in \mathbb{R}^{d_l}$ . We represent all parameters by $w = [w_1^{\intercal}, w_2^{\intercal}, \ldots, w_L^{\intercal}]^{\intercal} \in \mathbb{R}^d$ and the output of the network by $h_L = F(h_0, w)$ , where $h_0$ symbolizes the input data $x$ . Given a loss function $f$ and targets $y$ , the objective problem is as follows:
67
+
68
+ $$
69
+ \min _ {w} f (F (x; w), y) \tag {1}
70
+ $$
71
+
72
+ Here, we write $f(x;w)$ as shorthand for $f(F(x;w), y)$ in subsequent sections for the sake of brevity.
73
+
74
+ Gradient-based methods are widely used to optimize deep learning problems. At iteration $t$ , we feed data sample $x_{i(t)}$ into the network, where $i(t)$ signifies the sample's index. Following the principles of stochastic gradient descent (SGD), we update the network parameters as follows:
75
+
76
+ $$
77
+ w _ {l} ^ {t + 1} = w _ {l} ^ {t} - \gamma_ {t} \nabla f _ {l, x _ {i (t)}} \left(w ^ {t}\right), \quad \forall l \in \{1, 2, \dots , L \} \tag {2}
78
+ $$
79
+
80
+ Here, $\gamma_t \in \mathbb{R}$ is the stepsize and $\nabla f_{l,x_{i(t)}}(w^t) \in \mathbb{R}^{d_l}$ is the gradient of the loss function with respect to the weights at layer $l$ and data sample $x_{i(t)}$ .
81
+
82
+ The crux of the matter is obtaining $\nabla f_{l,x_{i(t)}}(w^t)$ . Both scientific and industrial sectors lean towards backpropagation underpinned by automatic differentiation (AD). However, FGD is a viable alternative: it approximates the true gradient without bias using forward-mode AD and, importantly, has low memory consumption, as it only preserves the intermediate variables passed from the previous layer, whereas backpropagation needs to store intermediate variables at every layer.
83
+
84
+ Forward Gradient Descent. Let $J_{F_l}$ denote the Jacobian matrix of layer $l$ and $u_{w_l}$ the random perturbation on $w_l$ (more precisely, the tangent vector around $w_l$ ); we can then calculate the JVP value $o_l$ in each layer sequentially by
85
+
86
+ $$
87
+ h _ {l} = F _ {l} \left(h _ {l - 1}; w _ {l}\right) \in \mathbb {R} ^ {d _ {h _ {l}}}, \tag {3}
88
+ $$
89
+
90
+ $$
91
+ o _ {l} = J _ {F _ {l}} \left(h _ {l - 1}, w _ {l}\right) u _ {l} \in \mathbb {R} ^ {d _ {h _ {l}}}, \quad \text{where} \quad u _ {l} = \left[ o _ {l - 1} ^ {\intercal}, u _ {w _ {l}} ^ {\intercal} \right] ^ {\intercal} \in \mathbb {R} ^ {d _ {h _ {l - 1}} + d _ {l}} \tag {4}
92
+ $$
93
+
94
+ Mathematically, $o_l$ is the directional derivative of $F_l$ along $u_l$ ; intuitively, $o_l$ can be interpreted as the influence of the perturbation $u_{w_l}$ on function value $h_l$ (we set $o_0$ to be 0 since we don't need to calculate the JVP value with respect to the input data). This process can be arranged within the forward pass such that (3), (4) are computed in the same pass. Also, since we don't need to propagate backward, $h_{l-1}$ and $o_{l-1}$ are thrown away right after the computation of $h_l$ and $o_l$ . Then we can approximate the true gradient $\nabla f(x; w)$ unbiasedly (Lemma 3.1) by $\left( \frac{\partial f}{\partial h_L} o_L \right) u$ , where $u = [u_{w_1}, \ldots, u_{w_L}]^T$ .
95
+
96
+ Lemma 3.1. Let $u_{w_l} \in \mathbb{R}^{d_l}$ , $l \in \{1,2,\dots,L\}$ , be independent standard Gaussian random vectors; then the forward gradient computed through Eq.(3), (4) is an unbiased estimate of the true gradient
97
+
98
+ $$
99
+ \nabla f (x; w) = \mathbb {E} _ {u} \left[ \left(\frac {\partial f}{\partial h _ {L}} o _ {L}\right) u \right].
100
+ $$
101
+
102
+ More specifically, each layer $l$ receives $o_{L}$ from the last layer and updates its own parameters locally using $o_L \cdot u_{w_l}$ , whose expectation $\mathbb{E}_{u_{w_l}}(u_{w_l} \cdot o_L)$ equals the local gradient. We can then rewrite (2) as:
103
+
104
+ $$
105
+ w _ {l} ^ {t + 1} = w _ {l} ^ {t} - \gamma_ {t} \left(\left(\frac {\partial f}{\partial h _ {L}} o _ {L} ^ {t}\right) u _ {w _ {l}} ^ {t}\right) \tag {5}
106
+ $$
107
+
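+ To make Eq.(3)-(5) concrete, the following is a minimal sketch of FGD on a toy multilayer perceptron. It assumes PyTorch 2.x with `torch.func` available; the model, data, and learning rate are illustrative placeholders rather than our experimental configuration.
+
+ ```python
+ # Minimal forward-gradient-descent sketch (one tangent per step), assuming PyTorch 2.x.
+ import torch
+ from torch.func import functional_call, jvp
+
+ torch.manual_seed(0)
+ model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
+ loss_fn = torch.nn.CrossEntropyLoss()
+ params = dict(model.named_parameters())
+ x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))   # a fixed toy batch
+
+ def loss_of(p):
+     # f(x; w): the loss as a pure function of the parameters.
+     return loss_fn(functional_call(model, p, (x,)), y)
+
+ lr = 1e-3
+ for step in range(5):
+     # Sample one Gaussian tangent u_w per parameter tensor (Lemma 3.1).
+     u = {k: torch.randn_like(v) for k, v in params.items()}
+     # A single forward pass yields both the loss and the scalar JVP value o_L.
+     loss, o_L = jvp(loss_of, (params,), (u,))
+     # Unbiased forward-gradient update, Eq.(5): w <- w - lr * (o_L * u).
+     params = {k: (v - lr * o_L * u[k]).detach() for k, v in params.items()}
+ ```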
108
+ Forward Locking. It is evident from Eq.(4) that the computation in layer $l$ depends on the activation and JVP value $h_{l-1}, o_{l-1}$ from layer $l-1$ . This dependency creates a "lock" that prevents all layers from updating before receiving the output from dependent layers, thus serializing the computation in the forward pass (refer to Figure 1 for illustration).
109
+
110
+ # 4 AsyncFGD
111
+
112
+ In this section, we propose an innovative approach that utilizes module-wise staleness to untether the dependencies inherent in FGD. This method, which we name AsyncFGD, facilitates the simultaneous execution of tasks originating from disparate iterations. Suppose an $L$ -layer feedforward neural network is divided into $K$ modules, with each module comprising a set of consecutive layers and their respective parameters. Consequently, we have a configuration such that $w = [w_{\mathcal{G}(0)}^{\top}, w_{\mathcal{G}(1)}^{\top}, \dots, w_{\mathcal{G}(K-1)}^{\top}]^{\top}$ , with $\mathcal{G}(k)$ denoting the layer indices in group $k$ .
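+ As a toy illustration of this partitioning (assuming PyTorch; the layer sizes and module boundaries are arbitrary):
+
+ ```python
+ # Split an L-layer sequential model into K consecutive modules G(0), ..., G(K-1).
+ import torch.nn as nn
+
+ layers = [nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 2)]
+ K = 3
+ bounds = [0, 2, 4, len(layers)]                # layer indices owned by each module
+ modules = [nn.Sequential(*layers[bounds[k]:bounds[k + 1]]) for k in range(K)]
+ # modules[k] holds the parameters w_G(k); in AsyncFGD each module runs on its own worker.
+ ```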
113
+
114
+ Now, let's delve into the details of how AsyncFGD untethers iteration dependencies and expedites the training process.
115
+
116
+ # 4.1 Detaching Iteration Dependency
117
+
118
+ Forward. At timestamp $t$ , the data sample $x_{i(t)}$ is fed to the network. In contrast to sequential FGD [1], AsyncFGD permits modules to concurrently compute tasks, each originally belonging to a distinct iteration. All modules, with the exception of the last, operate using delayed parameters. We designate the activation and its JVP value originally ascribed to iteration $t$ in Eq.(3), (4) as $\hat{h}_l^t$ and $\hat{o}_l^t$ , respectively. Though the superscript $t$ is no longer time-dependent, it maintains its role in indicating sequential order. Consider $L_{k} \in \mathcal{G}(k)$ as the final layer in module $k$ and $f_{k}$ as the function mapping the input of module $k$ to the activation of this last layer. The computation in module $k \in \{0,1,\dots ,K - 1\}$ at timestamp $t$ is defined recursively as follows:
119
+
120
+ $$
121
+ \hat {h} _ {L _ {k}} ^ {t - k} = f _ {k} \left(\hat {h} _ {L _ {k - 1}} ^ {t - k}; w _ {\mathcal {G} (k)} ^ {t - 2 K + k + 2}\right) \tag {6}
122
+ $$
123
+
124
+ $$
125
+ \hat {o} _ {L _ {k}} ^ {t - k} = J _ {f _ {k}} \left(\hat {h} _ {L _ {k - 1}} ^ {t - k}; w _ {\mathcal {G} (k)} ^ {t - 2 K + k + 2}\right) u _ {\mathcal {G} (k)} ^ {t - k}, \quad \text{where} \quad u _ {\mathcal {G} (k)} ^ {t - k} = \left[ \hat {o} _ {L _ {k - 1}} ^ {t - k \, \mathsf {T}}, u _ {w _ {\mathcal {G} (k)}} ^ {t - k \, \mathsf {T}} \right] ^ {\mathsf {T}}. \tag {7}
126
+ $$
127
+
128
+ Concurrently, each module receives and stores output from its dependent module for future computations.
129
+
130
+ Update. The update phase in AsyncFGD parallels that in FGD [1]. The final module broadcasts its JVP value, triggering each module to perform local updates to their parameters. To summarize, at timestamp $t$ , we execute the following update:
131
+
132
+ $$
133
+ w ^ {t - K + 2} = w ^ {t - K + 1} - \gamma_ {t - K + 1} \left(\hat {o} _ {L _ {K - 1}} ^ {t - K + 1} u _ {w} ^ {t - K + 1}\right) \tag {8}
134
+ $$
135
+
136
+ Staleness. We measure the time delay in Eq.(6), (7) with $g(k) = K - 1 - k$ ; $g(K - 1) = 0$ indicates that the last module employs up-to-date parameters. For instance, with $K = 3$ at timestamp $t$ , modules 0, 1, and 2 use parameter versions $w^{t - 4}$ , $w^{t - 3}$ , and $w^{t - 2}$ , respectively, where $w^{t - 2} = w^{t - K + 1}$ is the freshest version available.
137
+
138
+ This approach effectively disrupts the "lock-in" characteristic of FGD, facilitating a parallel forward pass. A comparative illustration of the execution details in sequential FGD and AsyncFGD is provided in Figure 1.
139
+
140
+ # 4.2 Stochastic AsyncFGD Algorithm
141
+
142
+ To better illustrate the working state of AsyncFGD, we make the following definitions:
143
+
144
+ $$
145
+ w ^ {t - K + 1} := \left\{ \begin{array}{l l} w ^ {0}, & t - K + 1 < 0 \\ w ^ {t - K + 1}, & \text{otherwise} \end{array} \right. ; \quad \text{and} \quad \hat {h} _ {L _ {k}} ^ {t - k}, \hat {o} _ {L _ {k}} ^ {t - k} := \left\{ \begin{array}{l l} 0, 0, & t - k < 0 \\ \hat {h} _ {L _ {k}} ^ {t - k}, \hat {o} _ {L _ {k}} ^ {t - k}, & \text{otherwise} \end{array} \right. \tag {9}
146
+ $$
147
+
148
+ Unlike FGD, AsyncFGD computes the activations and JVP values using parameters with different time delays, which can be summarized as $f\left(x_{i(t - 2K + 2)};w_{\mathcal{G}(0)}^{t - 2K + 2};w_{\mathcal{G}(1)}^{t - 2K + 3};\dots ;w_{\mathcal{G}(K - 1)}^{t - K + 1}\right)$ . A detailed illustration of AsyncFGD with $K = 3$ is shown in Appendix G.2. We summarize the proposed algorithm in Algorithm 1, using one sampled tangent vector per iteration as an example.
149
+
150
+ Algorithm 1 AsyncFGD-SGD
151
+ Initialize: Stepsize sequence $\{\gamma_t\}_{t = K - 1}^{T - 1}$ , weights $w^0 = \left[w_{\mathcal{G}(0)}^0,\dots ,w_{\mathcal{G}(K - 1)}^0\right]\in \mathbb{R}^d$
152
+ 1: for $t = 0,1,\dots ,T - 1$ do
153
+ 2: for $k = 0,1,\dots ,K - 1$ in parallel do
154
+ 3: Read $\hat{h}_{L_{k - 1}}^{t - k},\hat{o}_{L_{k - 1}}^{t - k}$ from storage if $k\neq 0$
155
+ 4: Compute $\hat{h}_{L_k}^{t - k},\hat{o}_{L_k}^{t - k}$
156
+ 5: Send $\hat{h}_{L_k}^{t - k},\hat{o}_{L_k}^{t - k}$ to next worker's storage if $k\neq K - 1$
157
+ 6: end for
158
+ 7: Broadcast $\hat{o}_{L_{K - 1}}^{t - K + 1}$
159
+ 8: for $k = 0,1,\dots ,K - 1$ in parallel do
160
+ 9: Compute $\Delta w_{\mathcal{G}(k)}^{t - K + 1} = \hat{o}_{L_{K - 1}}^{t - K + 1}u_{w_{\mathcal{G}(k)}}^{t - K + 1}$
161
+ 10: Update $w_{\mathcal{G}(k)}^{t - K + 2} = w_{\mathcal{G}(k)}^{t - K + 1} - \gamma_{t - K + 1}\Delta w_{\mathcal{G}(k)}^{t - K + 1}$
162
+ 11: end for
163
+ 12: end for
164
+
165
+ Additionally, the procedure in line 5 can overlap with the operations in lines 7, 9, and 10. We can also apply this approximation algorithm to other gradient-based methods such as Adam [18] with little modification. Details can be found in Appendix G.1.
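+ The schedule realized by Algorithm 1 can be traced with a few lines of bookkeeping. The sketch below (plain Python, purely illustrative) prints which sample and which parameter version each module touches at every timestamp according to Eq.(6)-(8); it performs no real parallel computation.
+
+ ```python
+ # Trace of the AsyncFGD schedule: at timestamp t, module k works on sample x_{i(t-k)}
+ # using parameter version w^{t-2K+k+2}; afterwards all modules update w^{t-K+1} -> w^{t-K+2}.
+ # Indices below 0 fall back to the initial state, mirroring Eq.(9).
+ K, T = 3, 6
+ for t in range(T):
+     for k in range(K):                        # executed on K workers in parallel in practice
+         sample = t - k
+         version = max(t - 2 * K + k + 2, 0)   # stale parameter version used by module k
+         if sample >= 0:
+             print(f"t={t}  module {k}: sample x_{sample}, params w^{version}")
+     if t - K + 1 >= 0:                        # update once the last module has broadcast o_L
+         print(f"t={t}  update: w^{t - K + 1} -> w^{t - K + 2}")
+ ```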
166
+
167
+ Tangent checkpoint. However, some workers, especially those closer to the input, would otherwise have to keep several tangent vectors in memory while waiting for the corresponding update signal. To tackle this issue, we use tangent checkpointing, i.e., storing the seed of each tangent vector and reproducing the vector in the update stage.
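+ A minimal sketch of this idea (assuming PyTorch; the seed and shape are arbitrary): the worker keeps only an integer seed during the forward pass and regenerates the identical tangent once the update signal arrives.
+
+ ```python
+ # Tangent checkpointing: store the RNG seed instead of the tangent vector itself.
+ import torch
+
+ def sample_tangent(seed: int, shape) -> torch.Tensor:
+     g = torch.Generator().manual_seed(seed)
+     return torch.randn(shape, generator=g)
+
+ seed, shape = 1234, (32, 10)
+ u_forward = sample_tangent(seed, shape)   # used for the JVP during the forward pass
+ u_update = sample_tangent(seed, shape)    # reproduced several timestamps later for the update
+ assert torch.equal(u_forward, u_update)   # identical tangent, nothing stored in between
+ ```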
168
+
169
+ Integration with Efficient Transfer Learning Although AsyncFGD offers several advantages over backpropagation, it shares a common challenge with random Zeroth-Order Optimization and Evolution Strategy methods: the variance of the approximation increases with the dimension of random perturbations[25]. Reducing the learning rate can help but may result in slower convergence. However, we observe that deploying models on edge devices typically involves fine-tuning rather
170
+
171
+ than training from scratch. Our method can flexibly incorporate the principles of efficient transfer learning by introducing a scaling factor $\alpha \in [0,1]$ to the randomly sampled tangent:
172
+
173
+ $$
174
+ u _ {w _ {l}} ^ {\prime} = u _ {w _ {l}} \cdot \alpha_ {w _ {l}}
175
+ $$
176
+
177
+ The modified $u_{w_l}'$ still supports an approximation of the true gradient, with the expectation of the modified estimator being $\alpha_{w_l}^2 \nabla f(x; w_l)$ . When $\alpha_{w_l}$ is set to 0, the corresponding parameter is "frozen," resulting in no perturbation or updates and a transparent transmission of JVP values. By adjusting $\alpha$ to various values, we can either fix or reduce the influence and learning of specific layers, aligning our approach with the objectives of efficient transfer learning.
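+ A short sketch of how the scaling factor enters the update (illustrative names; the scalar `o_L` stands in for the JVP value produced by the forward pass):
+
+ ```python
+ # Per-tensor scaling of the sampled tangent: alpha = 0 freezes a tensor
+ # (zero perturbation and zero update), while 0 < alpha < 1 damps its learning.
+ import torch
+
+ params = {"backbone.weight": torch.randn(32, 10), "classifier.weight": torch.randn(2, 32)}
+ alphas = {"backbone.weight": 0.0, "classifier.weight": 1.0}          # freeze the backbone
+
+ u = {k: torch.randn_like(v) * alphas[k] for k, v in params.items()}  # scaled tangents u'
+ # ... the forward pass / JVP with tangents u would yield the scalar o_L ...
+ o_L, lr = torch.tensor(0.5), 1e-2            # placeholder JVP value and stepsize
+ params = {k: v - lr * o_L * u[k] for k, v in params.items()}
+ # The backbone receives a zero tangent and a zero update; it only relays JVP values.
+ ```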
178
+
179
+ # 4.3 Acceleration of AsyncFGD
180
+
181
+ When $K = 1$ , AsyncFGD reduces to vanilla FGD without any time delay in the parameters. When $K \geq 2$ , we can distribute the network across multiple workers. Figure 2 shows the computation time of different algorithms under ideal conditions (i.e., the network is evenly distributed and communication is overlapped by computation and updates). $T_{F}$ denotes the time for the forward pass and $T_{U}$ the time for updates. It is easy to see that AsyncFGD can fully utilize the computational resources, thus achieving considerable speedup.
182
+
183
+ <table><tr><td>Method</td><td>Computation Time</td></tr><tr><td>FGD</td><td>\( T_F + T_U \)</td></tr><tr><td>Sync FGD</td><td>\( T_F + T_U \)</td></tr><tr><td>Async FGD</td><td>\( T_F / K + T_U \)</td></tr></table>
184
+
185
+ Figure 2: Comparison of computation time and training process when the network is deployed across $K$ workers. Since $T_{F}$ is much larger than $T_{U}$ , AsyncFGD can achieve considerable speedup.
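+ As a back-of-the-envelope illustration of the ideal case in Figure 2 (hypothetical timings):
+
+ ```python
+ # Ideal per-iteration time once the pipeline is full: FGD pays T_F + T_U,
+ # while AsyncFGD with K evenly loaded workers pays roughly T_F / K + T_U.
+ T_F, T_U, K = 30.0, 2.0, 3          # hypothetical milliseconds and worker count
+ t_fgd, t_async = T_F + T_U, T_F / K + T_U
+ print(f"FGD: {t_fgd} ms, AsyncFGD: {t_async} ms, ~{t_fgd / t_async:.2f}x speedup")
+ ```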
186
+
187
+ # 5 Convergence Analysis
188
+
189
+ In this section, we provide the convergence guarantee of Algorithm 1 to critical points in a non-convex setup. We first make the following basic assumptions for nonconvex optimization:
190
+
191
+ Assumption 5.1. The gradient of $f(w)$ is Lipschitz continuous with Lipschitz constant $L > 0$ , i.e.,
192
+
193
+ $$
194
+ \forall x, y \in \mathbb {R} ^ {d}, \| \nabla f (x) - \nabla f (y) \| \leq L \| x - y \|.
195
+ $$
196
+
197
+ Assumption 5.2. The second moment of the stochastic gradient is bounded, i.e., there exists a constant $M \geq 0$ such that for any sample $x_{i}$ and $\forall w \in \mathbb{R}^{d}$ :
198
+
199
+ $$
200
+ \| \nabla f _ {x _ {i}} (w) \| ^ {2} \leq M.
201
+ $$
202
+
203
+ Lemma 5.3 (Mean and Variance). Let $t' = t - K + 1$ and define diagonal matrices $\mathbf{I}_0, \dots, \mathbf{I}_k, \dots, \mathbf{I}_{K-1} \in \mathbb{R}^{d \times d}$ such that the principal diagonal elements of $\mathbf{I}_k$ corresponding to indices in $\mathcal{G}(k)$ are 1, and all other principal diagonal elements of $\mathbf{I}_k$ are 0. Then we can obtain the mean value and the variance of the forward gradient as follows,
204
+
205
+ $$
206
+ \mathbb {E} _ {u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}}} \left(\hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}}\right) = \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right), \tag {10}
207
+ $$
208
+
209
+ $$
210
+ \mathbb {E} _ {u _ {w} ^ {t ^ {\prime}}} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\| ^ {2} \leq (d + 4) \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k), x _ {i \left(t ^ {\prime}\right)}} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2}. \tag {11}
211
+ $$
212
+
213
+ Remark 5.4. Note that, from modules 0 to $K - 1$ , the corresponding time delays are from $K - 1$ to 0. Specifically, in timestamp $t'$ , when $k = K - 1$ , $\mathbb{E}_{u_{w_{\mathcal{G}(K - 1)}}^{t'}}(o_L^{t'} \cdot u_{w_{\mathcal{G}(K - 1)}}^{t'}) = \nabla f_{\mathcal{G}(K - 1),x_{i(t')}}(w^{t'})$ indicates that the forward gradient in module $K - 1$ is an unbiased estimation of the up-to-date gradient.
214
+
215
+ Under Assumption 5.1 and 5.2, we obtain the following descent lemma about the objective function value:
216
+
217
+ Lemma 5.5. Assume that Assumption 5.1 and 5.2 hold. In addition, let $t' = t - K + 1$ , $\sigma := \max_{t'} \frac{\gamma_{\max\{0,t' - K + 1\}}}{\gamma_{t'}}$ , $M_K = KM + \sigma K^4 M$ and choose $\gamma_{t'} \leq \frac{1}{L}$ . The iterations in Algorithm 1 satisfy the following descent property in expectation, $\forall t' \in \mathbb{N}$ :
218
+
219
+ $$
220
+ \mathbb {E} \left[ f \left(w ^ {t ^ {\prime} + 1}\right) \right] - f \left(w ^ {t ^ {\prime}}\right) \leq - \frac {\gamma_ {t ^ {\prime}}}{2} \left\| \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} + 4 (d + 4) L \gamma_ {t ^ {\prime}} ^ {2} M _ {K}, \tag {12}
221
+ $$
222
+
223
+ Theorem 5.6. Assume Assumption 5.1 and 5.2 hold and the fixed stepsize sequence $\{\gamma_{t'}\}$ satisfies $\gamma_{t'} = \gamma \leq \frac{1}{L}, \forall t' \in \{0,1,\dots,T-1\}$ . In addition, we assume $w^*$ to be the optimal solution to $f(w)$ and let $t' = t - K + 1$ , $\sigma = 1$ such that $M_K = KM + K^4 M$ . Then, the output of Algorithm 1 satisfies that:
224
+
225
+ $$
226
+ \frac {1}{T} \sum_ {t ^ {\prime} = 0} ^ {T - 1} \mathbb {E} \| \nabla f \left(w ^ {t ^ {\prime}}\right) \| ^ {2} \leq \frac {2 \left(f \left(w ^ {0}\right) - f \left(w ^ {*}\right)\right)}{\gamma T} + 4 (d + 4) L \gamma M _ {K}. \tag {13}
227
+ $$
228
+
229
+ Theorem 5.7. Assume Assumption 5.1 and 5.2 hold and the diminishing stepsize sequence $\{\gamma_{t'}\}$ satisfies $\gamma_{t'} = \frac{\gamma_0}{t' + 1} \leq \frac{1}{L}$ . In addition, we assume $w^*$ to be the optimal solution to $f(w)$ and let $t' = t - K + 1$ , $\sigma = K$ such that $M_K = KM + K^5 M$ . Let $\Gamma_T = \sum_{t'=0}^{T-1} \gamma_{t'}$ , the output of Algorithm 1 satisfies that:
230
+
231
+ $$
232
+ \frac {1}{\Gamma_ {T}} \sum_ {t ^ {\prime} = 0} ^ {T - 1} \gamma_ {t ^ {\prime}} \mathbb {E} \| \nabla f \left(w ^ {t ^ {\prime}}\right) \| ^ {2} \leq \frac {2 \left(f \left(w ^ {0}\right) - f \left(w ^ {*}\right)\right)}{\Gamma_ {T}} + \frac {\sum_ {t ^ {\prime} = 0} ^ {T - 1} 4 (d + 4) \gamma_ {t ^ {\prime}} ^ {2} L M _ {K}}{\Gamma_ {T}} \tag {14}
233
+ $$
234
+
235
+ Remark 5.8. Since the stepsize sequence $\gamma_{t} = \frac{\gamma_{0}}{t + 1}$ satisfies $\lim_{T\to \infty}\sum_{t = 0}^{T - 1}\gamma_t = \infty$ and $\lim_{T\to \infty}\sum_{t = 0}^{T - 1}\gamma_t^2 < \infty$ , the RHS of Eq.(14) converges to 0 as $T\to \infty$ . Let $w^{s}$ be randomly chosen from $\{w^{t'}\}_{t' = 0}^{T - 1}$ with probabilities proportional to $\{\gamma_{t'}\}_{t' = 0}^{T - 1}$ ; then we have $\lim_{s\to \infty}\mathbb{E}\| \nabla f(w^s)\|^2 = 0$ .
236
+
237
+ # 6 Experiments
238
+
239
+ This section embarks on a meticulous examination of our proposed method, AsyncFGD. We assess its performance through three distinct facets: memory consumption, acceleration rate, and accuracy. We initiate our analysis by outlining our experimental setup. To validate the efficacy of applying directional derivatives and utilizing module-wise stale parameters, we contrast AsyncFGD with an array of alternative methods encompassing backpropagation, conventional FGD, and other backpropagation-free algorithms. Subsequently, our focus shifts towards scrutinizing the potential of AsyncFGD within the sphere of efficient transfer learning, conducting experiments on prevalent efficient networks. Memory consumption represents another cardinal aspect that we explore, benchmarking AsyncFGD against popular architectures and unit layers like fully-connected layers, RNN cells, and convolutional layers. Concluding our empirical investigation, we assess the speedup of our method relative to other parallel strategies under a diverse set of conditions across multiple platforms.
240
+
241
+ # 6.1 Experimental Setup
242
+
243
+ Methods. We contrast our proposal's memory footprint with Backpropagation, Sublinear [3], Backpropagation Through Time, and Memory-Efficient BPTT [7]. Accuracy-wise, we compare with backpropagation-free methods: Feedback Alignment (FA) [27], Direct Feedback Alignment (DFA) [26], Direct Random Target Projection (DRTP), and Error-sign-based Direct Feedback Alignment (sDFA) [6]. We also apply parallelization to FGD through FGD-DP (data parallelism) and FGD-MP (model parallelism).
244
+
245
+ Platform. Experiments utilize Python 3.8 and PyTorch, primarily on NVIDIA's AGX Orin. Additional results on alternate platforms are in the appendix.
246
+
247
+ Training. Batch size is 64 unless noted. The optimal learning rate (chosen from $\{1e - 5,1e - 4,1e - 3,1e - 2\}$ with Adam optimizer [18]) is based on validation performance. The parameter $\alpha$ is initially set to 1 for the classifier, with others at 0 for the first 10 epochs. Subsequently, $\alpha$ is gradually increased to 0.15 for specific layers. More details are in the appendix.
248
+
249
+ # 6.2 Effectiveness of Directional Derivative and Asynchronism
250
+
251
+ This section documents our experimentation on the effectiveness of using random directional derivatives to approximate the true gradient, contrasting this approach with other BP-free algorithms. Furthermore, we demonstrate that consistency remains intact when using module-wise stale parameters to uncouple the dependencies, comparing AsyncFGD with vanilla FGD. Results in Table 1 indicate that AsyncFGD produces results closely aligned with vanilla FGD.
252
+
253
+ Table 1: Comparison of AsyncFGD with other BP-free methods. ConvS and FCS refer to a small convolutional network and a fully-connected network, while ConvL and FCL refer to their slightly bigger counterparts. Different activation functions are marked as subscripts. Details of the network architectures can be found in Appendix H.2
254
+
255
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Model</td><td rowspan="2">BP</td><td colspan="6">BP-free</td></tr><tr><td>FA</td><td>DFA</td><td>sDFA</td><td>DRTP</td><td>FGD</td><td>Async</td></tr><tr><td rowspan="8">MNIST</td><td>ConvS\( _{Tanh} \)</td><td>98.7</td><td>88.1</td><td>95.9</td><td>96.8</td><td>95.4</td><td>94.6</td><td>94.4</td></tr><tr><td>ConvS\( _{ReLU} \)</td><td>99.2</td><td>12.0</td><td>11.5</td><td>13.8</td><td>13.6</td><td>95.5</td><td>95.5</td></tr><tr><td>ConvL\( _{Tanh} \)</td><td>99.3</td><td>8.7</td><td>92.2</td><td>93.4</td><td>92.6</td><td>94.4</td><td>94.2</td></tr><tr><td>ConvL\( _{ReLU} \)</td><td>99.3</td><td>89.7</td><td>93.0</td><td>93.1</td><td>93.2</td><td>93.0</td><td>93.2</td></tr><tr><td>FCS\( _{Tanh} \)</td><td>98.9</td><td>83.2</td><td>95.6</td><td>94.2</td><td>94.5</td><td>94.4</td><td>94.3</td></tr><tr><td>FCS\( _{ReLU} \)</td><td>98.5</td><td>8.8</td><td>10.0</td><td>10.8</td><td>9.9</td><td>93.6</td><td>93.7</td></tr><tr><td>FCL\( _{Tanh} \)</td><td>98.8</td><td>89.8</td><td>93.0</td><td>92.0</td><td>92.4</td><td>95.2</td><td>95.4</td></tr><tr><td>FCL\( _{ReLU} \)</td><td>99.3</td><td>86.3</td><td>93.8</td><td>94.3</td><td>94.1</td><td>94.8</td><td>95.3</td></tr><tr><td rowspan="8">CIFAR-10</td><td>ConvS\( _{Tanh} \)</td><td>69.1</td><td>33.4</td><td>56.5</td><td>57.4</td><td>57.6</td><td>46.5</td><td>46.0</td></tr><tr><td>ConvS\( _{ReLU} \)</td><td>69.3</td><td>12.0</td><td>11.5</td><td>10.8</td><td>12.0</td><td>40.0</td><td>39.7</td></tr><tr><td>ConvL\( _{Tanh} \)</td><td>71.0</td><td>40.4</td><td>42.0</td><td>43.6</td><td>44.1</td><td>47.3</td><td>47.3</td></tr><tr><td>ConvL\( _{ReLU} \)</td><td>71.2</td><td>40.4</td><td>42.0</td><td>43.6</td><td>44.1</td><td>44.2</td><td>44.1</td></tr><tr><td>FCS\( _{Tanh} \)</td><td>47.8</td><td>46.2</td><td>46.4</td><td>46.0</td><td>46.2</td><td>42.0</td><td>42.3</td></tr><tr><td>FCS\( _{ReLU} \)</td><td>52.7</td><td>10.2</td><td>12.0</td><td>10.0</td><td>10.3</td><td>43.7</td><td>43.6</td></tr><tr><td>FCL\( _{Tanh} \)</td><td>54.4</td><td>17.4</td><td>44.0</td><td>44.3</td><td>45.5</td><td>47.2</td><td>47.2</td></tr><tr><td>FCL\( _{ReLU} \)</td><td>55.3</td><td>40.4</td><td>42.0</td><td>43.6</td><td>44.1</td><td>46.0</td><td>46.7</td></tr></table>
256
+
257
+ Notably, FGD and AsyncFGD yield the best outcomes when the network is more complex, or when we employ ReLU as the activation function without batch-norm layers (results from $\mathrm{ConvS}_{ReLU}$ and $\mathrm{FCS}_{ReLU}$ ), situations where FA, DFA, and sDFA often fail to propagate an effective error signal. Backpropagation results are furnished solely as a reference for the true gradient. However, when the network grows larger, all BP-free algorithms grapple with variance, and larger networks bring only minor improvements relative to BP. We address this challenge through efficient transfer learning in the subsequent section.
258
+
259
+ ![](images/2f140229e1e31a99c6bd6551cf8e0c2f01803c208e82ad20f4041457e2cd6caf.jpg)
260
+ (a) FC Layers
261
+
262
+ ![](images/23cb355d178e5c76468dc18d639b0fe553dd3fba58ce2450fa8288b755a0284d.jpg)
263
+ (b) RNN Cell
264
+ Figure 3: Memory footprint comparison across methods. $\mathrm{Async}^{\dagger}$ is AsyncFGD without tangent checkpoint while $\mathrm{Async}^{*}$ refers to AsyncFGD using efficient training strategy. (a) FC layer memory vs. layer count; (b) RNN memory vs. sequential length; (c) Accuracy vs. memory on efficient architectures.
265
+
266
+ ![](images/a95fb523cfef0683ff5582bc555ca23c3fd60b9ecfaac4944d80f67627736654.jpg)
267
+ (c) Popular Efficient Architectures
268
+
269
+ # 6.3 Efficacy of Efficient Transfer Learning
270
+
271
+ In this segment, we delve into the efficacy of combining AsyncFGD with efficient transfer learning, focusing on popular architectures like ResNet-18 (Res-18) [9], MobileNet (Mobile) [11], MnasNet (Mnas) [36], and ShuffleNet (Shuffle) [45], together with their lightweight counterparts. The models are fine-tuned from weights pre-trained on ImageNet. AsyncFGD* denotes AsyncFGD utilizing the efficient transfer learning strategy. As can be observed from Table 2, the application of an efficient transfer learning strategy brings about a substantial performance enhancement, yielding superior results compared to training with perturbation on the full model. Ablation studies on $\alpha$ provided in Appendix I.1 also show that, compared to optimizing a subset of the model, FGD on the full model suffers more from variance.
274
+
275
+ # 6.4 Memory Footprint
276
+
277
+ As illustrated in Fig 3(c), when we plot accuracy against memory consumption, it is evident that AsyncFGD employs approximately one-third of the memory while maintaining comparable accuracy. Further experimentation on memory consumption with respect to the computing unit reveals, as shown in Fig 3(a) and Fig 3(b), that the additional memory consumption in AsyncFGD mainly serves as placeholders for random tangents, while the JVP computation consumes a negligible amount of additional memory. Memory consumption of other basic units, such as CNN and batch-norm layers, is provided in the Appendix.
278
+
279
+ # 6.5 Acceleration on Input Stream
280
+
281
+ In this final section, we assess the acceleration of ResNet-18 with varying $K$ . In this setting, the batch size is set to 4 to better reflect streaming input on edge devices. As demonstrated in Figure 4, AsyncFGD, by detaching dependencies in the forward pass, can outperform other parallel strategies in terms of acceleration rate. While pipeline parallelism is fast, the locking within the forward pass still induces synchronization overhead, ultimately leading to lower hardware utilization and speed. Results for different network architectures and other platforms such as CPU and GPU, as well as the more general case of larger batch sizes, can be found in Appendix I.2.
282
+
283
+ Table 2: Results for different algorithms in transfer learning. $\text{Async}^*$ refers to using efficient transfer learning strategy.
284
+
285
+ <table><tr><td>Dataset</td><td>Model</td><td>BP</td><td>FGD</td><td>Async</td><td>\( \text{Async}^* \)</td></tr><tr><td rowspan="5">MNIST</td><td>Res-18</td><td>98.5</td><td>90.6</td><td>90.4</td><td>96.4</td></tr><tr><td>Mobile</td><td>99.2</td><td>89.3</td><td>88.4</td><td>97.1</td></tr><tr><td>Efficient</td><td>99.2</td><td>90.4</td><td>90.1</td><td>95.9</td></tr><tr><td>Mnas</td><td>98.9</td><td>86.6</td><td>86.3</td><td>96.0</td></tr><tr><td>Shuffle</td><td>99.0</td><td>85.8</td><td>85.8</td><td>96.4</td></tr><tr><td rowspan="5">CIFAR</td><td>Res-18</td><td>93.0</td><td>71.2</td><td>71.2</td><td>87.8</td></tr><tr><td>Mobile</td><td>94.0</td><td>72.3</td><td>72.2</td><td>91.1</td></tr><tr><td>Efficient</td><td>94.9</td><td>70.2</td><td>70.1</td><td>90.2</td></tr><tr><td>Mnas</td><td>84.2</td><td>68.8</td><td>68.5</td><td>78.9</td></tr><tr><td>Shuffle</td><td>89.9</td><td>72.5</td><td>72.5</td><td>82.0</td></tr><tr><td rowspan="5">FMNIST</td><td>Res-18</td><td>94.2</td><td>80.2</td><td>80.2</td><td>88.0</td></tr><tr><td>Mobile</td><td>93.2</td><td>82.3</td><td>83.1</td><td>90.6</td></tr><tr><td>Efficient</td><td>92.8</td><td>79.8</td><td>79.8</td><td>90.4</td></tr><tr><td>Mnas</td><td>92.1</td><td>77.1</td><td>77.0</td><td>87.0</td></tr><tr><td>Shuffle</td><td>92.8</td><td>78.4</td><td>78.4</td><td>87.3</td></tr></table>
286
+
287
+ ![](images/00bc5242d874f440572f91c4008f4c39e56684ff76174a024f00badb1dd68586.jpg)
288
+ (a) Cluster with 4 GPUs
289
+
290
+ ![](images/5fafe4d51ed4d444b11392149fa3060167187e5794b355bc6237490350649fb0.jpg)
291
+ (b) Single Embedded Device
292
+ Figure 4: Comparison for acceleration of different parallel methods.
293
+
294
+ # 7 Limitations and Discussion
295
+
296
+ While AsyncFGD offers computational benefits, it is not without limitations. A key drawback is its inferior performance compared to traditional Backpropagation (BP). This performance disparity is largely attributed to the use of randomly sampled directional derivatives in the forward gradient computation, aligning AsyncFGD with Zeroth-Order (ZO) optimization methods and evolutionary strategies. This sampling introduces a significant source of gradient variance, a challenge that is part of a larger problem in the field of stochastic optimization. However, we take encouragement from recent advancements aimed at reducing this variance, some of which have even facilitated the training of large-scale models [31].
299
+
300
+ Another constraint pertains to the availability of idle workers on edge devices—a condition that is not universally applicable given the wide variety of edge computing environments. These can span from IoT chips with limited computational resources, where even deploying a standard deep learning model can be problematic, to high-capacity micro-computers used in autonomous vehicles.
301
+
302
+ Nevertheless, our experimental findings suggest that AsyncFGD is particularly beneficial for specific edge computing scenarios. In such settings, it may serve as a viable alternative for reducing memory usage while fully leveraging available computational resources.
303
+
304
+ # 8 Conclusion
305
+
306
+ In the present paper, we introduce AsyncFGD, an innovative approach designed to shatter the shackles of locking within the forward pass in FGD. This is achieved by incorporating module-wise stale parameters while retaining the advantage of minimized memory consumption. In the theoretical segment, we offer a lucid analysis of this partially ordered staleness, demonstrating that the proposed method converges to critical points even for non-convex problems. We further extend our algorithm to efficient transfer learning by introducing a scale parameter. Our experiments reveal that sublinear acceleration can be accomplished without compromising accuracy, along with a substantial performance gain when utilizing the efficient transfer learning strategy. While the exploration of large models employing extensive datasets will undoubtedly continue to rely on backpropagation [10], we assert that the potential of asynchronous algorithms predicated on forward computation should not be overlooked. They offer a promising avenue for fully harnessing limited resources in on-device learning scenarios.
307
+
308
+ # References
309
+
310
+ [1] Atilim Güneş Baydin, Barak A Pearlmutter, Don Syme, Frank Wood, and Philip Torr. Gradients without backpropagation. arXiv preprint arXiv:2202.08587, 2022.
311
+ [2] Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. Tinytl: Reduce activations, not trainable parameters for efficient on-device learning. arXiv preprint arXiv:2007.11622, 2020.
312
+ [3] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174, 2016.
313
+ [4] David Clark, LF Abbott, and SueYeon Chung. Credit assignment through broadcasting a global error vector. Advances in Neural Information Processing Systems, 34:10053-10066, 2021.
314
+ [5] Yin Cui, Yang Song, Chen Sun, Andrew Howard, and Serge Belongie. Large scale fine-grained categorization and domain-specific transfer learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4109-4118, 2018.
315
+ [6] Charlotte Frenkel, Martin Lefebvre, and David Bol. Learning without feedback: Fixed random learning signals allow for feedforward training of deep neural networks. Frontiers in neuroscience, 15:629892, 2021.
316
+ [7] Audrunas Gruslys, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. Memory-efficient backpropagation through time. Advances in Neural Information Processing Systems, 29, 2016.
317
+ [8] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International conference on machine learning, pages 1737-1746. PMLR, 2015.
318
+ [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
319
+ [10] Geoffrey Hinton. The forward-forward algorithm: Some preliminary investigations. arXiv preprint arXiv:2212.13345, 2022.
320
+ [11] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam. Searching for mobilenetv3, 2019.
321
+ [12] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in neural information processing systems, 32, 2019.
322
+ [13] Zhouyuan Huo, Bin Gu, and Heng Huang. Training neural networks using features replay. Advances in Neural Information Processing Systems, 31, 2018.
323
+ [14] Zhouyuan Huo, Bin Gu, Heng Huang, et al. Decoupled parallel backpropagation with convergence guarantee. In International Conference on Machine Learning, pages 2098-2106. PMLR, 2018.
324
+ [15] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2704-2713, 2018.
325
+ [16] Paras Jain, Ajay Jain, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Kurt Keutzer, Ion Stoica, and Joseph E. Gonzalez. Checkmate: Breaking the memory wall with optimal tensor rematerialization, 2020.
326
+ [17] Steven J Johnston and Simon J Cox. The raspberry pi: A technology disrupter, and the enabler of dreams, 2017.
327
+
328
+ [18] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
329
+ [19] Skanda Koppula, Lois Orosa, A Giray Yaqlikci, Roknoddin Azizi, Taha Shahroodi, Konstantinos Kanellopoulos, and Onur Mutlu. Eden: Enabling energy-efficient, high-performance deep neural network inference using approximate dram. In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, pages 166-181, 2019.
330
+ [20] Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2661-2671, 2019.
331
+ [21] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.
332
+ [22] Hao Li, Soham De, Zheng Xu, Christoph Studer, Hanan Samet, and Tom Goldstein. Training quantized nets: A deeper understanding. Advances in Neural Information Processing Systems, 30, 2017.
333
+ [23] Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R Devanur, Gregory R Ganger, Phillip B Gibbons, and Matei Zaharia. Pipedream: generalized pipeline parallelism for dnn training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, pages 1-15, 2019.
334
+ [24] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, et al. Efficient large-scale language model training on GPU clusters using Megatron-LM. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1-15, 2021.
335
+ [25] Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527-566, 2017.
336
+ [26] Arild Nøkland. Direct feedback alignment provides learning in deep neural networks. Advances in neural information processing systems, 29, 2016.
337
+ [27] Paul Orsmond and Stephen Merry. Feedback alignment: effective and ineffective links between tutors' and students' understanding of coursework feedback. Assessment & Evaluation in Higher Education, 36(2):125-136, 2011.
338
+ [28] Barak A Pearlmutter. Fast exact multiplication by the hessian. Neural computation, 6(1):147-160, 1994.
339
+ [29] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1-16. IEEE, 2020.
340
+ [30] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. Advances in neural information processing systems, 24, 2011.
341
+ [31] Mengye Ren, Simon Kornblith, Renjie Liao, and Geoffrey Hinton. Scaling forward gradient with local losses. arXiv preprint arXiv:2210.03310, 2022.
342
+ [32] Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
343
+ [33] David Silver, Anirudh Goyal, Ivo Danihelka, Matteo Hessel, and Hado van Hasselt. Learning by directional gradient descent. In International Conference on Learning Representations, 2022.
344
+ [34] Kenneth O Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary computation, 10(2):99-127, 2002.
345
+ [35] Benoit Steiner, Mostafa Elhoushi, Jacob Kahn, and James Hegarty. Olla: Optimizing the lifetime and location of arrays to reduce the memory usage of neural networks, 2022.
346
+
347
+ [36] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V. Le. Mnasnet: Platform-aware neural architecture search for mobile, 2019.
348
+ [37] Robert Edwin Wengert. A simple automatic derivative evaluation program. Communications of the ACM, 7(8):463-464, 1964.
349
+ [38] Darrell Whitley, Stephen Dominic, Rajarshi Das, and Charles W Anderson. Genetic reinforcement learning for neurocontrol problems. Machine Learning, 13:259-284, 1993.
350
+ [39] Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270-280, 1989.
351
+ [40] An Xu, Zhouyuan Huo, and Heng Huang. On the acceleration of deep learning model parallelism with staleness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2088-2097, 2020.
352
+ [41] Bowen Yang, Jian Zhang, Jonathan Li, Christopher Ré, Christopher Aberger, and Christopher De Sa. Pipemare: Asynchronous pipeline parallel dnn training. Proceedings of Machine Learning and Systems, 3:269-296, 2021.
353
+ [42] Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, and Chris De Sa. Swalp: Stochastic weight averaging in low precision training. In International Conference on Machine Learning, pages 7015-7024. PMLR, 2019.
354
+ [43] Yukuan Yang, Lei Deng, Shuang Wu, Tianyi Yan, Yuan Xie, and Guoqi Li. Training high-performance and large-scale deep neural networks with full 8-bit integers. *Neural Networks*, 125:70–82, 2020.
355
+ [44] Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199, 2021.
356
+ [45] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices, 2017.
357
+ [46] Qihua Zhou, Zhihao Qu, Song Guo, Boyuan Luo, Jingcai Guo, Zhenda Xu, and Rajendra Akerkar. On-device learning systems for edge intelligence: A software and hardware synergy perspective. IEEE Internet of Things Journal, 8(15):11916-11934, 2021.
358
+
359
+ # A Appendix
360
+
361
+ The organization of the Appendix is as follows: Appendix sections B to F provide detailed proofs of the lemmas and theorems presented in the main text. This is followed by additional insights, including operational specifics of working with Adam and details on how AsyncFGD can be extended for parallel processing on recursive networks, covered in Appendix G. Lastly, Appendix H and I provide a comprehensive view of the training process and additional experimental results, respectively.
362
+
363
+ # B Proof of Lemma 3.1
364
+
365
+ According to the update rule of Eq.(3), (4), we have
366
+
367
+ $$
368
+ h _ {1} = F _ {1} \left(h _ {0}, w _ {1}\right) = F _ {1} \left(x, w _ {1}\right)
369
+ $$
370
+
371
+ $$
372
+ o _ {1} = J _ {F _ {1}} \left(h _ {0}, w _ {1}\right) \left[ o _ {0} ^ {\top}, u _ {w _ {1}} ^ {\top} \right] ^ {\top} = J _ {F _ {1}} \left(h _ {0}\right) o _ {0} + J _ {F _ {1}} \left(w _ {1}\right) u _ {w _ {1}} = J _ {F _ {1}} \left(w _ {1}\right) u _ {w _ {1}}
373
+ $$
374
+
375
+ $$
376
+ h _ {2} = F _ {2} \left(h _ {1}, w _ {2}\right)
377
+ $$
378
+
379
+ $$
380
+ \begin{array}{l} o _ {2} = J _ {F _ {2}} \left(h _ {1}, w _ {2}\right) \left[ o _ {1} ^ {\intercal}, u _ {w _ {2}} ^ {\intercal} \right] ^ {\intercal} = J _ {F _ {2}} \left(h _ {1}\right) o _ {1} + J _ {F _ {2}} \left(w _ {2}\right) u _ {w _ {2}} = J _ {F _ {2}} \left(h _ {1}\right) J _ {F _ {1}} \left(w _ {1}\right) u _ {w _ {1}} + J _ {F _ {2}} \left(w _ {2}\right) u _ {w _ {2}} \\ = J _ {F _ {2}} \left(w _ {1}\right) u _ {w _ {1}} + J _ {F _ {2}} \left(w _ {2}\right) u _ {w _ {2}} \\ \end{array}
381
+ $$
382
+
383
+ $\vdots$
384
+
385
+ $$
386
+ o _ {L} = \sum_ {l = 1} ^ {L} J _ {F _ {L}} (w _ {l}) u _ {w _ {l}}
387
+ $$
388
+
389
+ Then for any $l \in \{1, 2, \dots, L\}$ , take expectation with respect to $u_{w_l}$ , we have
390
+
391
+ $$
392
+ \mathbb {E} _ {u _ {w _ {l}}} \left(o _ {L} \cdot u _ {w _ {l}}\right) = \mathbb {E} _ {u _ {w _ {l}}} \left[ \left(J _ {F _ {L}} (w _ {l}) u _ {w _ {l}}\right) \cdot u _ {w _ {l}} + \sum_ {k \neq l} \left(J _ {F _ {L}} (w _ {k}) u _ {w _ {k}}\right) \cdot u _ {w _ {l}} \right]
393
+ $$
394
+
395
+ Note that
396
+
397
+ $$
398
+ \begin{array}{l} \mathbb {E} _ {u _ {w _ {l}}} \left[ \left(J _ {F _ {L}} (w _ {l}) u _ {w _ {l}}\right) \cdot u _ {w _ {l}} \right] = \mathbb {E} _ {u _ {w _ {l}}} \left(\left[ \begin{array}{c c c c} \frac {\partial J _ {F _ {L} , 1}}{\partial w _ {l , 1}} & \frac {\partial J _ {F _ {L} , 1}}{\partial w _ {l , 2}} & \dots & \frac {\partial J _ {F _ {L} , 1}}{\partial w _ {l , d _ {l}}} \\ \frac {\partial J _ {F _ {L} , 2}}{\partial w _ {l , 1}} & \frac {\partial J _ {F _ {L} , 2}}{\partial w _ {l , 2}} & \dots & \frac {\partial J _ {F _ {L} , 2}}{\partial w _ {l , d _ {l}}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac {\partial J _ {F _ {L} , d _ {h _ {L}}}}{\partial w _ {l , 1}} & \frac {\partial J _ {F _ {L} , d _ {h _ {L}}}}{\partial w _ {l , 2}} & \dots & \frac {\partial J _ {F _ {L} , d _ {h _ {L}}}}{\partial w _ {l , d _ {l}}} \end{array} \right] \left[ \begin{array}{c} u _ {w _ {l}, 1} \\ u _ {w _ {l}, 2} \\ \vdots \\ u _ {w _ {l}, d _ {l}} \end{array} \right] \cdot \left[ \begin{array}{c c c} u _ {w _ {l}, 1} & u _ {w _ {l}, 2} & \dots u _ {w _ {l}, d _ {l}} \end{array} \right]\right) \\ = \mathbb {E} _ {u _ {w _ {l}}} \left(\left[ \begin{array}{c c c} \sum_ {i = 1} ^ {d _ {l}} \frac {\partial J _ {F _ {L} , 1}}{\partial w _ {l , i}} \cdot u _ {w _ {l, i}} \\ \sum_ {i = 1} ^ {d _ {l}} \frac {\partial J _ {F _ {L} , 2}}{\partial w _ {l , i}} \cdot u _ {w _ {l, i}} \\ \vdots \\ \sum_ {i = 1} ^ {d _ {l}} \frac {\partial J _ {F _ {L} , d _ {h _ {L}}}}{\partial w _ {l , i}} \cdot u _ {w _ {l, i}} \end{array} \right] \cdot \left[ \begin{array}{c c c} u _ {w _ {l}, 1} & u _ {w _ {l}, 2} & \dots u _ {w _ {l}, d _ {l}} \end{array} \right]\right) \\ = \mathbb {E} _ {u _ {w _ {l}}} (D), \\ \end{array}
399
+ $$
400
+
401
+ where
402
+
403
+ $$
404
+ D _ {m, n} = \left(\sum_ {i = 1} ^ {d _ {l}} \frac {\partial J _ {F _ {L} , m}}{\partial w _ {l , i}} \cdot u _ {w _ {l, i}}\right) u _ {w _ {l, n}} = \frac {\partial J _ {F _ {L} , m}}{\partial w _ {l , n}} u _ {w _ {l, n}} ^ {2} + \sum_ {k \neq n} \frac {\partial J _ {F _ {L} , m}}{\partial w _ {l , k}} u _ {w _ {l, k}} u _ {w _ {l, n}}
405
+ $$
406
+
407
+ with $m\in \{1,2,\ldots ,d_{h_L}\} ,n\in \{1,2,\ldots ,d_l\}$ . Since each $u_{w_l}\sim \mathbb{N}(0,I)$ , we have
408
+
409
+ $$
410
+ \mathbb {E} _ {u _ {w _ {l}}} (D _ {m, n}) = \frac {\partial J _ {F _ {L , m}}}{\partial w _ {l , n}}, \quad \mathbb {E} _ {u _ {w _ {l}}} (D) = J _ {F _ {L}} (w _ {l}).
411
+ $$
412
+
413
+ Similarly, we can prove that $\mathbb{E}_{u_{w_l}}\left[\sum_{k\neq l}\left(J_{F_L}(w_k)u_{w_k}\right)\cdot u_{w_l}\right] = 0\in \mathbb{R}^{d_{h_L}\times d_l}$ . So we have,
414
+
415
+ $$
416
+ \mathbb {E} \left(\frac {\partial f}{\partial F _ {L}} \cdot o _ {L} \cdot u _ {w _ {l}}\right) = \frac {\partial f}{\partial F _ {L}} J _ {F _ {L}} (w _ {l}) = \nabla_ {l, x} f (w) \in \mathbb {R} ^ {d _ {l}}
417
+ $$
418
+
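+ The identity above can also be checked numerically in a toy setting (illustrative sketch; the vector `g` stands in for the true gradient):
+
+ ```python
+ # Monte Carlo check of Lemma 3.1: for u ~ N(0, I), E[(g . u) u] = g.
+ import torch
+
+ torch.manual_seed(0)
+ g = torch.randn(5)                        # stands in for the true gradient
+ U = torch.randn(200_000, 5)               # sampled tangent vectors
+ forward_grads = (U @ g).unsqueeze(1) * U  # (g . u) u for each sampled u
+ print((forward_grads.mean(dim=0) - g).abs().max())   # small; shrinks as samples grow
+ ```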
419
+ # C Proof of Lemma 5.3
420
+
421
+ # Proof of Mean.
422
+
423
+ $$
424
+ \begin{array}{l} \hat {h} _ {1} ^ {t - K + 1} = F _ {1} (h _ {0} ^ {t - K + 1}, w _ {1} ^ {t - 2 K + 2}) = F _ {1} (x _ {i (t - K + 1)}, w _ {1} ^ {t - 2 K + 2}) \\ \hat {o} _ {1} ^ {t - K + 1} = J _ {F _ {1}} (h _ {0} ^ {t - K + 1}, w _ {1} ^ {t - 2 K + 2}) [ o _ {0} ^ {\intercal}, u _ {w _ {1}} ^ {t - K + 1 \intercal} ] ^ {\intercal} = J _ {F _ {1}} (h _ {0} ^ {t - K + 1}) o _ {0} + J _ {F _ {1}} (w _ {1} ^ {t - 2 K + 2}) u _ {w _ {1}} ^ {t - K + 1} \\ = J _ {F _ {1}} \left(w _ {1} ^ {t - 2 K + 2}\right) u _ {w _ {1}} ^ {t - K + 1} \\ \end{array}
425
+ $$
426
+
427
+ $$
428
+ \hat {h} _ {2} ^ {t - K + 1} = F _ {2} (\hat {h} _ {1} ^ {t - K + 1}, w _ {2} ^ {t - 2 K + 2})
429
+ $$
430
+
431
+ $$
432
+ \begin{array}{l} \hat {o} _ {2} ^ {t - K + 1} = J _ {F _ {2}} (h _ {1} ^ {t - K + 1}, w _ {2} ^ {t - K + 1}) [ o _ {1} ^ {t - K + 1 \top}, u _ {w _ {2}} ^ {t - K + 1 \top} ] ^ {\intercal} = J _ {F _ {2}} (h _ {1} ^ {t - K + 1}) o _ {1} ^ {t - K + 1} + J _ {F _ {2}} (w _ {2} ^ {t - 2 K + 2}) u _ {w _ {2}} ^ {t - K + 1} \\ = J _ {F _ {2}} (h _ {1} ^ {t - K + 1}) J _ {F _ {1}} (w _ {1} ^ {t - 2 K + 2}) u _ {w _ {1}} ^ {t - K + 1} + J _ {F _ {2}} (w _ {2}) u _ {w _ {2} ^ {t - 2 K + 2}} ^ {t - K + 1} \\ \end{array}
433
+ $$
434
+
435
+ $\vdots$
436
+
437
+ $$
438
+ \hat {o} _ {L _ {0}} ^ {t - K + 1} = \sum_ {l = 1} ^ {L _ {0}} J _ {f _ {0}} \left(w _ {l} ^ {t - 2 K + 2}\right) u _ {w _ {l}} ^ {t - K + 1} = \sum_ {l \in \mathcal {G} (0)} J _ {f _ {0}} \left(w _ {l} ^ {t - 2 K + 2}\right) u _ {w _ {l}} ^ {t - K + 1}
439
+ $$
440
+
441
+ $\vdots$
442
+
443
+ $$
444
+ \begin{array}{l} \hat {o} _ {L _ {t}} ^ {t - K + 1} = \sum_ {l \in \mathcal {G} (0)} J _ {f _ {t}} (w _ {l} ^ {t - 2 K + 2}) u _ {w _ {l}} ^ {t - K + 1} + \sum_ {l \in \mathcal {G} (1)} J _ {f _ {t}} (w _ {l} ^ {t - 2 K + 3}) u _ {w _ {l}} ^ {t - K + 1} + \dots + \sum_ {l \in \mathcal {G} (t)} J _ {f _ {t}} (w _ {l} ^ {t - 2 K + t + 2}) u _ {w _ {l}} ^ {t - K + 1} \\ = \sum_ {k = 0} ^ {t} \sum_ {l \in \mathcal {G} (k)} J _ {f _ {t}} \left(w _ {l} ^ {t - 2 K + k + 2}\right) u _ {w _ {l}} ^ {t - K + 1} \\ \end{array}
445
+ $$
446
+
447
+ $\vdots$
448
+
449
+ $$
450
+ \begin{array}{l} \hat {o} _ {L _ {K - 1}} ^ {t - K + 1} = \sum_ {k = 0} ^ {K - 1} \sum_ {l \in \mathcal {G} (k)} J _ {f _ {K - 1}} \left(w _ {l} ^ {t - 2 K + k + 2}\right) = \sum_ {k = 0} ^ {K - 1} \sum_ {l \in \mathcal {G} (k)} J _ {f} \left(w _ {l} ^ {t - 2 K + k + 2}\right) u _ {w _ {l}} ^ {t - K + 1} \\ = \sum_ {k = 0} ^ {K - 1} \sum_ {l \in \mathcal {G} (k)} \nabla f _ {l, x _ {i (t - K + 1)}} \left(w ^ {t - 2 K + k + 2}\right) ^ {\intercal} u _ {w _ {l}} ^ {t - K + 1} \\ \end{array}
451
+ $$
452
+
453
+ Take expectation with respect to $u_{w_l}^{t - K + 1}$ , where $l\in \mathcal{G}(k)$ , we have
454
+
455
+ $$
456
+ \mathbb {E} _ {u _ {w _ {l}} ^ {t - K + 1}} \left(\hat {o} _ {L _ {K - 1}} ^ {t - K + 1} \cdot u _ {w _ {l}} ^ {t - K + 1}\right) = \nabla f _ {l, x _ {i (t - K + 1)}} \left(w ^ {t - 2 K + k + 2}\right)
457
+ $$
458
+
459
+ So we have,
460
+
461
+ $$
462
+ \mathbb {E} _ {u _ {w \mathcal {G} (k)} ^ {t - K + 1}} \big (\hat {o} _ {L _ {K - 1}} ^ {t - K + 1} \cdot u _ {w _ {l}} ^ {t - K + 1} \big) = \nabla f _ {\mathcal {G} (k), x _ {i (t - K + 1)}} \big (w ^ {t - 2 K + k + 2} \big)
463
+ $$
464
+
465
+ Lemma C.1 ([25], Theorem 3). Let $g_{u}(x) = \langle \nabla f(x), u \rangle u$ , where $u \in \mathbb{R}^d$ is a normally distributed Gaussian vector, then we have
466
+
467
+ $$
468
+ \mathbb {E} _ {u} \| g _ {u} (x) \| ^ {2} \leq (d + 4) \| \nabla f (x) \| ^ {2}
469
+ $$
470
+
471
+ Lemma C.2. Let $g_{u_1, u_2}(x) = \langle \nabla f(x), u_1 \rangle u_2$ , where $u_1 \in \mathbb{R}^{d_1}$ , $u_2 \in \mathbb{R}^{d_2}$ are two i.i.d. normally distributed Gaussian vectors, then we have
472
+
473
+ $$
474
+ \mathbb {E} _ {u _ {1}, u _ {2}} \left\| g _ {u _ {1}, u _ {2}} (x) \right\| ^ {2} \leq d _ {2} \left\| \nabla f (x) \right\| ^ {2}
475
+ $$
476
+
477
+ Proof.
478
+
479
+ $$
480
+ \begin{array}{l} \mathbb {E} _ {u _ {1}, u _ {2}} \| g _ {u _ {1}, u _ {2}} (x) \| ^ {2} = \mathbb {E} _ {u _ {1}, u _ {2}} \| \left\langle \nabla f (x), u _ {1} \right\rangle u _ {2} \| ^ {2} = \mathbb {E} _ {u _ {1}, u _ {2}} \left(\left\langle \nabla f (x), u _ {1} \right\rangle^ {2} \| u _ {2} \| ^ {2}\right) \\ = \mathbb {E} _ {u _ {1}} \left(\left\langle \nabla f (x), u _ {1} \right\rangle^ {2}\right) \cdot \mathbb {E} _ {u _ {2}} \left(\left\| u _ {2} \right\| ^ {2}\right) \\ \leq d _ {2} \mathbb {E} _ {u _ {1}} \left(\langle \nabla f (x), u _ {1} \rangle^ {2}\right) = d _ {2} \mathbb {E} _ {u _ {1}} \left(\sum_ {i = 1} ^ {d _ {1}} \nabla_ {i} f (x) u _ {1, i}\right) ^ {2} \\ \end{array}
481
+ $$
482
+
483
+ $$
484
+ \begin{array}{l} = d _ {2} \mathbb {E} _ {u _ {1}} \left(\sum_ {i = 1} ^ {d _ {1}} \nabla_ {i} ^ {2} f (x) u _ {1, i} ^ {2} + \sum_ {i \neq j} \nabla_ {i} f (x) \nabla_ {j} f (x) u _ {1, i} u _ {1, j}\right) \\ = d _ {2} \| \nabla f (x) \| ^ {2} \\ \end{array}
485
+ $$
486
+
487
+ where the first inequality is due to Lemma 1 in [25].
488
+
489
+ ![](images/5486064d48c26f0e74b5e4a53818a1fb63a5c1fce9725d7107eddb1fdd758653.jpg)
490
+
491
+ Proof of Variance.
492
+
493
+ $$
494
+ \begin{array}{l} \mathbb {E} _ {u _ {w} ^ {t ^ {\prime}}} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\| ^ {2} \\ = \mathbb {E} _ {u _ {w} ^ {t ^ {\prime}}} \sum_ {k = 0} ^ {K - 1} \left\| \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\| ^ {2} = \mathbb {E} _ {u _ {w} ^ {t ^ {\prime}}} \sum_ {k = 0} ^ {K - 1} \left\| \left(\sum_ {j = 0} ^ {K - 1} \left\langle \nabla f _ {\mathcal {G} (j), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + j + 1}\right), u _ {w _ {\mathcal {G} (j)}} ^ {t ^ {\prime}} \right\rangle\right) \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\| ^ {2} \\ = \mathbb {E} _ {u _ {w} ^ {t ^ {\prime}}} \sum_ {k = 0} ^ {K - 1} \left[ \left(\sum_ {j = 0} ^ {K - 1} \left\langle \nabla f _ {\mathcal {G} (j), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + j + 1}\right), u _ {w _ {\mathcal {G} (j)}} ^ {t ^ {\prime}} \right\rangle\right) ^ {2} \cdot \left\| u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\| ^ {2} \right] \\ = \mathbb {E} _ {u _ {w} ^ {t ^ {\prime}}} \sum_ {k = 0} ^ {K - 1} \left[ \left(\left\langle \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right), u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\rangle^ {2} + \sum_ {j \neq k} \left\langle \nabla f _ {\mathcal {G} (j), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + j + 1}\right), u _ {w _ {\mathcal {G} (j)}} ^ {t ^ {\prime}} \right\rangle^ {2} + \sum_ {m \neq n} \left\langle \nabla f _ {\mathcal {G} (m), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + m + 1}\right), u _ {w _ {\mathcal {G} (m)}} ^ {t ^ {\prime}} \right\rangle \cdot \left\langle \nabla f _ {\mathcal {G} (n), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + n + 1}\right), u _ {w _ {\mathcal {G} (n)}} ^ {t ^ {\prime}} \right\rangle\right) \cdot \left\| u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\| ^ {2} \right] \\ = \mathbb {E} _ {u _ {w} ^ {t ^ {\prime}}} \sum_ {k = 0} ^ {K - 1} \left[ \left(\left\langle \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right), u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\rangle^ {2} + \sum_ {j \neq k} \left\langle \nabla f _ {\mathcal {G} (j), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + j + 1}\right), u _ {w _ {\mathcal {G} (j)}} ^ {t ^ {\prime}} \right\rangle^ {2}\right) \cdot \left\| u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\| ^ {2} \right] \\ \leq \sum_ {k = 0} ^ {K - 1} \left[ (d _ {k} + 4) \left\| \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2} + \sum_ {j \neq k} d _ {k} \left\| \nabla f _ {\mathcal {G} (j), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + j + 1}\right) \right\| ^ {2} \right] \\ = \sum_ {k = 0} ^ {K - 1} \left[ d _ {k} \sum_ {j = 0} ^ {K - 1} \left\| \nabla f _ {\mathcal {G} (j), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + j + 1}\right) \right\| ^ {2} + 4 \left\| \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2} \right] \\ = \left(\sum_ {k = 0} ^ {K - 1} d _ {k}\right) \sum_ {j = 0} ^ {K - 1} \left\| \nabla f _ {\mathcal {G} (j), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + j + 1}\right) \right\| ^ {2} + 4 \sum_ {k = 0} ^ {K - 1} \left\| \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2} \\ = (d + 4) \sum_ {k = 0} ^ {K - 1} \left\| \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2} = (d + 4) \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2}, \\ \end{array}
495
+ $$
496
+
497
+ where the inequality is due to Lemmas C.1 and C.2.
498
+
499
+ ![](images/ed3c82223a68f08eb42dfb90f789ca0802c6623949784a09988d2603e59c566a.jpg)
500
+
501
+ # D Proof of Lemma 5.5
502
+
503
+ Proof. For convenience of analysis, we denote $t' = t - K + 1$; then the update rule of Algorithm 1 can be rewritten as
504
+
505
+ $$
506
+ w _ {\mathcal {G} (k)} ^ {t ^ {\prime} + 1} = w _ {\mathcal {G} (k)} ^ {t ^ {\prime}} - \gamma_ {t ^ {\prime}} \left(\hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}}\right)
507
+ $$
508
+
509
+ Taking expectation with respect to $u_{w_{\mathcal{G}(k)}}^{t'}$, we have
510
+
511
+ $$
512
+ \mathbb {E} _ {u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}}} \left[ w _ {\mathcal {G} (k)} ^ {t ^ {\prime} + 1} \right] = w _ {\mathcal {G} (k)} ^ {t ^ {\prime}} - \gamma_ {t ^ {\prime}} \mathbb {E} _ {u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}}} \left(\hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}}\right) = w _ {\mathcal {G} (k)} ^ {t ^ {\prime}} - \gamma_ {t ^ {\prime}} \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right)
513
+ $$
514
+
515
+ Define diagonal matrices $\mathbf{I}_0, \dots, \mathbf{I}_k, \dots, \mathbf{I}_{K-1} \in \mathbb{R}^{d \times d}$ such that the principal diagonal elements of $\mathbf{I}_k$ corresponding to the coordinates in $\mathcal{G}(k)$ are 1, and all other principal diagonal elements of $\mathbf{I}_k$ are 0. Then we have
516
+
517
+ $$
518
+ \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w} ^ {t ^ {\prime}} = \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}}
519
+ $$
520
+
521
+ $$
522
+ \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right) = \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right)
523
+ $$
524
+
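+ To make the role of these masking matrices concrete, the following toy sketch (illustrative only; it assumes the $d$ coordinates are partitioned into $K$ consecutive blocks) checks numerically that scaling the full tangent $u_w^{t'}$ by the scalar JVP $\hat{o}_{L_{K-1}}^{t'}$ coincides with the sum of the per-block masked updates:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ d, K = 12, 3
+ blocks = np.array_split(np.arange(d), K)    # hypothetical partition G(0), ..., G(K-1)
+
+ o_hat = 1.7                                 # scalar JVP value shared by all modules
+ u_w = rng.normal(size=d)                    # random tangent for the full parameter vector
+
+ masks = np.zeros((K, d))                    # I_k: ones on the coordinates owned by module k
+ for k, idx in enumerate(blocks):
+     masks[k, idx] = 1.0
+
+ full_update = o_hat * u_w
+ blockwise = sum(masks[k] * (o_hat * u_w) for k in range(K))
+ assert np.allclose(full_update, blockwise)  # the decomposition used in the proof
+ ```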
525
+ Since $f(\cdot)$ is $L$-smooth, the following inequality holds:
526
+
527
+ $$
528
+ f \left(w ^ {t ^ {\prime} + 1}\right) \leq f \left(w ^ {t ^ {\prime}}\right) + \left\langle \nabla f \left(w ^ {t ^ {\prime}}\right), w ^ {t ^ {\prime} + 1} - w ^ {t ^ {\prime}} \right\rangle + \frac {L}{2} \| w ^ {t ^ {\prime} + 1} - w ^ {t ^ {\prime}} \| ^ {2}
529
+ $$
530
+
531
+ From the update rule of Algorithm 1, we take expectation with respect to all random variables on both sides and obtain:
532
+
533
+ $$
534
+ \begin{array}{l} \mathbb {E} \left[ f \left(w ^ {t ^ {\prime} + 1}\right) \right] \leq f \left(w ^ {t ^ {\prime}}\right) - \gamma_ {t ^ {\prime}} \mathbb {E} \left[ \nabla f \left(w ^ {t ^ {\prime}}\right) ^ {\intercal} \left(\sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}}\right) \right] + \frac {L \gamma_ {t ^ {\prime}} ^ {2}}{2} \mathbb {E} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\| ^ {2} \\ = f \left(w ^ {t ^ {\prime}}\right) - \gamma_ {t ^ {\prime}} \sum_ {k = 0} ^ {K - 1} \nabla f \left(w ^ {t ^ {\prime}}\right) ^ {\intercal} \mathbf {I} _ {k} \left(\nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime}}\right) + \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime}}\right)\right) \\ + \frac {L \gamma_ {t ^ {\prime}} ^ {2}}{2} \mathbb {E} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} - \nabla f \left(w ^ {t ^ {\prime}}\right) + \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} \\ = f \left(w ^ {t ^ {\prime}}\right) - \gamma_ {t ^ {\prime}} \left\| \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} - \gamma_ {t ^ {\prime}} \sum_ {k = 0} ^ {K - 1} \nabla f \left(w ^ {t ^ {\prime}}\right) ^ {\intercal} \mathbf {I} _ {k} \left(\nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime}}\right)\right) \\ + \frac {L \gamma_ {t ^ {\prime}} ^ {2}}{2} \left\| \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} + \frac {L \gamma_ {t ^ {\prime}} ^ {2}}{2} \mathbb {E} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} - \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} \\ + L \gamma_ {t ^ {\prime}} ^ {2} \sum_ {k = 0} ^ {K - 1} \nabla f \left(w ^ {t ^ {\prime}}\right) ^ {\intercal} \mathbf {I} _ {k} \left(\nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime}}\right)\right) \\ = f \left(w ^ {t ^ {\prime}}\right) - \left(\gamma_ {t ^ {\prime}} - \frac {L \gamma_ {t ^ {\prime}} ^ {2}}{2}\right) \left\| \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} + \underbrace {\frac {L \gamma_ {t ^ {\prime}} ^ {2}}{2} \mathbb {E} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} - \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2}} _ {Q _ {1}} \\ \underbrace {- \left(\gamma_ {t ^ {\prime}} - L \gamma_ {t ^ {\prime}} ^ {2}\right) \sum_ {k = 0} ^ {K - 1} \nabla f (w ^ {t ^ {\prime}}) ^ {\intercal} \mathbf {I} _ {k} \left(\nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime}}\right)\right)} _ {Q _ {2}}, \\ \end{array}
535
+ $$
536
+
537
+ Using the facts that $\| x + y\| ^2\leq 2\| x\| ^2 +2\| y\| ^2$ and $x^{\intercal}y\leq \frac{1}{2}\| x\| ^2 +\frac{1}{2}\| y\| ^2$, we have
538
+
539
+ $$
540
+ \begin{array}{l} Q _ {1} = \frac {L \gamma_ {t ^ {\prime}} ^ {2}}{2} \mathbb {E} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} - \nabla f \left(w ^ {t ^ {\prime}}\right) - \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right. \\ + \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \Bigg \| ^ {2} \\ \leq L \gamma_ {t ^ {\prime}} ^ {2} \underbrace {\mathbb {E} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} - \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2}} _ {Q _ {3}} + \\ \end{array}
541
+ $$
542
+
543
+ $$
544
+ \begin{array}{l} + L \gamma_ {t ^ {\prime}} ^ {2} \underbrace {\left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f (w ^ {t ^ {\prime}}) \right\| ^ {2}} _ {Q _ {4}} \\ Q _ {2} = - \left(\gamma_ {t ^ {\prime}} - L \gamma_ {t ^ {\prime}} ^ {2}\right) \sum_ {k = 0} ^ {K - 1} \nabla f (w ^ {t ^ {\prime}}) ^ {\intercal} \mathbf {I} _ {k} \left(\nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime}}\right)\right) \\ \leq \frac {\gamma_ {t ^ {\prime}} - L \gamma_ {t ^ {\prime}} ^ {2}}{2} \left\| \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} + \frac {\gamma_ {t ^ {\prime}} - L \gamma_ {t ^ {\prime}} ^ {2}}{2} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla_ {\mathcal {G} (k)} f \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} \\ \end{array}
545
+ $$
546
+
547
+ Using $\mathbb{E}\| \xi -\mathbb{E}[\xi ]\| ^2\leq \mathbb{E}\| \xi \| ^2$ , we have
548
+
549
+ $$
550
+ \begin{array}{l} Q _ {3} = \mathbb {E} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} - \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2} \\ \leq \mathbb {E} \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \cdot \hat {o} _ {L _ {K - 1}} ^ {t ^ {\prime}} \cdot u _ {w _ {\mathcal {G} (k)}} ^ {t ^ {\prime}} \right\| ^ {2} \\ \leq (d + 4) \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2} \\ = (d + 4) \sum_ {k = 0} ^ {K - 1} \left\| \nabla f _ {\mathcal {G} (k), x _ {i (t ^ {\prime})}} \left(w ^ {t ^ {\prime} - K + k + 1}\right) \right\| ^ {2} \\ \leq (d + 4) K M, \\ \end{array}
551
+ $$
552
+
553
+ where the second inequality is due to Lemma 5.3. Then we bound $Q_{4}$ ,
554
+
555
+ $$
556
+ \begin{array}{l} Q _ {4} = \left\| \sum_ {k = 0} ^ {K - 1} \mathbf {I} _ {k} \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} = \sum_ {k = 0} ^ {K - 1} \left\| \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f _ {\mathcal {G} (k)} \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} \\ \leq \sum_ {k = 0} ^ {K - 1} \left\| \nabla f \left(w ^ {t ^ {\prime} - K + k + 1}\right) - \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} \\ \leq L ^ {2} \sum_ {k = 0} ^ {K - 1} \left\| w ^ {t ^ {\prime}} - w ^ {t ^ {\prime} - K + k + 1} \right\| ^ {2} \\ = L ^ {2} \sum_ {k = 0} ^ {K - 1} \left\| \sum_ {j = \max \{0, t ^ {\prime} - K + k + 1 \}} ^ {t ^ {\prime} - 1} \left(w ^ {j + 1} - w ^ {j}\right) \right\| ^ {2} = L ^ {2} \sum_ {k = 0} ^ {K - 1} \left\| \sum_ {j = \max \{0, t ^ {\prime} - K + k + 1 \}} ^ {t ^ {\prime} - 1} \gamma_ {j} \left(\hat {o} _ {L _ {K - 1}} ^ {j} \cdot u _ {w} ^ {j}\right) \right\| ^ {2} \\ \leq L ^ {2} \gamma_ {\max \{0, t ^ {\prime} - K + 1 \}} ^ {2} \sum_ {k = 0} ^ {K - 1} K \sum_ {j = \max \{0, t ^ {\prime} - K + k + 1 \}} ^ {t ^ {\prime} - 1} (d + 4) \left\| \sum_ {k ^ {\prime} = 0} ^ {K - 1} \nabla f _ {\mathcal {G} (k ^ {\prime}), x _ {i (j)}} \left(w ^ {j - K + k ^ {\prime} + 1}\right) \right\| ^ {2} \\ \leq (d + 4) K L \gamma_ {t ^ {\prime}} \frac {\gamma_ {\max \{0, t ^ {\prime} - K + 1 \}}}{\gamma_ {t ^ {\prime}}} \sum_ {k = 0} ^ {K - 1} \sum_ {j = \max \{0, t ^ {\prime} - K + k + 1 \}} ^ {t ^ {\prime} - 1} \left\| \sum_ {k ^ {\prime} = 0} ^ {K - 1} \nabla f _ {\mathcal {G} (k ^ {\prime}), x _ {i (j)}} \left(w ^ {j - K + k ^ {\prime} + 1}\right) \right\| ^ {2} \\ \leq (d + 4) L \gamma_ {t ^ {\prime}} \sigma K ^ {4} M, \\ \end{array}
557
+ $$
558
+
559
+ where the second inequality is from Assumption 5.1, the third inequality is due to Lemma 5.3, the fourth inequality follows from $L\gamma_{t'} < 1$ , the last inequality follows from the inequality of arithmetic and geometric means, Assumption 5.2 and $\sigma \coloneqq \max_{t'} \frac{\gamma_{\max\{0,t' - K + 1\}}}{\gamma_{t'}}$ . Integrating the upper bound together, we have
560
+
561
+ $$
562
+ \mathbb {E} \left[ f (w ^ {t ^ {\prime} + 1}) - f (w ^ {t ^ {\prime}}) \right] \leq - \frac {\gamma_ {t ^ {\prime}}}{2} \left\| \nabla f (w ^ {t ^ {\prime}}) \right\| ^ {2} + (d + 4) L \gamma_ {t ^ {\prime}} ^ {2} K M + \frac {\gamma_ {t ^ {\prime}} + L \gamma_ {t ^ {\prime}} ^ {2}}{2} (d + 4) L \gamma_ {t ^ {\prime}} \sigma K ^ {4} M
563
+ $$
564
+
565
+ $$
566
+ \begin{array}{l} \leq - \frac {\gamma_ {t ^ {\prime}}}{2} \left\| \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} + 2 (d + 4) L \gamma_ {t ^ {\prime}} ^ {2} \left(K M + \sigma K ^ {4} M\right) \\ = - \frac {\gamma_ {t ^ {\prime}}}{2} \left\| \nabla f \left(w ^ {t ^ {\prime}}\right) \right\| ^ {2} + 2 (d + 4) L \gamma_ {t ^ {\prime}} ^ {2} M _ {K}, \\ \end{array}
567
+ $$
568
+
569
+ where we let $M_K = KM + \sigma K^4 M$ .
570
+
571
+ ![](images/64e66e0f3ae9e8522b840d0fe957fdbaa08a58cda1c986be994d377c7a8d2fa2.jpg)
572
+
573
+ # E Proof of Theorem 5.6
574
+
575
+ Proof. Let $\gamma_{t'} = \gamma$ be a constant. Taking total expectation in Lemma 5.5, we have
576
+
577
+ $$
578
+ \mathbb {E} \left[ f \left(w ^ {t ^ {\prime} + 1}\right) \right] - \mathbb {E} \left[ f \left(w ^ {t ^ {\prime}}\right) \right] \leq - \frac {\gamma}{2} \mathbb {E} \| \nabla f \left(w ^ {t ^ {\prime}}\right) \| ^ {2} + 2 (d + 4) L \gamma^ {2} M _ {K},
579
+ $$
580
+
581
+ where $\sigma = 1$ and $M_{K} = KM + K^{4}M$. Summing the above inequality from $t' = 0$ to $T - 1$, we have
582
+
583
+ $$
584
+ \mathbb {E} [ f (w ^ {T}) ] - f (w ^ {0}) \leq - \frac {\gamma}{2} \sum_ {t ^ {\prime} = 0} ^ {T - 1} \mathbb {E} \| \nabla f (w ^ {t ^ {\prime}}) \| ^ {2} + 2 T (d + 4) \gamma^ {2} L M _ {K}
585
+ $$
586
+
587
+ Then we have
588
+
589
+ $$
590
+ \frac {1}{T} \sum_ {t ^ {\prime} = 0} ^ {T - 1} \mathbb {E} \| \nabla f (w ^ {t ^ {\prime}}) \| ^ {2} \leq \frac {2 (f (w ^ {0}) - f (w ^ {*}))}{\gamma T} + 4 (d + 4) L \gamma M _ {K}.
591
+ $$
592
+
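+ For reference (this step is not spelled out above), balancing the two terms on the right-hand side over the constant step size $\gamma$ yields an $O(1/\sqrt{T})$ rate:
+
+ $$
+ \gamma = \sqrt {\frac {f (w ^ {0}) - f (w ^ {*})}{2 (d + 4) L M _ {K} T}} \quad \Longrightarrow \quad \frac {1}{T} \sum_ {t ^ {\prime} = 0} ^ {T - 1} \mathbb {E} \| \nabla f (w ^ {t ^ {\prime}}) \| ^ {2} \leq 4 \sqrt {\frac {2 (d + 4) L M _ {K} \left(f (w ^ {0}) - f (w ^ {*})\right)}{T}}.
+ $$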
593
+ ![](images/f6e6d665a18899d9a87cfd32ea056b6de607ae74c360e9d24b3d4e577d7b97e9.jpg)
594
+
595
+ # F Proof of Theorem 5.7
596
+
597
+ Proof. Let $\{\gamma_{t'}\}$ be a diminishing sequence with $\gamma_{t'} = \frac{\gamma_0}{t' + 1}$, so that $\sigma \leq K$ and $M_K = KM + K^5 M$. Taking expectation in Lemma 5.5 and summing it from $t' = 0$ to $T - 1$, we have
598
+
599
+ $$
600
+ \mathbb {E} \left[ f \left(w ^ {T}\right) \right] - f \left(w ^ {0}\right) \leq - \frac {1}{2} \sum_ {t ^ {\prime} = 0} ^ {T - 1} \gamma_ {t ^ {\prime}} \mathbb {E} \| \nabla f \left(w ^ {t ^ {\prime}}\right) \| ^ {2} + \sum_ {t ^ {\prime} = 0} ^ {T - 1} 2 (d + 4) \gamma_ {t ^ {\prime}} ^ {2} L M _ {K}.
601
+ $$
602
+
603
+ Letting $\Gamma_T = \sum_{t' = 0}^{T - 1}\gamma_{t'}$, we have
604
+
605
+ $$
606
+ \frac {1}{\Gamma_ {T}} \sum_ {t ^ {\prime} = 0} ^ {T - 1} \gamma_ {t ^ {\prime}} \mathbb {E} \| \nabla f (w ^ {t ^ {\prime}}) \| ^ {2} \leq \frac {2 (f (w ^ {0}) - f (w ^ {*}))}{\Gamma_ {T}} + \frac {\sum_ {t ^ {\prime} = 0} ^ {T - 1} 4 (d + 4) \gamma_ {t ^ {\prime}} ^ {2} L M _ {K}}{\Gamma_ {T}}
607
+ $$
608
+
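+ To make the conclusion explicit (a standard observation, not part of the original statement): with $\gamma_{t'} = \frac{\gamma_0}{t'+1}$ we have $\Gamma_T \geq \gamma_0 \ln (T+1)$ while $\sum_{t'=0}^{T-1}\gamma_{t'}^2 \leq \frac{\pi^2 \gamma_0^2}{6}$, so the right-hand side is bounded by
+
+ $$
+ \frac {2 \left(f (w ^ {0}) - f (w ^ {*})\right) + \frac {2 \pi ^ {2}}{3} (d + 4) L M _ {K} \gamma_ {0} ^ {2}}{\gamma_ {0} \ln (T + 1)},
+ $$
+
+ which vanishes as $T \to \infty$.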
609
+ ![](images/bd786cc85986d89dbbdc9900afd219832a41a49ceb903afdc0fc57d3b16cb4cd.jpg)
610
+
611
+ # G Details of AsyncFGD
612
+
613
+ # G.1 Working with Adam
614
+
615
+ We provide an example of AsyncFGD working with Adam in Algorithm 2. Only minimal changes are made to Adam: the backpropagated gradient is replaced by the forward-gradient estimator.
616
+
617
+ # G.2 Execution Details
618
+
619
+ Details are presented in Figure 5. By pipelining over the time dimension, we only need to preserve an input buffer for a single timestamp while still achieving parallel computation.
620
+
621
+ Algorithm 2 AsyncFGD-Adam
622
+ Initialize: Stepsize sequence $\{\gamma_t\}_{t = K - 1}^{T - 1}$, weight $w^0 = \left[w_{\mathcal{G}(0)}^0,\dots ,w_{\mathcal{G}(K - 1)}^0\right]\in \mathbb{R}^d$, $m_{\mathcal{G}(k)} = 0$, $v_{\mathcal{G}(k)} = 0$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\eta = 10^{-8}$
623
+ 1: for $t = 0,1,\dots ,T - 1$ do
624
+ 2: for $k = 0,1,\dots ,K - 1$ in parallel do
625
+ 3: Read $\hat{h}_{L_{k-1}}^{t - k},\hat{o}_{L_{k-1}}^{t - k}$ from storage if $k\neq 0$
626
+ 4: Compute $\hat{h}_{L_k}^{t - k},\hat{o}_{L_k}^{t - k}$
627
+ 5: Send $\hat{h}_{L_k}^{t - k},\hat{o}_{L_k}^{t - k}$ to next worker's storage if $k\neq K - 1$
628
+ 6: end for
629
+ 7: Broadcast $\hat{o}_{L_{K-1}}^{t - K + 1}$
630
+ 8: for $k = 0,1,\dots ,K - 1$ in parallel do
631
+ 9: Compute $\Delta w_{\mathcal{G}(k)}^{t - K + 1} = \hat{o}_{L_{K-1}}^{t - K + 1}u_{w_{\mathcal{G}(k)}}^{t - K + 1}$
632
+ 10: Update $m_{\mathcal{G}(k)} = \beta_1 m_{\mathcal{G}(k)} + (1 - \beta_1)\Delta w_{\mathcal{G}(k)}^{t - K + 1}$
633
+ 11: Update $v_{\mathcal{G}(k)} = \beta_2 v_{\mathcal{G}(k)} + (1 - \beta_2)\left(\Delta w_{\mathcal{G}(k)}^{t - K + 1}\right)^2$
634
+ 12: Compute $\hat{m}_{\mathcal{G}(k)} = m_{\mathcal{G}(k)} / (1 - \beta_1^t)$
635
+ 13: Compute $\hat{v}_{\mathcal{G}(k)} = v_{\mathcal{G}(k)} / (1 - \beta_2^t)$
636
+ 14: Update $w_{\mathcal{G}(k)}^{t - K + 2} = w_{\mathcal{G}(k)}^{t - K + 1} - \gamma_{t - K + 1}\hat{m}_{\mathcal{G}(k)} / \left(\sqrt{\hat{v}_{\mathcal{G}(k)}} + \eta\right)$
637
+ 15: end for
638
+ 16: end for
639
+
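+ For concreteness, the per-block update in lines 9-14 of Algorithm 2 can be sketched as follows (a minimal NumPy sketch for illustration, not the actual implementation; it is simply the standard Adam rule with the backpropagated gradient replaced by the forward-gradient estimate $\hat{o}_{L_{K-1}}^{t-K+1} u_{w_{\mathcal{G}(k)}}^{t-K+1}$):
+
+ ```python
+ import numpy as np
+
+ def asyncfgd_adam_step(w, m, v, o_hat, u_w, lr, t,
+                        beta1=0.9, beta2=0.999, eta=1e-8):
+     """One AsyncFGD-Adam update for a single module's parameter block.
+
+     w, m, v : weights and Adam moment estimates for this block
+     o_hat   : scalar JVP of the loss broadcast by the last module
+     u_w     : random tangent this block used at the corresponding step
+     t       : 1-based update counter (for bias correction)
+     """
+     delta = o_hat * u_w                          # forward-gradient estimate (line 9)
+     m = beta1 * m + (1.0 - beta1) * delta        # first moment (line 10)
+     v = beta2 * v + (1.0 - beta2) * delta ** 2   # second moment (line 11)
+     m_hat = m / (1.0 - beta1 ** t)               # bias correction (lines 12-13)
+     v_hat = v / (1.0 - beta2 ** t)
+     w = w - lr * m_hat / (np.sqrt(v_hat) + eta)  # parameter update (line 14)
+     return w, m, v
+ ```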
640
+ ![](images/c806c0a52b89f3794102aa5e0473fcf58943833a8a5ef88bdb6a73f78a137623.jpg)
641
+ Figure 5: Details of the execution of AsyncFGD-RNN with 3 modules. In the skip stage, only the host accumulates the loss and its JVP value, and the other workers jump right into the next state.
642
+
643
+ # G.3 Extension: AsyncFGD-Recursive
644
+
645
+ In this section, we extend the potential of AsyncFGD by exploring the parallelization of sequential inputs in RNNs with reduced memory footprint, necessitating the preservation of input for only a single timestamp.
646
+
647
+ We adopt a one-to-many RNN network for ease of illustration and denote the equal length of each sequence as $n$ . We begin by refactoring the original loss for RNNs in terms of cumulative loss and new activation. Here, $s_l^t$ signifies the hidden state at timestamp $t$ on layer $l$ . At timestamp $t$ , each layer ingests $(s_{l-1}^t, s_l^{t-1})$ as input, generating $s_l^t = F_l(s_{l-1}^t, s_l^{t-1}, w_l)$ . We represent the stacked latent states passed from $t-1$ as $s^{t-1} = [s_1^{t-1}, s_2^{t-1}, \ldots, s_L^{t-1}]$ and the output as $y_t = F(s_0^t, s^{t-1}; w)$ , where $s_0^t$ symbolizes the input data $x_t$ . The cumulative loss from timestamp $1 \sim T$ is given by:
648
+
649
+ $$
650
+ \sum_ {t = 1} ^ {T} f \left(F \left(x _ {t}, s ^ {t - 1}; w\right), y _ {t}\right) \tag {15}
651
+ $$
652
+
653
+ We next refactor equation 2 for the $i_{th}$ sequential input in iteration $i, i \geq 0$ as:
654
+
655
+ $$
656
+ w _ {l} ^ {i + 1} = w _ {l} ^ {i} - \gamma_ {i} \frac {\partial \mathcal {L} _ {i}}{\partial w _ {l} ^ {i}}, \quad \forall l \in 1, 2, \dots , L \tag {16}
657
+ $$
658
+
659
+ where $\mathcal{L}_i \coloneqq \sum_{t = in + 1}^{(i + 1)n} f\left(F(x_t, s^{t - 1}; w), y_t\right)$ represents the loss for the $i_{th}$ sequence. We break the dependency between timestamps and iterations by employing dynamic staleness in AsyncFGD. Specifically, the computation in module $k \in \{1, 2, \dots, K\}$ at timestamp $t$ is defined as follows:
660
+
661
+ $$
662
+ \hat {s} _ {L _ {k}} ^ {t - k} = f _ {k} \left(\hat {s} _ {L _ {k - 1}} ^ {t - k}, \hat {s} _ {\mathcal {G} (k)} ^ {t - k - 1}; w _ {\mathcal {G} (k)} ^ {t - 2 K + k + 2}\right) \tag {17}
663
+ $$
664
+
665
+ $$
666
+ \hat {o} _ {L _ {k}} ^ {t - k} = J f _ {k} \left(\hat {s} _ {L _ {k - 1}} ^ {t - k}, \hat {s} _ {\mathcal {G} (k)} ^ {t - k - 1}; w _ {\mathcal {G} (k)} ^ {t - 2 K + k + 2}\right) u _ {\mathcal {G} (k)} ^ {t - k}, \tag {18}
667
+ $$
668
+
669
+ where $u_{\mathcal{G}(k)}^{t - k} = \left[\hat{o}_{L_{k-1}}^{t - k\,\mathsf{T}}, \hat{o}_{\mathcal{G}(k)}^{t - k - 1\,\mathsf{T}}, u_{w_{\mathcal{G}(k)}}^{t - k\,\mathsf{T}}\right]^{\mathsf{T}}$.
670
+
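+ A minimal illustrative sketch of how a single module could evaluate Eq. 17 and Eq. 18 in one forward sweep is given below (not the actual implementation; it assumes PyTorch 2.x with `torch.func` and uses a plain `RNNCell` as a stand-in for module $k$):
+
+ ```python
+ import torch
+ from torch.func import functional_call, jvp
+
+ cell = torch.nn.RNNCell(input_size=16, hidden_size=32)   # stand-in for module k
+ params = {name: p.detach() for name, p in cell.named_parameters()}
+
+ s_in = torch.randn(1, 16)    # state handed over by module k-1
+ s_own = torch.randn(1, 32)   # module k's own previous state
+
+ t_in = torch.randn_like(s_in)    # JVP carried with the incoming state
+ t_own = torch.randn_like(s_own)  # JVP carried with the recurrent state
+ t_params = {name: torch.randn_like(p) for name, p in params.items()}  # fresh weight tangents
+
+ def f(p, x, h):
+     return functional_call(cell, p, (x, h))
+
+ # forward value (Eq. 17) and its JVP along the stacked tangent (Eq. 18)
+ s_new, o_new = jvp(f, (params, s_in, s_own), (t_params, t_in, t_own))
+ ```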
671
+ Given that tasks belonging to the same iteration use identical parameters, we use $\delta(k,t,i) = t - ni - k - 1$, $t \in [in + 1, (i + 1)n]$, to quantify this difference for the $i_{th}$ sequence. If $\delta(k,t,i) \leq 0$, then module $k$ uses stale parameters from iteration $i - 1$ at timestamp $t$. AsyncFGD-RNN only updates the parameters upon the completion of the last computation in the sequence. Specifically, we use:
672
+
673
+ $$
674
+ w ^ {t - K + 2} = \left\{ \begin{array}{l l} w ^ {t - K + 1}, & \text {if } \frac {t - K + 1}{n} \notin \mathbb {N} ^ {*} \\ w ^ {t - K + 1} - \gamma_ {\lfloor \frac {t - K}{n} \rfloor} \mathbb {E} _ {u _ {w} ^ {t - K}} \left[ \left(\frac {\partial \mathcal {L} _ {\lfloor \frac {t - K}{n} \rfloor}}{\partial s _ {L _ {K}} ^ {t - K}} \hat {o} _ {L _ {K}} ^ {t - K}\right) u _ {w} ^ {t - K} \right], & \text {otherwise} \end{array} \right.
675
+ $$
676
+
677
+ Refer to Figure 5 for detailed execution instructions. By combining training and prediction, we can process data from different timestamps of the sequential input, maintain a buffer for just a single timestamp, and still achieve parallelization among the various workers.
678
+
679
+ # H Training Details
680
+
681
+ In this section, we explain some details in the training process.
682
+
683
+ # H.1 Random Seed
684
+
685
+ The seeds for all experiments are fixed to 0.
686
+
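+ For instance, a typical way to pin all seeds (an illustrative sketch, not the exact training script) is:
+
+ ```python
+ import random
+ import numpy as np
+ import torch
+
+ def set_seed(seed: int = 0) -> None:
+     random.seed(seed)
+     np.random.seed(seed)
+     torch.manual_seed(seed)
+     torch.cuda.manual_seed_all(seed)
+ ```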
687
+ # H.2 Description of Network Architecture
688
+
689
+ # H.2.1 Models Deployed in Section 6.2.
690
+
691
+ The network structures of ConvS, ConvL, FCS, and FCL are enumerated in Tables 8, 11, 9, and 10, respectively. In these designations, the subscripts $ReLU$ and $Tanh$ signify the particular activation function used in the model. Moreover, the larger counterparts (ConvL and FCL) incorporate batch normalization layers for enhanced performance.
692
+
693
+ # H.2.2 Models used in Section 6.3.
694
+
695
+ We delve into the specific models employed in Section 6.3. For MobileNet, we utilize the structure of MobileNet_V3_Small. The ResNet-18 structure is used when implementing ResNet. The model EfficientNet_B0 signifies the EfficientNet architecture. The MNASNet0_5 is used for MNasNet. Lastly, we adopt ShuffleNet_V2_X0_5 for ShuffleNet.
696
+
697
+ # H.3 Model Splitting
698
+
699
+ In this section, we provide details on how to split the models into consecutive modules and distribute them across different workers. We first describe how the models in Section 6.2 are split, and then how the models in Section 6.3 are split.
700
+
701
+ # H.3.1 Model Splitting in Section 6.2
702
+
703
+ In Section 6.2, all models are split with $K = 3$. Details of how to split ConvS, ConvL, FCS, and FCL are presented in Table 3, Table 4, Table 5, and Table 6, respectively.
704
+
705
+ Table 3: Details for model splitting for ConvS, definition for layers can be found in Table 8
706
+
707
+ <table><tr><td>K</td><td>Layers</td></tr><tr><td>1</td><td>conv1, act</td></tr><tr><td>2</td><td>pool1, fc1</td></tr><tr><td>3</td><td>act2, fc2</td></tr></table>
708
+
709
+ Table 4: Details for model splitting for ConvL, definition for layers can be found in Table 11
710
+
711
+ <table><tr><td>K</td><td>Layers</td></tr><tr><td>1</td><td>conv1, bn1, act1, pool1, conv2, bn2, act2, pool2</td></tr><tr><td>2</td><td>conv3, bn3, act3, pool3,conv4, bn4, act4, pool4</td></tr><tr><td>3</td><td>conv5, bn5, act5, pool5, fc1</td></tr></table>
712
+
713
+ Table 5: Details for model splitting for FCS, definition for layers can be found in Table 9
714
+
715
+ <table><tr><td>K</td><td>Layers</td></tr><tr><td>1</td><td>fc1,ac1</td></tr><tr><td>2</td><td>fc2,ac2</td></tr><tr><td>3</td><td>fc3,ac3</td></tr></table>
716
+
717
+ Table 6: Details for model splitting for FCL, definition for layers can be found in Table 10
718
+
719
+ <table><tr><td>K</td><td>Layers</td></tr><tr><td>1</td><td>fc1, bn1, ac1, fc2, bn2, ac2</td></tr><tr><td>2</td><td>fc3, bn3, ac3, fc4, bn4, ac4</td></tr><tr><td>3</td><td>fc5, bn5, ac5, fc6</td></tr></table>
720
+
721
+ # H.3.2 Model Splitting in Section 6.3
722
+
723
+ In Section 6.3, all models are divided into four parts ($K = 4$). Detailed descriptions of how each model is split are provided below. Note that 'head' and 'tail' refer to the layers before and after the main blocks of each architecture, respectively, which are assigned to the first and the last worker:
724
+
725
+ - ResNet-18: The core of ResNet-18 consists of 4 Residual Blocks, each distributed to one of the four workers (a rough sketch of this split is given after this list).
726
+ - EfficientNet: The core of EfficientNet consists of 7 Mobile Inverted Bottlenecks (MBConv). The first worker handles MBConv 1, the second handles MBConv 2 to 3, the third manages MBConv 4 to 6, and the last one manages MBConv 7.
727
+ - MobileNet: The core of MobileNetV3-small includes 13 layers of bottlenecks. The first worker handles layers 1 to 3, the second manages layers 4 to 7, the third manages layers 8 to 11, and the last worker handles layers 12 to 13.
728
+ - MnasNet: The core of MnasNet consists of 6 blocks of inverted residuals. Blocks 1 to 2, 3 to 5, and 6 are assigned to workers 2, 3, and 4 respectively, while the first worker only handles the head.
729
+ - ShuffleNet: The core of ShuffleNet consists of 3 stages, each assigned to workers 2, 3, and 4, respectively.
730
+
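+ As a rough illustration of such a split (using torchvision's ResNet-18; the exact layer assignment used in the experiments may differ in minor details), the four modules could be materialized as follows and then placed on separate workers:
+
+ ```python
+ import torch.nn as nn
+ from torchvision.models import resnet18
+
+ net = resnet18(num_classes=10)
+
+ modules = [
+     nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool, net.layer1),  # head + stage 1
+     net.layer2,                                                            # stage 2
+     net.layer3,                                                            # stage 3
+     nn.Sequential(net.layer4, net.avgpool, nn.Flatten(), net.fc),          # stage 4 + tail
+ ]
+
+ # each entry of `modules` would then live on its own worker/device, e.g.
+ # for k, m in enumerate(modules):
+ #     m.to(f"cuda:{k}")
+ ```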
731
+ # I Additional Experimental Results
732
+
733
+ # I.1 Ablation study in $\alpha$
734
+
735
+ We have incremented the value of $\alpha_{bias}$ gradually, with a step size of 0.0075, over 20 epochs. This process can be generalized using the following equation:
736
+
737
+ $$
738
+ \alpha_ {bias} = \left\{ \begin{array}{l l} t \times \frac {\alpha_ {bias} ^ {*}}{20}, & t \leq 20 \\ \alpha_ {bias} ^ {*}, & \text {otherwise} \end{array} \right.
739
+ $$
740
+
741
+ Here, $\alpha_{bias}^{*}$ controls both the rate of increase and the maximum attainable value of $\alpha_{bias}$. The ablation study with respect to $\alpha_{bias}^{*}$ is presented in Table 7.
742
+
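+ For concreteness, this warm-up schedule corresponds to the following small helper (an illustrative sketch; the names are hypothetical):
+
+ ```python
+ def alpha_bias(epoch: int, alpha_star: float, warmup_epochs: int = 20) -> float:
+     """Linear warm-up of alpha_bias to alpha_star over the first `warmup_epochs`
+     epochs, after which it is held constant."""
+     return min(epoch, warmup_epochs) / warmup_epochs * alpha_star
+ ```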
743
+ We observe that reducing $\alpha_{bias}^{*}$ to 0, which corresponds to only updating the classifier, still results in performance gains compared to updating the full model. This improvement can be attributed to reduced variance. As $\alpha_{bias}^{*}$ increases, we generally see better results, since the norm of the gradient approximation increases. However, when $\alpha_{bias}^{*}$ exceeds 0.25, we sometimes observe a performance drop due to the corresponding increase in variance.
744
+
745
+ Table 7: Ablation study on different values of ${\alpha }_{bias}^{ * }$
746
+
747
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Model</td><td colspan="8">α*bias</td></tr><tr><td>0.00</td><td>0.03</td><td>0.06</td><td>0.09</td><td>0.12</td><td>0.15</td><td>0.20</td><td>0.25</td></tr><tr><td rowspan="5">CIFAR10</td><td>Res-18</td><td>0.838</td><td>0.838</td><td>0.845</td><td>0.855</td><td>0.865</td><td>0.878</td><td>0.872</td><td>0.885</td></tr><tr><td>Mobile</td><td>0.898</td><td>0.912</td><td>0.911</td><td>0.910</td><td>0.913</td><td>0.911</td><td>0.914</td><td>0.909</td></tr><tr><td>Efficient</td><td>0.892</td><td>0.900</td><td>0.902</td><td>0.903</td><td>0.902</td><td>0.902</td><td>0.887</td><td>0.895</td></tr><tr><td>Shuffle</td><td>0.788</td><td>0.805</td><td>0.808</td><td>0.812</td><td>0.820</td><td>0.820</td><td>0.822</td><td>0.825</td></tr><tr><td>Mnas</td><td>0.782</td><td>0.790</td><td>0.788</td><td>0.789</td><td>0.788</td><td>0.789</td><td>0.777</td><td>0.782</td></tr><tr><td rowspan="5">FMNIST</td><td>Res-18</td><td>0.866</td><td>0.869</td><td>0.871</td><td>0.873</td><td>0.875</td><td>0.880</td><td>0.882</td><td>0.884</td></tr><tr><td>Mobile</td><td>0.890</td><td>0.908</td><td>0.906</td><td>0.906</td><td>0.906</td><td>0.906</td><td>0.899</td><td>0.901</td></tr><tr><td>Efficient</td><td>0.889</td><td>0.904</td><td>0.906</td><td>0.902</td><td>0.905</td><td>0.904</td><td>0.908</td><td>0.897</td></tr><tr><td>Shuffle</td><td>0.849</td><td>0.854</td><td>0.857</td><td>0.860</td><td>0.864</td><td>0.870</td><td>0.870</td><td>0.877</td></tr><tr><td>Mnas</td><td>0.854</td><td>0.868</td><td>0.870</td><td>0.870</td><td>0.870</td><td>0.870</td><td>0.864</td><td>0.864</td></tr></table>
748
+
749
+ # I.2 Acceleration across Various Platforms and Architectures
750
+
751
+ In Section 6.5, we examined the acceleration of AsyncFGD in comparison to vanilla FGD on ResNet-18, using two hardware platforms: 1) NVIDIA AGX Orin, an embedded device, and 2) a cluster of four NVIDIA 1080 Ti GPUs. These platforms were chosen to reflect real-world edge device scenarios and to simulate situations of ample computational power, such as in the case of stacked chips.
752
+
753
+ In this section, we expand our scope of investigation by incorporating two additional devices: 1) NVIDIA A100, and 2) Intel(R) Xeon(R) CPU E5-2678 v3 @2.50GHz. These additions allow us to further examine acceleration under various conditions. We also provide supplementary results on acceleration with respect to different batch sizes to reflect variable input streams. Moreover, to emulate streamlined input, the mini-batch size of the synchronized pipeline is set to 1.
754
+
755
+ The performance of ResNet-18 with different batch sizes on the four NVIDIA 1080Ti GPUs, A100, and AGX Orin platforms is illustrated in Figures 6, 7, and 8, respectively. Results for MobileNetV3-small on AGX Orin are presented in Figure 10. A notable observation is that AsyncFGD performance appears largely insensitive to batch size. In contrast, other algorithms typically exhibit poorer performance with smaller batch sizes. Particularly, when the batch size is reduced to 1, these algorithms offer negligible performance improvements over vanilla FGD. Furthermore, the overall acceleration on a single device is constrained by computational power. For instance, while AsyncFGD achieves a speedup of $2.84 \times$ on a four GPU cluster, it only delivers a $2.11 \times$ speedup on a single AGX Orin. Communication also imposes a limit on the overall acceleration, as demonstrated by the superior performance on the A100 in comparison to the four-GPU cluster. This is attributable to the elimination of communication overhead on a single device, except for the sending and receiving operations of CUDA kernels.
758
+
759
+ Results for MobileNetV3-small with different batch sizes on CPU are depicted in Figure 9. Due to the inherently sequential execution pattern of CPUs, the acceleration is constrained, resulting in only modest speedup and advantage over other algorithms.
760
+
761
+ ![](images/2e7230abd8a877ccc76950520ef5ed76bc45ccf1cd0075c1f230d321d1ad1a7a.jpg)
762
+ (a) Batch size $= 1$
763
+
764
+ ![](images/cba49fe6cd16241b96507d57a5c6003f6da6184307b46905b2bbe3bf161526c3.jpg)
765
+ (b) Batch size $= 2$
766
+
767
+ ![](images/cebf9a30cc4fa342ddc9019d3f6a43e6cf2fcb22e4f639e50e531848c8c8a436.jpg)
768
+ (c) Batch size $= 3$
769
+
770
+ ![](images/03c5b1a50f8ca19910624719aa156be70b0851178780a7c3f93c27780722fb4a.jpg)
771
+ (d) Batch size $= 4$
772
+ Figure 6: Acceleration with different batch size on ResNet-18, cluster with 4 Nvidia 1080 Ti
773
+
774
+ # I.3 Memory Profiling on Other Basic Units of Convolutional Neural Networks
775
+
776
+ This section outlines memory profiling for basic units within a Convolutional Neural Network (CNN). Commonly, a CNN layer is coupled with a batch normalization layer and an activation layer using ReLU, so we've combined these elements for our memory testing. We examine the memory consumption against the number of layers and present the results in Figure 11(a). For further examination, we also assess the memory consumption against the number of output channels and batch size, with results shown in Figures 11(b) and 11(c), respectively.
777
+
778
+ Our findings reveal that implementing forward gradients can significantly reduce memory consumption. Generally, the majority of memory usage in CNNs stems from intermediate results, since CNNs often operate in a 'broadcast then product' pattern (specifically, this is referred to as 'img2col'). Consequently, the additional memory required by the random tangent in AsyncFGD is minimal. As such, the memory consumption appears to be invariant to the number of layers, mainly because in the forward pass we discard almost all the intermediate variables.
779
+
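+ A rough sketch of how such a comparison can be set up is given below (illustrative only, not the profiling code behind Figure 11; it assumes a CUDA device and PyTorch 2.x with `torch.func`):
+
+ ```python
+ import torch
+ from torch.func import functional_call, jvp
+
+ def conv_bn_relu_stack(n_layers=18, channels=64):
+     layers, in_ch = [], 3
+     for _ in range(n_layers):
+         layers += [torch.nn.Conv2d(in_ch, channels, 3, padding=1),
+                    torch.nn.BatchNorm2d(channels, track_running_stats=False),
+                    torch.nn.ReLU()]
+         in_ch = channels
+     return torch.nn.Sequential(*layers).cuda()
+
+ def peak_mib(fn):
+     torch.cuda.reset_peak_memory_stats()
+     fn()
+     return torch.cuda.max_memory_allocated() / 2 ** 20
+
+ net = conv_bn_relu_stack()
+ x = torch.randn(64, 3, 32, 32, device="cuda")
+ frozen = {k: v.detach() for k, v in net.named_parameters()}
+
+ def backprop():                  # stores intermediate activations for the backward pass
+     net(x).sum().backward()
+
+ def forward_grad():              # only carries tangents forward, no stored graph
+     tangents = {k: torch.randn_like(v) for k, v in frozen.items()}
+     jvp(lambda p: functional_call(net, p, (x,)), (frozen,), (tangents,))
+
+ print("backprop peak MiB     :", peak_mib(backprop))
+ print("forward-grad peak MiB :", peak_mib(forward_grad))
+ ```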
780
+ ![](images/f4f29e212d09e61a837dc92ec26f7bb42070b51f222de6b72c14b16eb1e08818.jpg)
781
+ (a) Batch size $= 1$
782
+
783
+ ![](images/08e8cd591274addd37701e9bb40c9dc0d8b7e5e4645e815eae0156c39866cde0.jpg)
784
+ (b) Batch size $= 2$
785
+
786
+ ![](images/bc50c4d98b8096333aeb7677a4072c471b053bc0c5634fad6588483c887d8c1f.jpg)
787
+ (c) Batch size $= 3$
788
+
789
+ ![](images/c422c5f0609ff3e3288f17c30316a1ed42575a9b0493a70d129ddbc52e7d835e.jpg)
790
+ (d) Batch size $= 4$
791
+ Figure 7: Acceleration with different batch size on ResNet-18, A100
792
+
793
+ ![](images/532fc997f71e18467a4712bec29785884cfcc4766baf87d6b8ee04ed3f8d060a.jpg)
794
+ (a) Batch size $= 1$
795
+
796
+ ![](images/19742864188519291acec5c4c153595f4801f18671ff3c29fc1bdc9dc5bc8e25.jpg)
797
+ (b) Batch size $= 2$
798
+
799
+ ![](images/43ee92d30266e56096e088c1bf9b14512644c9ded18403f11ba5d434280f18ba.jpg)
800
+ (c) Batch size $= 3$
801
+ Figure 8: Acceleration with different batch size on ResNet-18, AGX Orin
802
+
803
+ ![](images/d8883cfbeca96d69afb9d00708b530c5feab7e065cb49b65a946ed01ed6f4ff1.jpg)
804
+ (d) Batch size $= 4$
805
+
806
+ ![](images/770a18ef57cd0835491cde215ea5ded4fa0ccae3b0996479dc718c8beb94309f.jpg)
807
+ (a) Batch size $= 1$
808
+
809
+ ![](images/d351ccf11121ecaa175b02739ad7aa799e0992d15e50837d6dfaa6f55332a5c1.jpg)
810
+ (b) Batch size $= 2$
811
+
812
+ ![](images/a5a405803a2137102cce029b5282362820fea716e90c87921cbe9d5d9a1579e4.jpg)
813
+ (c) Batch size $= 3$
814
+
815
+ ![](images/221e39985e6c42641ffaf9ee8b5ac56b57718b07b068417a0bde101a2b4fae33.jpg)
816
+ (d) Batch size $= 4$
817
+ Figure 9: Acceleration with different batch size on MobileNetV3-small, Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz
818
+
819
+ ![](images/64ab7eee2910b88e61c2c28ac4803f2beecaaf77d29b8ca70cd71dcef892e9c9.jpg)
820
+ (a) Batch size $= 1$
821
+
822
+ ![](images/b0257d93603acc9c17439908a6c0d6dc11c4aa0ffdf751d010cca6b46921aa77.jpg)
823
+ (b) Batch size $= 2$
824
+
825
+ ![](images/77ddc4f91a7d4ddf563c5debffe91760f8805c131397b61955bd2d807d43ddab.jpg)
826
+ (c) Batch size $= 3$
827
+ Figure 10: Acceleration with different batch size on MobileNetV3-small, AGX Orin
828
+
829
+ ![](images/f5df3cda97477060ca24e5cdfbb12cba1ffcaa56b0dab90c6d700f1a397066f0.jpg)
830
+ (d) Batch size $= 4$
831
+
832
+ Table 8: Network architecture for $ConvS_{ReLU}$ and $ConvS_{Tanh}$ . $ConvS_{ReLU}$ denotes using ReLU for the activation functions and $ConvS_{Tanh}$ denotes using Tanh as activation functions
833
+
834
+ <table><tr><td>Layer</td><td>Type</td><td>Params</td></tr><tr><td>conv1</td><td>Conv2d</td><td>out_channels=32, kernel_size=5, stride=1, padding=2</td></tr><tr><td>act1</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>pool1</td><td>MaxPool2d</td><td>kernel_size=2, stride=2, padding=0, dilation=1</td></tr><tr><td>fc1</td><td>Linear</td><td>out_features=1000</td></tr><tr><td>act2</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>fc2</td><td>Linear</td><td>out_features=10</td></tr></table>
835
+
836
+ Table 9: Network architecture for $FCS_{ReLU}$ and $FCS_{Tanh}$ . $FCS_{ReLU}$ denotes using ReLU for the activation functions and $FCS_{Tanh}$ denotes using Tanh as activation functions
837
+
838
+ <table><tr><td>Layer</td><td>Type</td><td>Params</td></tr><tr><td>flatten</td><td>Flatten</td><td>N/A</td></tr><tr><td>fc1</td><td>Linear</td><td>out_features=1024</td></tr><tr><td>ac1</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>fc2</td><td>Linear</td><td>out_features=512</td></tr><tr><td>ac2</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>fc3</td><td>Linear</td><td>out_features=256</td></tr></table>
839
+
840
+ Table 10: Network architecture for $FCL_{ReLU}$ and $FCL_{Tanh}$ . $FCL_{ReLU}$ denotes using ReLU for the activation functions and $FCL_{Tanh}$ denotes using Tanh as activation functions
841
+
842
+ <table><tr><td>Layer</td><td>Type</td><td>Params</td></tr><tr><td>flatten</td><td>Flatten</td><td>N/A</td></tr><tr><td>fc1</td><td>Linear</td><td>out_features=1024</td></tr><tr><td>bn1</td><td>BatchNorm1d</td><td>N/A</td></tr><tr><td>ac1</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>fc2</td><td>Linear</td><td>out_features=1024</td></tr><tr><td>bn2</td><td>BatchNorm1d</td><td>N/A</td></tr><tr><td>ac2</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>fc3</td><td>Linear</td><td>out_features=1024</td></tr><tr><td>bn3</td><td>BatchNorm1d</td><td>N/A</td></tr><tr><td>ac3</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>fc4</td><td>Linear</td><td>out_features=1024</td></tr><tr><td>bn4</td><td>BatchNorm1d</td><td>N/A</td></tr><tr><td>ac4</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>fc5</td><td>Linear</td><td>out_features=512</td></tr><tr><td>bn5</td><td>BatchNorm1d</td><td>N/A</td></tr><tr><td>ac5</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>fc6</td><td>Linear</td><td>out_features=10</td></tr></table>
843
+
844
+ ![](images/15d9d8d6134cdcbaa6f05ad18a92ee32898c8abd30dbccbd2498edc3f1d8a571.jpg)
845
+ (a) Different number of layers
846
+
847
+ ![](images/3449610826b197a65f790f2e2d0be050ad4acab32dd5d9c0ad13ca593274fe0d.jpg)
848
+ (b) Different channels
849
+ Figure 11: Memory consumption of basic units in convolutional networks; batch size=64, channels=3, and number of layers=18 unless the corresponding quantity is varied on the x-axis
850
+
851
+ ![](images/1213bafaa59262af5e948b54fe38fd6f929962bff6fed897f2b300de5a38a110.jpg)
852
+ (c) Different batch size
853
+
854
+ Table 11: Network architecture for $ConvL_{ReLU}$ and $ConvL_{Tanh}$ . $ConvL_{ReLU}$ denotes using ReLU for the activation functions and $ConvL_{Tanh}$ denotes using Tanh as activation functions
855
+
856
+ <table><tr><td>Layer</td><td>Type</td><td>Params</td></tr><tr><td>conv1</td><td>Conv2d</td><td>out_channels=32, kernel_size=3, stride=1, padding=2</td></tr><tr><td>bn1</td><td>BatchNorm2d</td><td>N/A</td></tr><tr><td>act1</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>pool1</td><td>MaxPool2d</td><td>kernel_size=2, stride=2, padding=0, dilation=1</td></tr><tr><td>conv2</td><td>Conv2d</td><td>out_channels=64, kernel_size=3, stride=1, padding=2</td></tr><tr><td>bn2</td><td>BatchNorm2d</td><td>N/A</td></tr><tr><td>act2</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>pool2</td><td>MaxPool2d</td><td>kernel_size=2, stride=2, padding=0, dilation=1</td></tr><tr><td>conv3</td><td>Conv2d</td><td>out_channels=128, kernel_size=3, stride=1, padding=2</td></tr><tr><td>bn3</td><td>BatchNorm2d</td><td>N/A</td></tr><tr><td>act3</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>pool3</td><td>MaxPool2d</td><td>kernel_size=2, stride=2, padding=0, dilation=1</td></tr><tr><td>conv4</td><td>Conv2d</td><td>out_channels=256, kernel_size=3, stride=1, padding=2</td></tr><tr><td>bn4</td><td>BatchNorm2d</td><td>N/A</td></tr><tr><td>act4</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>pool4</td><td>MaxPool2d</td><td>kernel_size=2, stride=2, padding=0, dilation=1</td></tr><tr><td>conv5</td><td>Conv2d</td><td>out_channels=512, kernel_size=3, stride=1, padding=2</td></tr><tr><td>bn5</td><td>BatchNorm2d</td><td>N/A</td></tr><tr><td>act5</td><td>ReLU/Tanh</td><td>N/A</td></tr><tr><td>pool5</td><td>MaxPool2d</td><td>kernel_size=2, stride=2, padding=0, dilation=1</td></tr><tr><td>flatten</td><td>Flatten</td><td>N/A</td></tr><tr><td>fc1</td><td>Linear</td><td>out_features=10</td></tr></table>
acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c7860fdd1ace84cd53865acae57f52cd7d378071df5ecce68978388b2146ae74
3
+ size 1856328
acceleratedondeviceforwardneuralnetworktrainingwithmodulewisedescendingasynchronism/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cfb833efa762acf473ed8e471021335a2e02c1ffd3a99f0309761a78f6f85ade
3
+ size 904825
acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/2f194880-bd7f-48af-a9ec-650f3b491eec_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:179c5c5fa0bab360227d48ec47508c5cb8b30d89e291fad306d5b59ee959c496
3
+ size 305932
acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/2f194880-bd7f-48af-a9ec-650f3b491eec_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5cc61a8e04983a90380d3bb2d241e19d0adc75ef3e492074ca1cfcd4565f5fb8
3
+ size 349630
acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/2f194880-bd7f-48af-a9ec-650f3b491eec_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cd3a1fb9eac59a7bcfe665a05033e9848e5c974957b9bf4f2a160d85171664ca
3
+ size 1438759
acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/full.md ADDED
The diff for this file is too large to render. See raw diff
 
acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:49a2c21c64aed58ce068e86cae733366c5445e7ba3dce2f566b9c1b17d02ec12
3
+ size 1978302
acceleratedquasinewtonproximalextragradientfasterrateforsmoothconvexoptimization/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dac784388f46a04ab8e93a83766ce12360bcca0e3dcb8bf6cbeaf03c23d6b951
3
+ size 2002221
acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/86b1e1e5-defb-41f3-ba94-b000113a0604_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:afb0f011bf03e3a517cdc6ddb21d21c0365ebdaf94de854a80a49720fb47c5d5
3
+ size 129960
acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/86b1e1e5-defb-41f3-ba94-b000113a0604_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:62544b78d6397ba3f263a5b6409c39b45a3bb286304b66b019c6f666f31eb903
3
+ size 156646
acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/86b1e1e5-defb-41f3-ba94-b000113a0604_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ee90fba65b02f6479a8eac759df43ee1ba73fe52396fc2a2535cf442162af907
3
+ size 3525562
acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/full.md ADDED
@@ -0,0 +1,607 @@
 
 
 
 
1
+ # Accelerated Training via Incrementally Growing Neural Networks using Variance Transfer and Learning Rate Adaptation
4
+
5
+ # Xin Yuan
6
+
7
+ University of Chicago yuanx@uchicago.edu
8
+
9
+ # Pedro Savarese
10
+
11
+ TTI-Chicago savarese@ttic.edu
12
+
13
+ # Michael Maire
14
+
15
+ University of Chicago mmaire@uchicago.edu
16
+
17
+ # Abstract
18
+
19
+ We develop an approach to efficiently grow neural networks, within which parameterization and optimization strategies are designed by considering their effects on the training dynamics. Unlike existing growing methods, which follow simple replication heuristics or utilize auxiliary gradient-based local optimization, we craft a parameterization scheme which dynamically stabilizes weight, activation, and gradient scaling as the architecture evolves, and maintains the inference functionality of the network. To address the optimization difficulty resulting from imbalanced training effort distributed to subnetworks fading in at different growth phases, we propose a learning rate adaption mechanism that rebalances the gradient contribution of these separate subcomponents. Experiments show that our method achieves comparable or better accuracy than training large fixed-size models, while saving a substantial portion of the original training computation budget. We demonstrate that these gains translate into real wall-clock training speedups.
20
+
21
+ # 1 Introduction
22
+
23
+ Modern neural network design typically follows a "larger is better" rule of thumb, with models consisting of millions of parameters achieving impressive generalization performance across many tasks, including image classification [22, 32, 30, 46], object detection [13, 26, 11], semantic segmentation [27, 3, 24] and machine translation [34, 7]. Within a class of network architecture, deeper or wider variants of a base model typically yield further improvements to accuracy. Residual networks (ResNets) [15] and wide residual networks [45] illustrate this trend in convolutional neural network (CNN) architectures. Dramatically scaling up network size into the billions of parameter regime has recently revolutionized transformer-based language modeling [34, 7, 1].
24
+
25
+ The size of these models imposes prohibitive training costs and motivates techniques that offer cheaper alternatives to select and deploy networks. For example, hyperparameter tuning is notoriously expensive as it commonly relies on training the network multiple times, and recent techniques aim to circumvent this by making hyperparameters transferable between models of different sizes, allowing them to be tuned on a small network prior to training an original large model once [41].
26
+
27
+ Our approach incorporates these ideas, but extends the scope of transferability to include the parameters of the model itself. Rather than view training small and large models as separate events, we grow a small model into a large one through many intermediate steps, each of which introduces additional parameters to the network. Our contribution is to do so in a manner that preserves the function computed by the model at each growth step (functional continuity) and offers stable training dynamics, while also saving compute by leveraging intermediate solutions. More specifically, we use partially trained subnetworks as scaffolding that accelerates training of newly added parameters, yielding greater overall efficiency than training a large static model from scratch.
28
+
29
+ ![](images/da8733d49a306ea74ea41ab9ab74facd0885ad48c52102709c98245b8f48675f.jpg)
30
+ (a) Existing Methods: Splitting Init, Global LR
31
+
32
+ ![](images/da5f193ae6d0788ef79c4996dcadd01bb9387b939a84998885d0da44c99c6eab.jpg)
33
+ Figure 1: Dynamic network growth strategies. Different from (a) which rely on either splitting [4, 25, 39] or adding neurons with auxiliary local optimization [38, 10], our initialization (b) of new neurons is random but function-preserving. Additionally, our separate learning rate scheduler governs weight updating to address the discrepancy in total accumulated training between different growth stages.
34
+
35
+ ![](images/43ac5aa4fb997a10249d8fdce87e1aca38823953181237593db5271074c1f73d.jpg)
36
+ (b) Ours: Function-Preserving Init, Stagewise LR
37
+
38
+ ![](images/1540d5ce871df5047d32205ee80da3b206c1f58c64c8235724b75ec0f86e9477.jpg)
39
+
40
+ Competing recent efforts to grow deep models from simple architectures [4, 23, 5, 25, 39, 37, 38, 44, 10] draw inspiration from other sources, such as the progressive development processes of biological brains. In particular, Net2Net [4] grows the network by randomly splitting learned neurons from previous phases. This replication scheme, shown in Figure 1(a) is a common paradigm for most existing methods. Gradient-based methods [38, 39] determine which neurons to split and how to split them by solving a combinatorial optimization problem with auxiliary variables.
41
+
42
+ At each growth step, naive random initialization of new weights destroys network functionality and may overwhelm any training progress. Weight rescaling with a static constant from a previous step is not guaranteed to be maintained as the network architecture evolves. Gradient-based methods outperform these simple heuristics but require additional training effort in their parameterization schemes. Furthermore, all existing methods use a global LR scheduler to govern weight updates, ignoring the discrepancy among subnetworks introduced in different growth phases. The gradient itself and other parameterization choices may influence the correct design for scaling weight updates.
43
+
44
+ We develop a growing framework around the principles of enforcing transferability of parameter settings from smaller to larger models (extending [41]), offering functional continuity, smoothing optimization dynamics, and rebalancing learning rates between older and newer subnetworks. Figure 1(b) illustrates key differences with prior work. Our core contributions are:
45
+
46
+ - Parameterization using Variance Transfer: We propose a parameterization scheme accounting for the variance transition among networks of smaller and larger width in a single training process. Initialization of new weights is gradient-free and requires neither additional memory nor training.
47
+ - Improved Optimization with Learning Rate Adaptation: Subnetworks trained for different lengths have distinct learning rate schedules, with dynamic relative scaling driven by weight norm statistics.
48
+ - Better Performance and Broad Applicability: Our method not only trains networks fast, but also yields excellent generalization accuracy, even outperforming the original fixed-size models. Flexibility in designing a network growth curve allows choosing different trade-offs between training resources and accuracy. Furthermore, adopting an adaptive batch size schedule provides acceleration in terms of wall-clock training time. We demonstrate results on image classification and machine translation tasks, across various network architectures.
49
+
50
+ # 2 Related Work
51
+
52
+ Network Growing. A diverse range of techniques train models by progressively expanding the network architecture [36, 9, 5, 37, 44]. Within this space, the methods of [4, 25, 39, 38, 10] are most relevant to our focus - incrementally growing network width across multiple training stages. Net2Net [4] proposes a gradient-free neuron splitting scheme via replication, enabling knowledge transfer from previous training phases; initialization of new weights follows simple heuristics. Liu et al.'s splitting approach [25] derives a gradient-based scheme for duplicating neurons by formulating a combinatorial optimization problem. FireFly [38] gains flexibility by also incorporating brand new neurons. Both methods improve Net2Net's initialization scheme by solving an optimization problem with auxiliary variables, at the cost of extra training effort. GradMax [10], in consideration of training dynamics, performs initialization via solving a singular value decomposition (SVD) problem.
53
+
54
+ Neural Architecture Search (NAS) and Pruning. Another subset of methods mix growth with dynamic reconfiguration aimed at discovering or pruning task-optimized architectures. Network Morphism [36] searches for efficient networks by extending layers while preserving the parameters. AutoGrow [37] takes an AutoML approach governed by heuristic growing and stopping policies. Yuan et al. [44] combine learned pruning with a sampling strategy that dynamically increases or decreases network size. Unlike these methods, we focus on the mechanics of growth when the target architecture is known, addressing the question of how to best transition weight and optimizer state to continue training an incrementally larger model. NAS and pruning are orthogonal to, though potentially compatible with, the technical approach we develop.
55
+
56
+ Hyperparameter Transfer. Multiple works [42, 29, 16] explore transferable hyperparameter (HP) tuning. The recent Tensor Program (TP) work of [40] and [41] focuses on zero-shot HP transfer across model scale and establishes a principled network parameterization scheme to facilitate HP transfer. This serves as an anchor for our strategy, though, as Section 3 details, modifications are required to account for dynamic growth.
57
+
58
+ Learning Rate Adaptation. Surprisingly, the existing spectrum of network growing techniques utilize relatively standard learning rate schedules and do not address potential discrepancy among subcomponents added at different phases. While general-purpose adaptive optimizers, e.g., AdaGrad [8], RMSProp [33], Adam [20], or AvaGrad [31], might ameliorate this issue, we choose to explicitly account for the discrepancy. As layer-adaptive learning rates (LARS) [12, 43] benefit in some contexts, we explore further learning rate adaption specific to both layer and growth stage.
59
+
60
+ # 3 Method
61
+
62
+ # 3.1 Parameterization and Optimization with Growing Dynamics
63
+
64
+ Functionality Preservation. We grow a network's capacity by expanding the width of computational units (e.g., hidden dimensions in linear layers, filters in convolutional layers). To illustrate our scheme, consider a 3-layer fully-connected network with ReLU activations $\phi$ at a growing stage $t$ :
65
+
66
+ $$
67
+ \boldsymbol {u} _ {t} = \phi \left(\boldsymbol {W} _ {t} ^ {x} \boldsymbol {x}\right) \quad \boldsymbol {h} _ {t} = \phi \left(\boldsymbol {W} _ {t} ^ {u} \boldsymbol {u} _ {t}\right) \quad \boldsymbol {y} _ {t} = \boldsymbol {W} _ {t} ^ {h} \boldsymbol {h} _ {t}, \tag {1}
68
+ $$
69
+
70
+ where $\pmb{x} \in \mathbb{R}^{C^x}$ is the network input, $\pmb{y}_t \in \mathbb{R}^{C^y}$ is the output, and $\pmb{u}_t \in \mathbb{R}^{C_t^u}$ , $\pmb{h}_t \in \mathbb{R}^{C_t^h}$ are the hidden activations. In this case, $\pmb{W}_t^x$ is a $C_t^u \times C^x$ matrix, while $\pmb{W}_t^u$ is $C_t^h \times C_t^u$ and $\pmb{W}_t^h$ is $C^y \times C_t^h$ . Our growing process operates by increasing the dimensionality of each hidden state, i.e., from $C_t^u$ and $C_t^h$ to $C_{t+1}^u$ and $C_{t+1}^h$ , effectively expanding the size of the parameter tensors for the next growing stage $t+1$ . The layer parameter matrices $\pmb{W}_t$ have their shapes changed accordingly and become $\pmb{W}_{t+1}$ . Figure 2 illustrates the process for initializing $\pmb{W}_{t+1}$ from $\pmb{W}_t$ at a growing step. $^1$
71
+
72
+ Following Figure 2(a), we first expand $W_{t}^{x}$ along the output dimension by adding two copies of new weights $V_{t}^{x}$ of shape $\frac{C_{t+1}^{u} - C_{t}^{u}}{2} \times C^{x}$ , generating new features $\phi(\mathbf{V}_{t}^{x}\mathbf{x})$ . The first set of activations become
73
+
74
+ $$
75
+ \boldsymbol {u} _ {t + 1} = \operatorname {c o n c a t} \left(\boldsymbol {u} _ {t}, \phi \left(\boldsymbol {V} _ {t} ^ {x} \boldsymbol {x}\right), \phi \left(\boldsymbol {V} _ {t} ^ {x} \boldsymbol {x}\right)\right), \tag {2}
76
+ $$
77
+
78
+ where $\mathrm{concat}$ denotes the concatenation operation. Next, we expand $\pmb{W}_{t}^{u}$ across both input and output dimensions, as shown in Figure 2(b). We initialize new weights $\pmb{Z}_{t}^{u}$ of shape $C_t^h\times \frac{C_{t + 1}^u - C_t^u}{2}$ and add to $\pmb{W}_{t}^{u}$ two copies of it with different signs: $+Z_{t}^{u}$ and $-Z_{t}^{u}$ . This preserves the output of the layer since $\phi (\pmb{W}_t^u\pmb {u}_t + \pmb{Z}_t^u\phi (\pmb{V}_t^x\pmb {x}) + (-\pmb{Z}_t^u)\phi (\pmb{V}_t^x\pmb {x})) = \phi (\pmb{W}_t^u\pmb {u}_t) = \pmb {h}_t$ . We then add two copies of new weights $\pmb{V}_{t}^{u}$ , which has shape $\frac{C_{t + 1}^{h} - C_{t}^{h}}{2}\times C_{t + 1}^{u}$ , yielding activations
79
+
80
+ $$
81
+ \boldsymbol {h} _ {t + 1} = \operatorname {c o n c a t} \left(\boldsymbol {h} _ {t}, \phi \left(\boldsymbol {V} _ {t} ^ {u} \boldsymbol {u} _ {t + 1}\right), \phi \left(\boldsymbol {V} _ {t} ^ {u} \boldsymbol {u} _ {t + 1}\right)\right). \tag {3}
82
+ $$
83
+
84
+ We similarly expand $\mathbf{W}_t^h$ with new weights $\mathbf{Z}_t^h$ to match the dimension of $h_{t+1}$ , as shown in Figure 2(c). The network's output after the growing step is:
85
+
86
+ $$
87
+ \begin{array}{l} \boldsymbol {y} _ {t + 1} = \boldsymbol {W} _ {t} ^ {h} \boldsymbol {h} _ {t} + \boldsymbol {Z} _ {t} ^ {h} \phi \left(\boldsymbol {V} _ {t} ^ {u} \boldsymbol {u} _ {t + 1}\right) + \left(- \boldsymbol {Z} _ {t} ^ {h}\right) \phi \left(\boldsymbol {V} _ {t} ^ {u} \boldsymbol {u} _ {t + 1}\right) \tag {4} \\ = \boldsymbol {W} _ {t} ^ {h} \boldsymbol {h} _ {t} = \boldsymbol {y} _ {t}, \\ \end{array}
88
+ $$
89
+
90
+ which preserves the original output features in Eq. 1. Appendix B provides illustrations for more layers.
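+ For concreteness, the following NumPy sketch (our own illustration, not the paper's released code) performs one growth step for the 3-layer ReLU network of Eqs. 1-4 and checks that the output is preserved; the widths, the random draws for $V$ and $Z$, and the omission of symmetry-breaking noise are illustrative assumptions.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ relu = lambda z: np.maximum(z, 0.0)
+
+ # Widths before (stage t) and after (stage t+1) growth; values are illustrative.
+ Cx, Cu, Ch, Cy = 8, 6, 6, 4
+ Cu_new, Ch_new = 10, 10
+
+ Wx = rng.normal(size=(Cu, Cx))   # input layer
+ Wu = rng.normal(size=(Ch, Cu))   # hidden layer
+ Wh = rng.normal(size=(Cy, Ch))   # output layer
+ x = rng.normal(size=Cx)
+
+ u = relu(Wx @ x); h = relu(Wu @ u); y_old = Wh @ h        # Eq. 1
+
+ # Growth step: duplicate new weights V along expanded outputs (Eqs. 2-3) and
+ # add +Z / -Z pairs along expanded inputs so their contributions cancel (Eq. 4).
+ Vx = rng.normal(size=((Cu_new - Cu) // 2, Cx))
+ Zu = rng.normal(size=(Ch, (Cu_new - Cu) // 2))
+ Vu = rng.normal(size=((Ch_new - Ch) // 2, Cu_new))
+ Zh = rng.normal(size=(Cy, (Ch_new - Ch) // 2))
+
+ Wx_new = np.vstack([Wx, Vx, Vx])
+ Wu_new = np.vstack([np.hstack([Wu, Zu, -Zu]), Vu, Vu])
+ Wh_new = np.hstack([Wh, Zh, -Zh])
+
+ u_new = relu(Wx_new @ x); h_new = relu(Wu_new @ u_new); y_new = Wh_new @ h_new
+ assert np.allclose(y_new, y_old)  # functionality is preserved across the growth step
+ ```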
91
+
92
+ ![](images/b4b4cfdd9044c5e9f3f54bd97a476d9d63e7e0ad738fd4502567836dd01023fd.jpg)
93
+ (a) Input Layer
94
+
95
+ ![](images/f80f665dfa88b88b3e36afc0da071def33a51b471c8ecd211e9ec84a660696e9.jpg)
96
+ (b) Hidden Layer
97
+
98
+ ![](images/1909c35a40cc62cbb2652158341527f960e49761fa7c6798160c99287ff255aa.jpg)
99
+ (c) Output Layer
100
+ Figure 2: Initialization scheme. In practice, we also add noise to the expanded parameter sets for symmetry breaking.
101
+
102
+ Weight Initialization with Variance Transfer (VT). Yang et al. [41] investigate weight scaling with width at initialization, allowing hyperparameter transfer by calibrating variance across model size. They modify the variance of output layer weights from the commonly used $\frac{1}{\mathrm{fan}_{in}}$ to $\frac{1}{\mathrm{fan}_{in}^2}$ . We adopt the same correction for the added weights at the new width: $W^{h}$ and $Z^{h}$ are initialized with variances of $\frac{1}{(C_t^{h})^2}$ and $\frac{1}{(C_{t + 1}^{h})^2}$ , respectively.
103
+
104
+ However, this correction considers training differently-sized models separately, which fails to accommodate the training dynamics in which width grows incrementally. Assuming that the weights of the old subnetwork follow $\pmb{W}_t^h \sim \mathcal{N}(0, \frac{1}{(C_t^{h})^2})$ (which holds at initialization), we make them compatible
105
+
106
+ with the new weight tensor parameterization by rescaling them with the $\mathrm{fan}_{in}$ ratio as $\boldsymbol{W}_t^{h'} = \boldsymbol{W}_t^h \cdot \frac{C_t^h}{C_{t+1}^h}$ . See Table 1 (top). Appendix A provides detailed analysis.
107
+
108
+ This parameterization rule transfers to modern CNNs with batch normalization (BN). Given a weight scaling ratio of $c$ , the running mean $\mu$ and variance $\sigma$ of BN layers are modified as $c\mu$ and $c^2\sigma$ , respectively.
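+ As an illustration, a minimal sketch of this transition for an output layer and for BN statistics is given below; the function names and the append-only layout of the new weights are our assumptions, not the paper's code.
+
+ ```python
+ import numpy as np
+
+ def grow_output_layer(W_old, C_h_old, C_h_new, rng):
+     # Rescale old weights by the fan_in ratio, then append +Z / -Z columns whose
+     # entries have variance 1 / C_h_new**2 (the muP-style correction of [41]).
+     W_scaled = W_old * (C_h_old / C_h_new)
+     n_new = (C_h_new - C_h_old) // 2
+     Z_h = rng.normal(scale=1.0 / C_h_new, size=(W_old.shape[0], n_new))
+     return np.hstack([W_scaled, Z_h, -Z_h])
+
+ def rescale_bn_stats(running_mean, running_var, c):
+     # When the preceding weights are rescaled by c, BN running statistics become
+     # c * mean and c**2 * var so the normalized activations are unchanged.
+     return c * running_mean, (c ** 2) * running_var
+
+ # Example: grow an output layer from fan-in 64 to 96.
+ rng = np.random.default_rng(0)
+ W_new = grow_output_layer(rng.normal(scale=1 / 64, size=(10, 64)), 64, 96, rng)
+ assert W_new.shape == (10, 96)
+ ```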
109
+
110
+ Stage-wise Learning Rate Adaptation (LRA). Following [41], we employ a learning rate scaling factor of $\propto \frac{1}{\mathrm{fan}_{in}}$ on the output layer when using SGD, compensating for the initialization scheme. However, subnetworks from different growth stages still share a global learning rate, even though they have been trained for different durations. This may cause divergent behavior among the corresponding weights, making the training iterations after a growth step sensitive to the scale of the newly-initialized weights. Instead of adjusting newly added parameters via local optimization [38, 10], we govern the update of each subnetwork in a stage-wise manner.
111
+
112
+ Let $\mathcal{W}_t$ denote the parameter variables of a layer at a growth stage $t$ , where we let $\boldsymbol{W}_t$ and $\boldsymbol{W}_t'$ correspond to the same set of variables such that $\mathcal{W}_{t+1} \setminus \mathcal{W}_t$ denotes the new parameter variables whose values are initialized with $\boldsymbol{Z}_t$ and $\boldsymbol{V}_t$ . Moreover, let $\boldsymbol{W}_{\Delta k}$ and $\boldsymbol{G}_{\Delta k}$ denote the values and gradients of $\mathcal{W}_k \setminus \mathcal{W}_{k-1}$ . We adapt the learning rate used to update each sub-weight $\boldsymbol{W}_{\Delta k}$ , for
113
+
114
+ Table 1: Parameterization and optimization transition for different layers during growing. $C_t$ and $C_{t+1}$ denote the input dimension before and after a growth step.
115
+
116
+ <table><tr><td></td><td></td><td>Input Layer</td><td>Hidden Layer</td><td>Output Layer</td></tr><tr><td rowspan="2">Init.</td><td>Old Re-scaling</td><td>$1$</td><td>$\sqrt{C_t^u / C_{t+1}^u}$</td><td>$C_t^h / C_{t+1}^h$</td></tr><tr><td>New Init.</td><td>$1 / C^x$</td><td>$1 / C_{t+1}^u$</td><td>$1 / (C_{t+1}^h)^2$</td></tr><tr><td rowspan="2">Adapt.</td><td>0-th Stage</td><td>$1$</td><td>$1$</td><td>$1 / C_0^h$</td></tr><tr><td>$t$-th Stage</td><td>$\|W_{\Delta t}^x\| / \|W_{\Delta 0}^x\|$</td><td>$\|W_{\Delta t}^u\| / \|W_{\Delta 0}^u\|$</td><td>$\|W_{\Delta t}^h\| / \|W_{\Delta 0}^h\|$</td></tr></table>
117
+
118
+ $0 \leq k \leq t$ , as follows:
119
+
120
+ $$
121
+ \eta_ {k} = \eta_ {0} \cdot \frac {f \left(\boldsymbol {W} _ {\Delta k}\right)}{f \left(\boldsymbol {W} _ {\Delta 0}\right)} \quad \boldsymbol {W} _ {\Delta k} \leftarrow \boldsymbol {W} _ {\Delta k} - \eta_ {k} \boldsymbol {G} _ {\Delta k}, \tag {5}
122
+ $$
123
+
124
+ where $\eta_0$ is the base learning rate, $f$ is a function that maps subnetworks of different stages to corresponding train-time statistics, and $W_{\Delta 0}$ are the layer's parameter variables at the first growth stage. Table 1 (bottom) summarizes our LR adaptation rule for SGD when $f$ is instantiated as the weight norm, providing a stage-wise extension of the layer-wise adaptation method LARS [12], i.e., $LR \propto ||W||$ . Alternative heuristics are possible; see Appendices C and D.
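+ A minimal SGD-style sketch of Eq. 5 with $f$ taken as the weight norm follows; the list-of-slices interface and variable names are our assumptions for illustration, and the slicing of a layer into per-stage sub-weights is assumed to be handled elsewhere.
+
+ ```python
+ import torch
+
+ def stagewise_lr_factors(subweights):
+     # subweights[k] holds the parameter slice added at growth stage k (stage 0 first).
+     norm0 = subweights[0].detach().norm()
+     return [(w.detach().norm() / norm0).item() for w in subweights]
+
+ def stagewise_sgd_step(subweights, grads, base_lr):
+     # Plain SGD update where each stage-k slice uses lr = base_lr * ||W_dk|| / ||W_d0|| (Eq. 5).
+     factors = stagewise_lr_factors(subweights)
+     with torch.no_grad():
+         for w, g, f in zip(subweights, grads, factors):
+             w -= base_lr * f * g
+ ```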
125
+
126
+ # 3.2 Flexible and Efficient Growth Scheduler
127
+
128
+ We train the model for $T_{total}$ epochs by expanding the channel number of each layer to $C_{final}$ across $N$ growth phases. Existing methods [25, 38] do not provide a systematic way to distribute training resources across a growth trajectory. Toward maximizing efficiency, we experiment with a coupling between model size and training epoch allocation.
129
+
130
+ Architectural Scheduler. We denote initial channel width as $C_0$ and expand exponentially:
131
+
132
+ $$
133
+ C_{t} = \left\{ \begin{array}{ll} C_{t-1} + \lfloor p_{c} C_{t-1} \rfloor_{2} & \text{if } t < N - 1 \\ C_{\text{final}} & \text{if } t = N - 1 \end{array} \right. \tag{6}
134
+ $$
135
+
136
+ where $\lfloor \cdot \rfloor_{2}$ rounds to the nearest even number and $p_c$ is the growth rate between stages.
137
+
138
+ Epoch Scheduler. We denote the number of epochs assigned to the $t$-th training stage as $T_{t}$ , with $\sum_{t=0}^{N-1} T_{t} = T_{total}$ . We similarly adapt $T_{t}$ via an exponential growing scheduler:
139
+
140
+ # Algorithm 1: Growing using Var. Transfer and Learning Rate Adapt. with Flexible Scheduler
141
+
142
+ Input: Data $X$ , labels $Y$ , task loss $L$
143
+
144
+ Output: Grown model $\mathcal{W}$
145
+
146
+ Initialize: $\mathcal{W}_0$ with $C_0, T_0, B_0, \eta_0$
147
+
148
+ for $t = 0$ to $N - 1$ do
149
+
150
+ if $t > 0$ then
151
+
152
+ Init. $\mathcal{W}_{t}$ from $\mathcal{W}_{t - 1}$ using VT in Table 1.
153
+
154
+ Update $C_t$ and $T_{t}$ using Eq. 6 and Eq. 7.
155
+
156
+ Update $B_{t}$ using Eq. 8 (optional)
157
+
158
+ $\operatorname{Iter}_{total} = T_t * \text{len}(X) // B_t$
159
+
160
+ end if
161
+
162
+ for iter $= 1$ to $\operatorname{Iter}_{total}$ do
163
+
164
+ Forward and calculate $l = L(\mathcal{W}_t(\pmb{x}), \pmb{y})$ .
165
+
166
+ Back propagation with $l$ .
167
+
168
+ Update each sub-component using Eq. 5.
169
+
170
+ end for
171
+
172
+ end for
173
+
174
+ return $\mathcal{W}_{N - 1}$
175
+
176
+ $$
177
+ T_{t} = \left\{ \begin{array}{ll} T_{t-1} + \lfloor p_{t} T_{t-1} \rceil & \text{if } t < N - 1 \\ T_{\text{total}} - \sum_{i=0}^{N-2} T_{i} & \text{if } t = N - 1 \end{array} \right. \tag{7}
178
+ $$
179
+
180
+ Wall-clock Speedup via Batch Size Adaptation. Though the smaller architectures in early growth stages require fewer FLOPs, hardware capabilities may still restrict practical gains. When growing width, in order to ensure that small models fully utilize the benefits of GPU parallelism, we adapt the batch size along with the exponentially-growing architecture, in reverse order:
181
+
182
+ $$
183
+ B_{t-1} = \left\{ \begin{array}{ll} B_{\text{base}} & \text{if } t = N \\ B_{t} + \lfloor p_{b} B_{t} \rceil & \text{if } t < N \end{array} \right. \tag{8}
184
+ $$
185
+
186
+ where $B_{base}$ is the batch size of the large baseline model. Algorithm 1 summarizes our full method.
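+ The three schedules (Eqs. 6-8) can be computed ahead of training. The sketch below is our own illustration; in particular, the exact rounding conventions for $\lfloor \cdot \rfloor_2$ (nearest even number) and $\lfloor \cdot \rceil$ (nearest integer) are assumptions.
+
+ ```python
+ def round_even(x):
+     return int(round(x / 2.0)) * 2          # assumed semantics of the floor_2 operator in Eq. 6
+
+ def width_schedule(c0, c_final, p_c, n):    # Eq. 6
+     ws = [c0]
+     for t in range(1, n):
+         ws.append(c_final if t == n - 1 else ws[-1] + round_even(p_c * ws[-1]))
+     return ws
+
+ def epoch_schedule(t0, t_total, p_t, n):    # Eq. 7
+     ts = [t0]
+     for t in range(1, n):
+         ts.append(t_total - sum(ts) if t == n - 1 else ts[-1] + round(p_t * ts[-1]))
+     return ts
+
+ def batch_schedule(b_base, p_b, n):         # Eq. 8, computed in reverse order
+     bs = [b_base]
+     for _ in range(n - 1):
+         bs.append(bs[-1] + round(p_b * bs[-1]))
+     return list(reversed(bs))
+
+ # Illustrative call: one layer of a seed network growing toward 64 channels in 9 stages.
+ print(width_schedule(16, 64, 0.2, 9))
+ ```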
187
+
188
+ # 4 Experiments
189
+
190
+ We evaluate on image classification and machine translation tasks. For image classification, we use CIFAR-10 [21], CIFAR-100 [21] and ImageNet [6]. For neural machine translation, we use the IWSLT'14 dataset [2] and report the BLEU [28] score on German-to-English (De-En) translation.
191
+
192
+ Large Baselines via Fixed-size Training. We use VGG-11 [32] with BatchNorm [19], ResNet-20 [15], and MobileNetV1 [17] for CIFAR-10, and VGG-19 with BatchNorm, ResNet-18, and MobileNetV1 for CIFAR-100. We follow [18] for data augmentation and processing, adopting random shifts/mirroring and channel-wise normalization. CIFAR-10 and CIFAR-100 models are trained for 160 and 200 epochs respectively, with a batch size of 128 and initial learning rate (LR) of 0.1 using SGD. We adopt a cosine LR schedule and set the weight decay and momentum to 5e-4 and 0.9, respectively. For ImageNet, we train the baseline ResNet-50 and MobileNetV1 [17] using SGD with batch sizes of 256 and 512, respectively. We adopt the same data augmentation scheme as [14], the cosine LR scheduler with initial LR of 0.1, weight decay of 1e-4 and momentum of 0.9.
193
+
194
+ For IWSLT'14, we train an Encoder-Decoder Transformer (6 attention blocks each) [34]. We set the width as $d_{model} = \frac{1}{4}d_{ffn} = 512$ , the number of heads $n_{head} = 8$ , and $d_k = d_q = d_v = d_{model} / n_{head} = 64$ . We train the model using Adam for 20 epochs with learning rate 1e-3 and $(\beta_1, \beta_2) = (0.9, 0.98)$ . Batch size is 1500 and we use 4000 warm-up iterations.
195
+
196
+ Table 2: Growing ResNet-20, VGG-11, and MobileNetV1 on CIFAR-10.
197
+
198
+ <table><tr><td rowspan="2">Method</td><td colspan="2">ResNet-20</td><td colspan="2">VGG-11</td><td colspan="2">MobileNetv1</td></tr><tr><td>Train Cost(%)</td><td>Test Accuracy(%) ↑</td><td>Train Cost(%)</td><td>Test Accuracy(%) ↑</td><td>Train Cost(%)</td><td>Test Accuracy(%) ↑</td></tr><tr><td>Large Baseline</td><td>100</td><td>92.62 ± 0.15</td><td>100</td><td>92.14 ± 0.22</td><td>100</td><td>92.27 ± 0.11</td></tr><tr><td>Net2Net</td><td>54.90</td><td>91.60 ± 0.21</td><td>52.91</td><td>91.78 ± 0.27</td><td>53.80</td><td>90.34 ± 0.20</td></tr><tr><td>Splitting</td><td>70.69</td><td>91.80 ± 0.10</td><td>63.76</td><td>91.88 ± 0.15</td><td>65.92</td><td>91.50 ± 0.06</td></tr><tr><td>FireFly-split</td><td>58.47</td><td>91.78 ± 0.11</td><td>56.18</td><td>91.91 ± 0.15</td><td>56.37</td><td>91.56 ± 0.06</td></tr><tr><td>FireFly</td><td>68.96</td><td>92.10 ± 0.13</td><td>60.24</td><td>92.08 ± 0.16</td><td>62.12</td><td>91.69 ± 0.07</td></tr><tr><td>Ours</td><td>54.90</td><td>92.53 ± 0.11</td><td>52.91</td><td>92.34 ± 0.15</td><td>53.80</td><td>92.01 ± 0.10</td></tr></table>
199
+
200
+ Table 3: Growing ResNet-18, VGG-19, and MobileNetV1 on CIFAR-100.
201
+
202
+ <table><tr><td rowspan="2">Method</td><td colspan="2">ResNet-18</td><td colspan="2">VGG-19</td><td colspan="2">MobileNetv1</td></tr><tr><td>Train Cost(%) ↓</td><td>Test Accuracy(%) ↑</td><td>Train Cost(%) ↓</td><td>Test Accuracy(%) ↑</td><td>Train Cost(%) ↓</td><td>Test Accuracy(%) ↑</td></tr><tr><td>Large Baseline</td><td>100</td><td>78.36 ± 0.12</td><td>100</td><td>72.59 ± 0.23</td><td>100</td><td>72.13 ± 0.13</td></tr><tr><td>Net2Net</td><td>52.63</td><td>76.48 ± 0.20</td><td>52.08</td><td>71.88 ± 0.24</td><td>52.90</td><td>70.01 ± 0.20</td></tr><tr><td>Splitting</td><td>68.01</td><td>77.01 ± 0.12</td><td>60.12</td><td>71.96 ± 0.12</td><td>58.39</td><td>70.45 ± 0.10</td></tr><tr><td>FireFly-split</td><td>56.11</td><td>77.22 ± 0.11</td><td>54.64</td><td>72.19 ± 0.14</td><td>54.36</td><td>70.69 ± 0.11</td></tr><tr><td>FireFly</td><td>65.77</td><td>77.25 ± 0.12</td><td>57.48</td><td>72.79 ± 0.13</td><td>56.49</td><td>70.99 ± 0.10</td></tr><tr><td>Ours</td><td>52.63</td><td>78.12 ± 0.15</td><td>52.08</td><td>73.26 ± 0.14</td><td>52.90</td><td>71.53 ± 0.13</td></tr></table>
203
+
204
+ Implementation Details. We compare with the growing methods Net2Net [4], Splitting [25], FireFly-split, FireFly [38] and GradMax [10]. In our method, the noise added for symmetry breaking is 0.001 times the norm of the initialization. We re-initialize the momentum buffer at each growing step when using SGD, while preserving it for adaptive optimizers (e.g., Adam, AvaGrad).
205
+
206
+ For image classification, we run the comparison methods except GradMax alongside our algorithm for all architectures under the same growing scheduler. For the architecture scheduler, we set $p_c$ to 0.2 and $C_0$ to 1/4 of the large baselines in Eq. 6 for all layers, and grow the seed architecture toward the large one within $N = 9$ stages. For the epoch scheduler, we set $p_t$ to 0.2 and $T_0$ to 8, 10, and 4 in Eq. 7 on CIFAR-10, CIFAR-100, and ImageNet, respectively. Total training epochs $T_{total}$ are the same as for the respective large fixed-size models. We train the models and report results averaged over 3 random seeds.
207
+
208
+ For machine translation, we grow the encoder and decoder layers' widths while fixing the embedding layer dimension for a consistent positional encoding table. The total number of growing stages is 4, each trained for 5 epochs. The initial width is $1/8$ of the large baseline (i.e., $d_{model} = 64$ and $d_{k,q,v} = 8$ ). We set the growing ratio $p_c$ as 1.0 so that $d_{model}$ evolves as 64, 128, 256 and 512.
209
+
210
+ We train all the models on an NVIDIA 2080Ti 11GB GPU for CIFAR-10, CIFAR-100, and IWSLT'14, and two NVIDIA A40 48GB GPUs for ImageNet.
211
+
212
+ # 4.1 CIFAR Results
213
+
214
+ All models grow from a small seed architecture to the full-sized one in 9 stages, each trained for $\{8,9,11,13,16,19,23,28,33\}$ epochs (160 total) on CIFAR-10, and $\{10,12,14,17,20,24,29,35,39\}$ (200 total) on CIFAR-100. Net2Net follows the design of growing by splitting via simple neuron replication, hence achieving the same training efficiency as our gradient-free method under the same growing schedule. Splitting and Firefly require additional training effort for their neuron selection schemes and allocate extra GPU memory for auxiliary variables during the local optimization stage. This is computationally expensive, especially when growing a large model.
215
+
216
+ ResNet-20, VGG-11, and MobileNetV1 on CIFAR-10. Table 2 shows results in terms of test accuracy and training cost calculated from overall FLOPs. For ResNet-20, Splitting and Firefly achieve better test accuracy than Net2Net, which suggests the additional optimization benefits neuron selection at the cost of training efficiency. Our method requires only $54.9\%$ of the baseline training cost and outperforms all competing methods, while yielding only a 0.09 p.p. (percentage points) performance degradation compared to the static baseline. Moreover, our method even outperforms the large fixed-size VGG-11 by 0.20 p.p. test accuracy, while taking only $52.91\%$ of its training cost. For MobileNetV1, our method also achieves the best trade-off between training efficiency and test accuracy among all competitors.
217
+
218
+ ResNet-18, VGG-19, and MobileNetV1 on CIFAR-100. We also evaluate all methods on CIFAR-100 using different network architectures. Results in Table 3 show that Firefly consistently achieves better test accuracy than Firefly-split, suggesting that adding new neurons provides more flexibility for exploration than merely splitting. Both Firefly and our method achieve better performance than the original VGG-19, suggesting that network growing might have an additional regularizing effect. Our method yields the best accuracy and the largest training cost reduction.
219
+
220
+ Table 4: ResNet-50 and MobileNetV1 on ImageNet.
221
+
222
+ <table><tr><td rowspan="2">Method</td><td colspan="2">ResNet-50</td><td colspan="2">MobileNet-v1</td></tr><tr><td>Train Cost(%) ↓</td><td>Test Acc.(%)</td><td>Train Cost(%) ↓</td><td>Test Acc.(%)</td></tr><tr><td>Large</td><td>100</td><td>76.72 ± 0.18</td><td>100</td><td>70.80 ± 0.19</td></tr><tr><td>Net2Net</td><td>60.12</td><td>74.89 ± 0.21</td><td>63.72</td><td>66.19 ± 0.20</td></tr><tr><td>FireFly</td><td>71.20</td><td>75.01 ± 0.11</td><td>86.67</td><td>66.40 ± 0.14</td></tr><tr><td>GradMax</td><td>-</td><td>-</td><td>86.67</td><td>68.60 ± 0.20</td></tr><tr><td>Ours</td><td>60.12</td><td>75.90 ± 0.14</td><td>63.72</td><td>69.91 ± 0.16</td></tr></table>
223
+
224
+ Table 5: Transformer on IWSLT'14.
225
+
226
+ <table><tr><td rowspan="2">Method</td><td colspan="2">Transformer</td></tr><tr><td>Train Cost(%) ↓</td><td>BLEU↑</td></tr><tr><td>Large</td><td>100</td><td>32.82 ± 0.21</td></tr><tr><td>Net2Net</td><td>64.64</td><td>30.97 ± 0.35</td></tr><tr><td>Ours-w/o buffer</td><td>64.64</td><td>31.44 ± 0.18</td></tr><tr><td>Ours-w buffer</td><td>64.64</td><td>31.62 ± 0.16</td></tr><tr><td>Ours-w buffer-RA</td><td>64.64</td><td>32.01 ± 0.16</td></tr></table>
227
+
228
+ # 4.2 ImageNet Results
229
+
230
+ We first grow ResNet-50 on ImageNet and compare the performance of our method to Net2Net and FireFly under the same epoch schedule: $\{4, 4, 5, 6, 8, 9, 11, 14, 29\}$ (90 total) with 9 growing phases. We also grow MobileNetV1 from a small seed architecture, which is more challenging than ResNet-50. We train Net2Net and our method using the same scheduler as for ResNet-50. We also compare with Firefly-Opt (a variant of FireFly) and GradMax, and report their best results from [10]. Note that both methods not only adopt additional local optimization but also train with a less efficient growing scheduler: the final full-sized architecture needs to be trained for a $75\%$ fraction of the schedule, while ours only requires $32.2\%$ . Table 4 shows that our method dominates all competing approaches.
231
+
232
+ # 4.3 IWSLT14 De-En Results
233
+
234
+ We grow a Transformer from $d_{model} = 64$ to $d_{model} = 512$ within 4 stages, each trained for 5 epochs using Adam. Applying gradient-based growing methods to the Transformer architecture is nontrivial due to their domain-specific design of local optimization. As such, we only compare with Net2Net. We also design variants of our method for self-comparison, based on the adaptation rules for Adam in Appendix C. As shown in Table 5, our method generalizes well to the Transformer architecture.
235
+
236
+ # 4.4 Analysis
237
+
238
+ Ablation Study. We show the effects of turning on/off each of our modifications to the baseline optimization process of Net2Net: (1) Growing: adding neurons with functionality preservation. (2) Growing+VT: applies only variance transfer. (3) Growing+RA: applies only learning rate adaptation. (4) Full method. We conduct experiments using both ResNet-20 on CIFAR-10 and ResNet-18 on CIFAR-100. As shown in Table 6, the different variants of our growing method not only outperform Net2Net consistently but also reduce the randomness (std. over 3 runs) caused by random replication. We also see that both RA and VT boost the baseline growing method; the two components are designed to work in concert and together account for the empirical leap. Our full method achieves the best test accuracy.
239
+
240
+ <table><tr><td>Variant</td><td>Res-20 on C-10 (%)</td><td>Res-18 on C-100 (%)</td></tr><tr><td>Net2Net</td><td>91.60 ± 0.21(+0.00)</td><td>76.48 ± 0.20(+0.00)</td></tr><tr><td>Growing</td><td>91.62 ± 0.12(+0.02)</td><td>76.82 ± 0.17(+0.34)</td></tr><tr><td>Growing+VT</td><td>92.00 ± 0.10(+0.40)</td><td>77.27 ± 0.14(+0.79)</td></tr><tr><td>Growing+RA</td><td>92.24 ± 0.11(+0.64)</td><td>77.74 ± 0.16(+1.26)</td></tr><tr><td>Full</td><td>92.53 ± 0.11(+0.93)</td><td>78.12 ± 0.15(+1.64)</td></tr></table>
241
+
242
+ Table 6: Ablation study on VT and RA components.
243
+
244
+ ![](images/16280ba9f7ec2820faf31862b18f9932879fd20e548e06367d1124c0b2dc2406.jpg)
245
+ (a) Train Loss
246
+ Figure 3: Baselines of the 4-layer simple CNN.
247
+
248
+ ![](images/bdd287595d76dbb242389c0f675b2d275501f4198b5025d76df710a42429018d.jpg)
249
+ (b) Test Accuracy
250
+
251
+ ![](images/8a8968b5d631281deb55914118ea4c4f1d49fa7722ca5834c262bef55003a712.jpg)
252
+ (a) Test Accuracy
253
+
254
+ ![](images/b20aa43333482c6a1b80adb2cccbadb3b26e71e035005a743b0b52944d0e6fac.jpg)
255
+ (b) Global LR
256
+ Figure 4: (a) Performance with Var. Transfer and Rate Adaptation when growing ResNet-20. (b) and (c) visualize the gradients of different sub-components during training in the last two stages.
257
+
258
+ ![](images/a1b5f11d4bc04a3c5fcd79d492896d2d1debf21495353a5c4b17f32b311cba56.jpg)
259
+ (c) Rate Adaptation
260
+
261
+ Justification for Variance Transfer. We train a simple neural network with 4 convolutional layers on CIFAR-10. The network consists of 4 resolution-preserving convolutional layers; each convolution has 64, 128, 256 and 512 channels, a $3 \times 3$ kernel, and is followed by BatchNorm and ReLU activations. Max-pooling is applied after each layer for resolution downsampling by 4, 2, 2, and 2. These CNN layers are followed by a linear layer for classification. We first instantiate four fixed-size variants of this network, given by combinations of training epochs $\in \{13(1 \times), 30(2.3 \times)\}$ and initialization methods $\in \{\text{standard}, \mu \text{Transfer} [41]\}$ . We also grow from a thin architecture within 3 stages, where the channel number of each layer starts at only $1/4$ of the original one, i.e., $\{16, 32, 64, 128\} \to \{32, 64, 128, 256\} \to \{64, 128, 256, 512\}$ ; each stage is trained for 10 epochs.
262
+
263
+ For network growing, we compare the baselines with standard initialization and variance transfer. We train all baselines using SGD, with weight decay set to 0 and learning rates sweeping over $\{0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 0.8, 1.0, 1.2, 1.5, 2.0\}$ . In Figure 3(b), growing with Var. Transfer (blue) achieves overall better test accuracy than standard initialization (orange). Large baselines with merely $\mu$Transfer at initialization consistently underperform standard ones, which validates that the compensation from the LR rescaling in [41] is necessary. We also observe, in both Figure 3(a) and 3(b), that all growing baselines outperform fixed-size ones under the same training cost, demonstrating positive regularization effects. We also show the effect of our initialization scheme by comparing test performance on standard ResNet-20 on CIFAR-10. As shown in Figure 4(a), compared with standard initialization, our variance transfer not only achieves better final test accuracy but also appears more stable. See Appendix F for a fully-connected network example.
264
+
265
+ Justification for Learning Rate Adaptation. We investigate the value of our proposed stage-wise learning rate adaptation as an optimizer for growing networks. As shown by the red curve in Figure 3, rate adaptation not only achieves the best train loss and test accuracy among all baselines, but also appears more robust across different learning rates. In Figure 4(a), rate adaptation further improves final test accuracy for ResNet-20 on CIFAR-10, under the same initialization scheme.
266
+
267
+ Figure 4(b) and 4(c) visualize the gradients of different sub-components for the 17-th convolutional layer of ResNet-20 during the last two growing phases, under standard SGD and rate adaptation, respectively. Our rate adaptation mechanism rebalances the gradient contributions of subcomponents added at different stages and trained for different durations, so that they exhibit lower divergence than under a global LR. In Figure 5, we observe that the LR for the newly added Subnet-8 (red) in the last stage starts at around $1.8 \times$ the base LR, then quickly adapts to a smoother level. This demonstrates that our method is able to automatically adapt the updates applied to new weights, without any additional local optimization costs [39, 10]. All of the above shows that our method has a positive effect in terms of stabilizing training dynamics, which is lost if one attempts to train different subcomponents using a global LR scheduler. Appendix D provides more analysis.
268
+
269
+ Flexible Growing Scheduler. Our growing scheduler gains the flexibility to explore the best trade-offs between training budgets and test performance in a unified configuration scheme (Eq. 6 and Eq. 7). We compare the exponential epoch scheduler $(p_t \in \{0.2, 0.25, 0.3, 0.35\})$ to a linear one $(p_t = 0)$ when growing ResNet-20 on CIFAR-10, denoted as the 'Exp.' and 'Linear' baselines in Figure 6. Both baselines use architectural schedulers with $p_c \in \{0.2, 0.25, 0.3, 0.35\}$ , each generating trade-offs between train cost and test accuracy by varying $T_0$ . The exponential scheduler yields better
270
+
271
+ ![](images/9822b2a5771f3941f8f2a8184f7f72ffd98c417e5903934699fca45632dbc7ce.jpg)
272
+ Figure 5: Visualization of our adaptive LR.
273
+
274
+ ![](images/783551f857b31c3e80eb7aad5817e81d9b967f6faa96f43a9b621470a985fb66.jpg)
275
+ Figure 6: Comparison of growing schedules.
276
+
277
+ ![](images/03ca3df7625e12aed9944bdc28bc8228f667f4db1487c5111e2f049690dfb153.jpg)
278
+ (a) GPU memory allocations
279
+
280
+ ![](images/3c9cf99ef079fc751a9a3726dd7594a6e02ca0e4b203112c04bc32088014a58f.jpg)
281
+ (b) Training time
282
+ Figure 7: Track of GPU memory and wall clock training time for each growing phase of ResNet-18.
283
+
284
+ overall trade-offs than the linear one with the same $p_c$ . In addition to different growing schedulers, we also plot a baseline for fixed-size training with different models. Growing methods with both schedulers consistently outperform the fixed-size baselines, demonstrating that the regularization effect of network growth benefits generalization performance.
285
+
286
+ Wall-clock Training Speedup. We benchmark GPU memory consumption and wall-clock training time on CIFAR-100 for each stage during training on a single NVIDIA 2080Ti GPU. The large baseline ResNet-18 trains for 140 minutes to achieve $78.36\%$ accuracy. As shown by the green bar in Figure 7(b), the growing method achieves only marginal wall-clock acceleration under the same fixed batch size: growing ResNet-18 takes 120 minutes to achieve $78.12\%$ accuracy. The low GPU utilization shown by the green bar in Figure 7(a) hinders the FLOPs savings from translating into real-world training acceleration. In contrast, the red bars in Figure 7 show that our batch size adaptation yields a large wall-clock acceleration by filling GPU memory, and the corresponding parallel execution resources, while maintaining test accuracy. ResNet-18 trains in 84 minutes ( $1.7\times$ speedup) and achieves $78.01\%$ accuracy.
287
+
288
+ # 5 Conclusion
289
+
290
+ We tackle a set of optimization challenges in network growing and develop a corresponding set of techniques to address them: initialization with functionality preservation, variance transfer, and learning rate adaptation. Each of these techniques can be viewed as 'upgrading' a component of static network training into a counterpart that accounts for dynamic growing. There is a one-to-one mapping between these replacements and a guiding principle governing the formulation of each. Together, they accelerate training without impairing model accuracy, a result that uniquely separates our approach from competitors. Applications to widely-used architectures on image classification and machine translation tasks demonstrate that our method bests the accuracy of competitors while saving considerable training cost.
291
+
292
+ # Acknowledgments and Disclosure of Funding
293
+
294
+ This work was supported by the National Science Foundation under grant CNS-1956180 and the University of Chicago CERES Center.
295
+
296
+ # References
297
+
298
+ [1] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS, 2020.
299
+ [2] Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th IWSLT evaluation campaign. In Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign, 2014.
300
+ [3] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. arXiv:1706.05587, 2017.
301
+ [4] Tianqi Chen, Ian J. Goodfellow, and Jonathon Shlens. Net2Net: Accelerating learning via knowledge transfer. In ICLR, 2016.
302
+ [5] Xiaoliang Dai, Hongxu Yin, and Niraj K. Jha. NeST: A neural network synthesis tool based on a grow-and-prune paradigm. IEEE Trans. Computers, 2019.
303
+ [6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
304
+ [7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.
305
+ [8] John C. Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011.
306
+ [9] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Simple and efficient architecture search for convolutional neural networks. In ICLR Workshop, 2018.
307
+ [10] Utku Evci, Bart van Merrienboer, Thomas Unterthiner, Fabian Pedregosa, and Max Vlademyrov. GradMax: Growing neural networks using gradient information. In ICLR, 2022.
308
+ [11] Golnaz Ghiasi, Tsung-Yi Lin, Ruoming Pang, and Quoc V. Le. NAS-FPN: Learning scalable feature pyramid architecture for object detection. arXiv:1904.07392, 2019.
309
+ [12] Boris Ginsburg, Igor Gitman, and Yang You. Large batch training of convolutional networks with layer-wise adaptive rate scaling. 2018.
310
+ [13] Ross B. Girshick. Fast R-CNN. In ICCV, 2015.
311
+ [14] Sam Gross and Michael Wilber. Training and investigating residual nets. http://torch.ch/blog/2016/02/04/resnets.html, 2016.
312
+ [15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
313
+ [16] Samuel Horváth, Aaron Klein, Peter Richtárik, and Cédric Archambeau. Hyperparameter transfer learning with adaptive complexity. In AISTATS, 2021.
314
+ [17] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861, 2017.
315
+ [18] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. In ECCV, 2016.
316
+ [19] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
317
+ [20] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
318
+ [21] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset. http://www.cs.toronto.edu/~kriz/cifar.html, 2014.
319
+ [22] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NeurIPS, 2012.
320
+ [23] Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. arXiv:1904.00310, 2019.
321
+ [24] Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille, and Li Fei-Fei. Auto-DeepLab: Hierarchical neural architecture search for semantic image segmentation. In CVPR, 2019.
322
+ [25] Qiang Liu, Wu Lemeng, and Wang Dilin. Splitting steepest descent for growing neural architectures. In NeurIPS, 2019.
323
+
324
+ [26] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single shot multibox detector. In ECCV, 2016.
325
+ [27] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
326
+ [28] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In ACL, 2002.
327
+ [29] Valerio Perrone, Rodolphe Jenatton, Matthias W. Seeger, and Cedric Archambeau. Scalable hyperparameter transfer learning. In NeurIPS, 2018.
328
+ [30] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. In AAAI, 2019.
329
+ [31] Pedro Savarese, David McAllester, Sudarshan Babu, and Michael Maire. Domain-independent dominance of adaptive methods. In CVPR, 2021.
330
+ [32] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
331
+ [33] Tijmen Tieleman, Geoffrey Hinton, et al. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 2012.
332
+ [34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
333
+ [35] Chengcheng Wan, Henry Hoffmann, Shan Lu, and Michael Maire. Orthogonalized SGD and nested architectures for anytime neural networks. In ICML, 2020.
334
+ [36] Tao Wei, Changhu Wang, Yong Rui, and Chang Wen Chen. Network morphism. In ICML, 2016.
335
+ [37] Wei Wen, Feng Yan, Yiran Chen, and Hai Li. AutoGrow: Automatic layer growing in deep convolutional networks. In KDD, 2020.
336
+ [38] Lemeng Wu, Bo Liu, Peter Stone, and Qiang Liu. Firefly neural architecture descent: a general approach for growing neural networks. In NeurIPS, 2020.
337
+ [39] Lemeng Wu, Mao Ye, Qi Lei, Jason D Lee, and Qiang Liu. Steepest descent neural architecture optimization: Escaping local optimum with signed neural splitting. arXiv:2003.10392, 2020.
338
+ [40] Greg Yang and Edward J. Hu. Tensor Programs IV: Feature learning in infinite-width neural networks. In ICML, 2021.
339
+ [41] Greg Yang, Edward J Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, and Jianfeng Gao. Tuning large neural networks via zero-shot hyperparameter transfer. In NeurIPS, 2021.
340
+ [42] Dani Yogatama and Gideon Mann. Efficient transfer learning method for automatic hyperparameter tuning. In AISTATS, 2014.
341
+ [43] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes. In ICLR, 2020.
342
+ [44] Xin Yuan, Pedro Henrique Pamplona Savarese, and Michael Maire. Growing efficient deep networks by structured continuous sparsification. In ICLR, 2021.
343
+ [45] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv:1605.07146, 2016.
344
+ [46] Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. In CVPR, 2022.
345
+
346
+ # A More Analysis on Variance Transfer
347
+
348
+ Our fixed rescaling formulation, which depends only on the relative network widths, is an extension of the principled zero-shot HP transfer method [40, 41] based on a stability assumption; we denote it as VT. A dynamic rescaling based on the actual values of the old weights is an alternative plausible implementation choice, denoted as VT-constraint.
349
+
350
+ Theorem A.1. Suppose the goal is to enforce the unit variance feature, then the scaling factor of an input layer with weights $\mathbf{W}^x$ with input shape $C^x$ is $\sqrt{\frac{1}{C^x\mathbb{V}[\mathbf{W}^x]}},$ while for a hidden layer with weights $\mathbf{W}^u$ and input shape $C^u$ , it is $\sqrt{\frac{1}{C^u\mathbb{V}[\mathbf{W}^u]}}.$
351
+
352
+ Proof. Consider an input layer that computes $\boldsymbol{u} = \boldsymbol{W}^x\boldsymbol{x}$ followed by a hidden layer that computes $\boldsymbol{h} = \boldsymbol{W}^u\boldsymbol{u}$ (ignoring activations for simplicity). At a growth step, the first layer's outputs change from
353
+
354
+ $$
355
+ \boldsymbol {u} _ {t} = \boldsymbol {W} _ {t} ^ {x} \boldsymbol {x} \tag {9}
356
+ $$
357
+
358
+ to
359
+
360
+ $$
361
+ \boldsymbol {u} _ {t + 1} = \left[ \boldsymbol {u} _ {t} ^ {\prime}, \boldsymbol {u} _ {t} ^ {\prime \prime}, \boldsymbol {u} _ {t} ^ {\prime \prime} \right] = \left[ s ^ {x} \boldsymbol {W} _ {t} ^ {x} \boldsymbol {x}, \boldsymbol {V} _ {t} ^ {x} \boldsymbol {x}, \boldsymbol {V} _ {t} ^ {x} \boldsymbol {x} \right]. \tag {10}
362
+ $$
363
+
364
+ where $s^x$ denotes the scaling factor applied to $W^x$ . The second layer's outputs change from
365
+
366
+ $$
367
+ \boldsymbol {h} _ {t} = \boldsymbol {W} _ {t} ^ {u} \boldsymbol {u} _ {t} \tag {11}
368
+ $$
369
+
370
+ to
371
+
372
+ $$
373
+ \boldsymbol {h} _ {t + 1} = \left[ \boldsymbol {h} _ {t} ^ {\prime}, \boldsymbol {h} _ {t} ^ {\prime \prime}, \boldsymbol {h} _ {t} ^ {\prime \prime} \right] = \left[ s ^ {u} \boldsymbol {W} _ {t} ^ {u} \boldsymbol {u} _ {t} ^ {\prime}, \boldsymbol {V} _ {t} ^ {u} \boldsymbol {u} _ {t + 1}, \boldsymbol {V} _ {t} ^ {u} \boldsymbol {u} _ {t + 1} \right]. \tag {12}
374
+ $$
375
+
376
+ where $s^u$ denotes the scaling factor applied to $W^u$ .
377
+
378
+ The variance of the features after the growth step are:
379
+
380
+ $$
381
+ \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime} \right] = \left(s ^ {x}\right) ^ {2} C ^ {x} \mathbb {V} \left[ \boldsymbol {W} _ {t} ^ {x} \right] \tag {13}
382
+ $$
383
+
384
+ $$
385
+ \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime \prime} \right] = C ^ {x} \mathbb {V} \left[ \boldsymbol {V} _ {t} ^ {x} \right] \tag {14}
386
+ $$
387
+
388
+ $$
389
+ \mathbb {V} \left[ \boldsymbol {h} _ {t} ^ {\prime} \right] = \left(s ^ {u}\right) ^ {2} C _ {t} ^ {u} \mathbb {V} \left[ \boldsymbol {W} _ {t} ^ {u} \right] \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime} \right] \tag {15}
390
+ $$
391
+
392
+ $$
393
+ \mathbb {V} \left[ \boldsymbol {h} _ {t} ^ {\prime \prime} \right] = \mathbb {V} \left[ \boldsymbol {V} _ {t} ^ {u} \right] \left(C _ {t} ^ {u} \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime} \right] + \left(C _ {t + 1} ^ {u} - C _ {t} ^ {u}\right) \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime \prime} \right]\right) \tag {16}
394
+ $$
395
+
396
+ Given the goal of enforcing unit-variance features across all four vectors, we get:
397
+
398
+ $$
399
+ s ^ {x} = \sqrt {\frac {1}{C ^ {x} \mathbb {V} \left[ \boldsymbol {W} _ {t} ^ {x} \right]}} \Rightarrow \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime} \right] = 1 \tag {17}
400
+ $$
401
+
402
+ $$
403
+ s ^ {u} = \sqrt {\frac {1}{C _ {t} ^ {u} \mathbb {V} \left[ \boldsymbol {W} _ {t} ^ {u} \right]}} \Longrightarrow \mathbb {V} \left[ \boldsymbol {h} _ {t} ^ {\prime} \right] = \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime} \right] = 1 \tag {18}
404
+ $$
405
+
406
+ $$
407
+ \mathbb {V} \left[ \boldsymbol {V} _ {t} ^ {x} \right] = \frac {1}{C ^ {x}} \Rightarrow \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime \prime} \right] = 1 \tag {19}
408
+ $$
409
+
410
+ $$
411
+ \mathbb {V} \left[ V _ {t} ^ {u} \right] = \frac {1}{C _ {t + 1} ^ {u}} \Rightarrow \mathbb {V} \left[ \boldsymbol {h} _ {t} ^ {\prime \prime} \right] = \frac {1}{C _ {t + 1} ^ {u}} \left(C _ {t} ^ {u} \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime} \right] + \left(C _ {t + 1} ^ {u} - C _ {t} ^ {u}\right) \mathbb {V} \left[ \boldsymbol {u} _ {t} ^ {\prime \prime} \right]\right) = 1. \tag {20}
412
+ $$
413
+
414
+ ![](images/7260f064a82305a249b9b58e0e629ba99e6da877c9a6b142288f9ae90de28aa7.jpg)
415
+
416
+ This differs from the default VT formulation in Section 3.1, which corresponds to scaling factors of $s^x = 1$ and
417
+
418
+ $$
419
+ s ^ {u} = \sqrt {\frac {C _ {t} ^ {u}}{C _ {t + 1} ^ {u}}}
420
+ $$
421
+
422
+ We compare the default VT with VT-constraint by growing ResNet-20 on CIFAR-10. As shown in Table 7, both VT and VT-constraint outperform the standard baseline, which suggests that standard initialization is a suboptimal design for network growing. We also note that involving the weight statistics is not better than our simpler design, which suggests that enforcing the old and the new features to have the same variance is not a good choice.
423
+
424
+ <table><tr><td>Variants</td><td>Test Acc. (%)</td></tr><tr><td>Standard</td><td>91.62 ± 0.12</td></tr><tr><td>VT-constraint</td><td>91.93 ± 0.12</td></tr><tr><td>VT</td><td>92.00 ± 0.10</td></tr></table>
425
+
426
+ Table 7: Comparisons among standard initialization, VT-constraint (Theorem A.1) and the default VT (Section 3.1) for growing ResNet-20 on CIFAR-10.
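+ The scaling factors of Theorem A.1 can also be checked numerically with a short Monte Carlo sketch (ours, with arbitrary illustrative dimensions and weight variances):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ Cx, Cu_old, Cu_new, n = 64, 32, 48, 50_000
+ X = rng.normal(size=(Cx, n))                                            # unit-variance inputs
+
+ Wx = rng.normal(scale=0.3, size=(Cu_old, Cx))                           # arbitrary old variance
+ s_x = np.sqrt(1.0 / (Cx * Wx.var()))                                    # Eq. 17
+ Vx = rng.normal(scale=np.sqrt(1.0 / Cx), size=(Cu_new - Cu_old, Cx))    # Eq. 19
+
+ u_old, u_new = (s_x * Wx) @ X, Vx @ X
+ u = np.vstack([u_old, u_new])
+
+ Wu = rng.normal(scale=0.2, size=(1, Cu_old))
+ s_u = np.sqrt(1.0 / (Cu_old * Wu.var()))                                # Eq. 18
+ Vu = rng.normal(scale=np.sqrt(1.0 / Cu_new), size=(1, Cu_new))          # Eq. 20
+
+ print(u_old.var(), u_new.var())                                         # both close to 1
+ print(((s_u * Wu) @ u_old).var(), (Vu @ u).var())                       # both close to 1
+ ```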
427
+
428
+ ![](images/1e352ccceef1c6c5233bd668c87fa406d72954e9f6918bae9e3f1b0d0e320c3b.jpg)
429
+ (a) Growing CNN hidden layer
430
+
431
+ ![](images/cf5813c973e908e0664e2fedfed0328f93ffdd62b42987659029acddce8fbf30.jpg)
432
+ (b) Growing residual block
433
+ Figure 8: Illustration for growing other layers.
434
+
435
+ # B General Network Growing in Other Layers
436
+
437
+ We have shown network growing for a 3-layer fully connected network as a motivating example in Section 3.1. We now show how to generalize network growing to networks with $K\ (>3)$ layers, convolutional layers, and residual connections.
438
+
439
+ Generalization to $\mathbf{K} (> 3)$ Layers. In our network-width growing formulation, layers may be expanded in 3 patterns. (1) Input layer: output channels only; (2) Hidden layer: Both input (due to expansion of the previous layer) and output channels. (3) Output layer: input channels only. As such, the 3-layer case is sufficient to serve as a motivating example without loss of generality. For $\mathbf{K} (> 3)$ layer networks, the 2nd to $(K - 1)$ th layers simply follow the hidden layer case defined in Section 3.1.
440
+
441
+ Generalization to Convolutional Layers. In network width growth, we only need to consider expansion along the input and output channel dimensions, regardless of whether the layers are fully connected or convolutional. Equations 1-4 still hold for CNN layers. Specifically, when generalizing to CNNs (from 2-d $C_{out} \times C_{in}$ weight matrices to 4-d $C_{out} \times C_{in} \times k \times k$ tensors), we only consider the matrix operations on the first two dimensions, since we do not change the kernel size $k$ . For example, in Figure 2(b), the newly added weights $+Z_{t}^{u}$ in a linear layer can be indexed as $W[0{:}C_t^h,\ C_t^u{:}C_t^u +\frac{C_{t + 1}^u - C_t^u}{2}]$ . In a CNN layer, it is simply $W[0{:}C_t^h,\ C_t^u{:}C_t^u +\frac{C_{t + 1}^u - C_t^u}{2},\ {:},\ {:}]$ .
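+ A short PyTorch sketch of this channel-wise expansion for a conv weight (dimensions are illustrative, and the small-scale initialization of the new slices stands in for the scheme of Section 3.1):
+
+ ```python
+ import torch
+
+ C_h, C_u_old, C_u_new, k = 32, 16, 24, 3
+ n_new = (C_u_new - C_u_old) // 2
+
+ W_old = torch.randn(C_h, C_u_old, k, k)
+ Z = 0.01 * torch.randn(C_h, n_new, k, k)
+ W_new = torch.cat([W_old, Z, -Z], dim=1)      # expand input channels only; kernel untouched
+
+ # The +Z block occupies W_new[0:C_h, C_u_old:C_u_old + n_new, :, :], i.e., the
+ # linear-layer indexing above with ", :, :" appended for the kernel dimensions.
+ assert W_new.shape == (C_h, C_u_new, k, k)
+ ```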
442
+
443
+ Residual Connections. We note that differently from plain CNNs like VGG, networks with residual connections do require that the dimension of an activation and its residual match since these will be added together. Our method handles this by growing all layers at the same rate. Hence, at each growth step, the shape of the two tensors to be added by the skip connections always matches.
444
+
445
+ Concrete Example of Adding Units to a Network. In model width growth, our common practice is to first determine the output dimension expansion from the growth scheduler. If a previous layer's output channels are to be grown, then the input dimension of the current layer should be also expanded to accommodate that change.
446
+
447
+ Let us consider a concrete example for a CNN with more layers ( $>3$ ) and residual blocks. Without loss of generality, we omit the kernel size $k$ and denote each convolutional layer's channel shape as $(C_{in}, C_{out})$ . We denote the input feature, with 1 output dimension, as $x$ . Initially, we have the input layer $W^{(1)}$ with channel dimensions $(1,2)$ generating $h^{(1)}$ (2-dimensional), followed by a 2-layer residual block (identity mapping for the residual connection) with weights $W^{(2)}(2,2)$ and $W^{(3)}(2,2)$ , followed by an output layer with weights $W^{(4)}(2,1)$ .
448
+
449
+ The corresponding computation graph (omitting BN and ReLU for simplification) is
450
+
451
+ $$
452
+ y = W ^ {(4)} \left(W ^ {(3)} W ^ {(2)} W ^ {(1)} x + W ^ {(1)} x\right),
453
+ $$
454
+
455
+ In more detail, we rewrite the computation in the matrix formulation:
456
+
457
+ $$
458
+ \begin{array}{l} h ^ {(1)} = W ^ {(1)} x = \left[ \begin{array}{c} w _ {1, 1} ^ {(1)} \\ w _ {2, 1} ^ {(1)} \end{array} \right] x = \left[ \begin{array}{c} h _ {1} ^ {(1)} \\ h _ {2} ^ {(1)} \end{array} \right] \\ h ^ {(2)} = W ^ {(2)} h ^ {(1)} = \left[ \begin{array}{c c} w _ {1, 1} ^ {(2)} & w _ {1, 2} ^ {(2)} \\ w _ {2, 1} ^ {(2)} & w _ {2, 2} ^ {(2)} \end{array} \right] \left[ \begin{array}{c} h _ {1} ^ {(1)} \\ h _ {2} ^ {(1)} \end{array} \right] = \left[ \begin{array}{c} h _ {1} ^ {(2)} \\ h _ {2} ^ {(2)} \end{array} \right] \\ h ^ {(3)} = W ^ {(3)} h ^ {(2)} = \left[ \begin{array}{c c} w _ {1, 1} ^ {(3)} & w _ {1, 2} ^ {(3)} \\ w _ {2, 1} ^ {(3)} & w _ {2, 2} ^ {(3)} \end{array} \right] \left[ \begin{array}{c} h _ {1} ^ {(2)} \\ h _ {2} ^ {(2)} \end{array} \right] = \left[ \begin{array}{c} h _ {1} ^ {(3)} \\ h _ {2} ^ {(3)} \end{array} \right] \\ h ^ {(4)} = h ^ {(3)} + h ^ {(1)} = \left[ \begin{array}{c} h _ {1} ^ {(3)} \\ h _ {2} ^ {(3)} \end{array} \right] + \left[ \begin{array}{c} h _ {1} ^ {(1)} \\ h _ {2} ^ {(1)} \end{array} \right] = \left[ \begin{array}{c} h _ {1} ^ {(4)} \\ h _ {2} ^ {(4)} \end{array} \right] \\ y = W ^ {(4)} h ^ {(4)} = \left[ \begin{array}{c c} w _ {1, 1} ^ {(4)} & w _ {1, 2} ^ {(4)} \end{array} \right] \left[ \begin{array}{c} h _ {1} ^ {(4)} \\ h _ {2} ^ {(4)} \end{array} \right] \\ \end{array}
459
+ $$
460
+
461
+ Now assume that we want to grow the dimension of the network's hidden activations from 2 to 4 (i.e., $h^{(1)}, h^{(2)}, h^{(3)}, h^{(4)}$ , which are 2-dimensional, should become 4-dimensional each).
462
+
463
+ $$
464
+ \begin{array}{l} h ^ {(1)} = W ^ {(1)} x = \left[ \begin{array}{c} w _ {1, 1} ^ {(1)} \\ w _ {2, 1} ^ {(1)} \\ w _ {3, 1} ^ {(1)} \\ w _ {4, 1} ^ {(1)} \end{array} \right] x = \left[ \begin{array}{c} h _ {1} ^ {(1)} \\ h _ {2} ^ {(1)} \\ h _ {3} ^ {(1)} \\ h _ {4} ^ {(1)} \end{array} \right] \\ h ^ {(2)} = W ^ {(2)} h ^ {(1)} = \left[ \begin{array}{c c c c} w _ {1, 1} ^ {(2)} & w _ {1, 2} ^ {(2)} & w _ {1, 3} ^ {(2)} & w _ {1, 4} ^ {(2)} \\ w _ {2, 1} ^ {(2)} & w _ {2, 2} ^ {(2)} & w _ {2, 3} ^ {(2)} & w _ {2, 4} ^ {(2)} \\ w _ {3, 1} ^ {(2)} & w _ {3, 2} ^ {(2)} & w _ {3, 3} ^ {(2)} & w _ {3, 4} ^ {(2)} \\ w _ {4, 1} ^ {(2)} & w _ {4, 2} ^ {(2)} & w _ {4, 3} ^ {(2)} & w _ {4, 4} ^ {(2)} \end{array} \right] \left[ \begin{array}{c} h _ {1} ^ {(1)} \\ h _ {2} ^ {(1)} \\ h _ {3} ^ {(1)} \\ h _ {4} ^ {(1)} \end{array} \right] = \left[ \begin{array}{c} h _ {1} ^ {(2)} \\ h _ {2} ^ {(2)} \\ h _ {3} ^ {(2)} \\ h _ {4} ^ {(2)} \end{array} \right] \\ h ^ {(3)} = W ^ {(3)} h ^ {(2)} = \left[ \begin{array}{c c c c} w _ {1, 1} ^ {(3)} & w _ {1, 2} ^ {(3)} & w _ {1, 3} ^ {(3)} & w _ {1, 4} ^ {(3)} \\ w _ {2, 1} ^ {(3)} & w _ {2, 2} ^ {(3)} & w _ {2, 3} ^ {(3)} & w _ {2, 4} ^ {(3)} \\ w _ {3, 1} ^ {(3)} & w _ {3, 2} ^ {(3)} & w _ {3, 3} ^ {(3)} & w _ {3, 4} ^ {(3)} \\ w _ {4, 1} ^ {(3)} & w _ {4, 2} ^ {(3)} & w _ {4, 3} ^ {(3)} & w _ {4, 4} ^ {(3)} \end{array} \right] \left[ \begin{array}{c} h _ {1} ^ {(2)} \\ h _ {2} ^ {(2)} \\ h _ {3} ^ {(2)} \\ h _ {4} ^ {(2)} \end{array} \right] = \left[ \begin{array}{c} h _ {1} ^ {(3)} \\ h _ {2} ^ {(3)} \\ h _ {3} ^ {(3)} \\ h _ {4} ^ {(3)} \end{array} \right] \\ h ^ {(4)} = h ^ {(3)} + h ^ {(1)} = \left[ \begin{array}{c} h _ {1} ^ {(3)} \\ h _ {2} ^ {(3)} \\ h _ {3} ^ {(3)} \\ h _ {4} ^ {(3)} \end{array} \right] + \left[ \begin{array}{c} h _ {1} ^ {(1)} \\ h _ {2} ^ {(1)} \\ h _ {3} ^ {(1)} \\ h _ {4} ^ {(1)} \end{array} \right] = \left[ \begin{array}{c} h _ {1} ^ {(4)} \\ h _ {2} ^ {(4)} \\ h _ {3} ^ {(4)} \\ h _ {4} ^ {(4)} \end{array} \right] \\ y = W ^ {(4)} h ^ {(4)} = \left[ \begin{array}{c c c c} w _ {1, 1} ^ {(4)} & w _ {1, 2} ^ {(4)} & w _ {1, 3} ^ {(4)} & w _ {1, 4} ^ {(4)} \end{array} \right] \left[ \begin{array}{c} h _ {1} ^ {(4)} \\ h _ {2} ^ {(4)} \\ h _ {3} ^ {(4)} \\ h _ {4} ^ {(4)} \end{array} \right] \\ \end{array}
465
+ $$
466
+
467
+ We added 2 rows to the input layer's weights $W^{(1)}$ (to increase its output dimension from 2 to 4), added 2 rows and 2 columns to the hidden layer's weights $W^{(2)}$ , $W^{(3)}$ (to increase its output dimensions from 2 to 4, and to increase its input dimension from 2 to 4 so they are consistent with the previous layers), and added 2 columns to the output layer's weights $W^{(4)}$ (to increase its input dimension from 2 to 4 so it is consistent with the increase
468
+
469
+ Table 8: Concrete growing example.
470
+
471
+ <table><tr><td>Layer</td><td>Initial Arch</td><td>After Growth</td><td>New weights added</td></tr><tr><td>W(1)</td><td>(1,2)</td><td>(1,4)</td><td>4 × 1 − 2 × 1 = 2</td></tr><tr><td>W(2)</td><td>(2,2)</td><td>(4,4)</td><td>4 × 4 − 2 × 2 = 12</td></tr><tr><td>W(3)</td><td>(2,2)</td><td>(4,4)</td><td>4 × 4 − 2 × 2 = 12</td></tr><tr><td>W(4)</td><td>(2,1)</td><td>(4,1)</td><td>4 × 1 − 2 × 1 = 2</td></tr></table>
472
+
473
+ in dimensionality of $h^{(4)}$ ). Note that $h^{(3)}$ and $h^{(1)}$ still maintain matching shapes (to be added together) in the residual block, since we grow the output dimensions of $W^{(1)}$ and $W^{(3)}$ at the same rate.
474
+
475
+ We summarize the architectural growth in terms of $C_{in}$ and $C_{out}$ (omitting kernel size $k$ ) in Table 8. We also show the growing for CNN hidden layer in Figure 8(a) and residual blocks in Figure 8(b) for better illustration.
476
+
477
+ Note that this example shows how growing works in general; our specific method also includes particulars as to what values are used to initialize the newly-added weights, as well as modifications to optimizer state.
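+ A NumPy sketch of this concrete example is given below (our illustration; the actual method additionally uses the variance-calibrated initialization of Section 3.1 and adds symmetry-breaking noise). It grows every hidden dimension from 2 to 4 at the same rate and checks that the residual addition still has matching shapes and that the output is preserved.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ x = rng.normal(size=(1,))
+
+ W1 = rng.normal(size=(2, 1)); W2 = rng.normal(size=(2, 2))
+ W3 = rng.normal(size=(2, 2)); W4 = rng.normal(size=(1, 2))
+ y_old = W4 @ (W3 @ (W2 @ (W1 @ x)) + W1 @ x)
+
+ V1 = rng.normal(size=(1, 1)); Z2 = rng.normal(size=(2, 1)); V2 = rng.normal(size=(1, 4))
+ Z3 = rng.normal(size=(2, 1)); V3 = rng.normal(size=(1, 4)); Z4 = rng.normal(size=(1, 1))
+
+ W1n = np.vstack([W1, V1, V1])                          # channel shape (1,2) -> (1,4)
+ W2n = np.vstack([np.hstack([W2, Z2, -Z2]), V2, V2])    # channel shape (2,2) -> (4,4)
+ W3n = np.vstack([np.hstack([W3, Z3, -Z3]), V3, V3])    # channel shape (2,2) -> (4,4)
+ W4n = np.hstack([W4, Z4, -Z4])                         # channel shape (2,1) -> (4,1)
+
+ h1 = W1n @ x
+ h4 = W3n @ (W2n @ h1) + h1        # skip connection: both tensors are now 4-dimensional
+ y_new = W4n @ h4
+ assert np.allclose(y_new, y_old)  # the function is preserved across the growth step
+ ```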
478
+
479
+ # C Generalization to Other Optimizers
480
+
481
+ We generalize our LR adaptation rule to Adam [20] and AvaGrad [31] in Table 9. Both methods are adaptive optimizers in which different heuristics are adopted to derive a parameter-wise learning rate strategy, providing primitives that can be extended using our stage-wise adaptation principle for network growing. For example, vanilla Adam adapts the global learning rate with running estimates of the first moment $m_t$ and the second moment $v_t$ of the gradients, where the number of global training steps $t$ is an integer when training a fixed-size model. When growing networks, our learning rate adaptation instead considers a vector $t$ which tracks each subcomponent's 'age' (i.e., the number of steps it has been trained for). As such, for a newly grown subcomponent at a stage $i > 0$ , $t[i]$ starts at 0 and the learning rate is adapted from $m_t / \sqrt{v_t}$ (global) to $\frac{m_{t[i]}\backslash m_{t[i-1]}}{\sqrt{v_{t[i]}\backslash v_{t[i-1]}}}$ (stage-wise). Similarly, we also generalize our approach to AvaGrad by adopting $\eta_t, d_t, m_t$ of the original paper as stage-wise variables.
482
+
483
+ Preserving Optimizer State/Buffer. Essential to adaptive methods are training-time statistics (e.g., running averages $m_t$ and $v_t$ in Adam) which are stored as buffers and used to compute parameter-wise learning rates. Different from fixed-size models, parameter sets are expanded when growing networks, which in practice requires re-instantiating a new optimizer at each growth step. Given that our initialization scheme maintains functionality of the network, we are also able to preserve and inherit buffers from previous states, effectively maintaining the optimizer's state intact when adding new parameters. We experimentally investigate the effects of this state preservation.
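+ A sketch of how this buffer preservation could look for Adam in PyTorch is shown below; the function name and the assumption that growth only appends rows/columns (so the old block keeps its leading indices) are ours, not part of the released code.
+
+ ```python
+ import torch
+
+ def expand_adam_state(optimizer, old_param, new_param):
+     # Carry exp_avg / exp_avg_sq through a growth step: old slices inherit their running
+     # moments, newly added slices start from zero like a freshly created parameter.
+     old_state = optimizer.state.pop(old_param, {})
+     if not old_state:
+         return
+     new_state = {"step": old_state["step"]}
+     for key in ("exp_avg", "exp_avg_sq"):
+         buf = torch.zeros_like(new_param)
+         old = old_state[key]
+         idx = tuple(slice(0, s) for s in old.shape)   # old block sits in the leading indices
+         buf[idx] = old
+         new_state[key] = buf
+     optimizer.state[new_param] = new_state
+     # The expanded tensor must also replace the old one in optimizer.param_groups.
+ ```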
484
+
485
+ Table 9: Rate adaptation rules for Adam [20] and AvaGrad [31].
486
+
487
+ <table><tr><td colspan="3">Our LR Adaptation</td></tr><tr><td rowspan="2">Adam</td><td>0-th Stage</td><td>mt[0]/√vt[0]</td></tr><tr><td>i-th Stage</td><td>mt[i] \ mt[i-1] / √vt[i] \ vt[i-1]</td></tr><tr><td rowspan="2">AvaGrad</td><td>0-th Stage</td><td>ηt[0] / [|ηt[0] / √dt[0]|2] ⊙ mt[0]</td></tr><tr><td>i-th Stage</td><td>ηt[i] \ nt[i-1] / [|ηt[i] \ nt[i-1] / √dt[i] - dt[i-1]|2] ⊙ (mt[i] \ mt[i-1])</td></tr></table>
488
+
489
+ Table 10: Generalization to Adam and AvaGrad for ResNet-20 on CIFAR-10.
490
+
491
+ <table><tr><td>Optimizer</td><td>Training Method</td><td>Preserve Opt. Buffer</td><td>Train Cost (%)</td><td>Test Acc. (%)</td></tr><tr><td>Adam</td><td>Large fixed-size</td><td>NA</td><td>100</td><td>92.29</td></tr><tr><td>Adam</td><td>Growing</td><td>No</td><td>54.90</td><td>91.44</td></tr><tr><td>Adam</td><td>Growing</td><td>Yes</td><td>54.90</td><td>91.61</td></tr><tr><td>Adam+our RA.</td><td>Growing</td><td>Yes</td><td>54.90</td><td>92.13</td></tr><tr><td>AvaGrad</td><td>Large fixed-size</td><td>NA</td><td>100</td><td>92.45</td></tr><tr><td>AvaGrad</td><td>Growing</td><td>No</td><td>54.90</td><td>90.71</td></tr><tr><td>AvaGrad</td><td>Growing</td><td>Yes</td><td>54.90</td><td>91.27</td></tr><tr><td>AvaGrad+our RA.</td><td>Growing</td><td>Yes</td><td>54.90</td><td>91.72</td></tr></table>
492
+
493
+ Table 11: Comparisons among standard SGD, LARS, and our adaptation method for growing ResNet-20 on CIFAR-10.
+
+ <table><tr><td>Optimizer</td><td>Test Acc. (%)</td></tr><tr><td>Standard SGD</td><td>91.95 ± 0.09</td></tr><tr><td>SGD with Layer-wise Adapt. (LARS)</td><td>91.32 ± 0.11</td></tr><tr><td>Ours</td><td>92.53 ± 0.11</td></tr></table>
494
+
495
+ Results with Adam and AvaGrad. Table 10 shows the results of growing ResNet-20 on CIFAR-10 with Adam and AvaGrad. For the large, fixed-size baseline, we train Adam with $lr = 0.1$ , $\epsilon = 0.1$ and AvaGrad with $lr = 0.5$ , $\epsilon = 10.0$ , which yields the best results for ResNet-20 following [31]. We consider different settings for comparison: (1) optimizer without buffer preservation: the buffers are refreshed at each new growing phase; (2) optimizer with buffer preservation: the buffer/state is inherited from the previous phase, hence being preserved at growth steps; (3) optimizer with buffer and rate adaptation (RA): applies our rate adaptation strategy described in Table 9 while also preserving internal state/buffers. We observe that (1) consistently underperforms (2), which suggests that preserving the state/buffers of adaptive optimizers is crucial when growing networks. Option (3) bests the other settings for both Adam and AvaGrad, indicating that our rate adaptation strategy generalizes effectively to these optimizers in the growing scenario. Together, these results also demonstrate that our method has the flexibility to incorporate the different statistics tracked and used by distinct optimizers, taking Adam and AvaGrad as examples. Finally, our proposed stage-wise rate adaptation strategy can be employed with virtually any optimizer.
496
+
497
+ Comparison with Layer-wise Adaptive Optimizer. We also consider LARS [12, 43], a layer-wise adaptive variant of SGD, to compare different adaptation concepts: layer-wise versus layer + stage-wise (ours). Note that although LARS was originally designed for training with large batches, we adopt a batch size of 128 when growing ResNet-20 on CIFAR-10. We search the initial learning rate (LR) for LARS over $\{1\mathrm{e} - 3,2\mathrm{e} - 3,5\mathrm{e} - 3,1\mathrm{e} - 2,2\mathrm{e} - 2,5\mathrm{e} - 2,1\mathrm{e} - 1,2\mathrm{e} - 1,5\mathrm{e} - 1\}$ and observe that a value of 0.02 yields the best results. We adopt the default initial learning rate of 0.1 for both standard SGD and our method. As shown in Table 11, LARS underperforms both standard SGD and our adaptation strategy, suggesting that layer-wise learning rate adaptation by itself – i.e., without accounting for stage-wise discrepancies – is not sufficient for successful growing of networks.
498
+
499
+ # D More Analysis on Rate Adaptation
500
+
501
+ We show additional plots of stage-wise rate adaptation when growing ResNet-20 on CIFAR-10. Figure 9 shows the dynamics of the adaptation factors, relative to the LR of the seed architecture, from the 1st to the 8th stage (the stage index starts at 0). We see an overall trend that for newly-added weights, the learning rate starts at more than $1\times$ the base LR and then quickly adapts to a relatively stable level. This demonstrates that our approach is able to efficiently and automatically adapt new weights so that they gradually and smoothly fade in throughout the current stage's optimization procedure.
502
+
503
+ We also note that rate adaptation reflects a general design principle: different subnets should not share a single global learning rate. The specific RA formulation is designed empirically; $\max(1, \|\boldsymbol{W}_i \setminus \boldsymbol{W}_{i-1}\|)$ is a plausible implementation choice,
504
+
505
506
+
507
+ <table><tr><td>RA Implementation Choice</td><td>Test Acc. (%)</td></tr><tr><td>NA (Standard SGD)</td><td>91.62 ± 0.12</td></tr><tr><td>max(1, ||Wi \ Wi-1||)</td><td>91.42 ± 0.12</td></tr><tr><td>Ours</td><td>92.53 ± 0.11</td></tr></table>
508
+
509
+ Table 12: Comparisons among different RA implementation choices for growing ResNet-20 on CIFAR-10.
510
+
511
+ ![](images/ac6876d5f3d4aff0125b909a13a0d5c342867e1e4b4f885638b349b9415d6ff8.jpg)
512
+ (a) 1-st Stage
513
+
514
+ ![](images/14ff2c034b66324e3ae9e0166e2a7dcc6375b8d307b43565e17d528f795534f3.jpg)
515
+ (b) 2-nd Stage
516
+
517
+ ![](images/c4bd450834200765edb77456fd3da63d66439243c70b31e22abc424c14a3edd3.jpg)
518
+ (c) 3-rd Stage
519
+
520
+ ![](images/90072a1bf11956b3b85be01ceefc4f78c04d6ddee297e8a4c3147737f95f1664.jpg)
521
+ (d) 4-th Stage
522
+
523
+ ![](images/b63ebc8a9296fc0868f59f9fe330dcc4ceb48cf37a033f6f4406da9de8e275cf.jpg)
524
+ (e) 5-th Stage
525
+
526
+ ![](images/46fdb2767eab9ddef09e0a8e8e6e43aeef0f281847a59fafb24a9d0840ed48fd.jpg)
527
+ (f) 6-th Stage
528
+
529
+ ![](images/04b7b93face44066d996355e5d1a7809b94271780d0100399e3d2fe3e1b5a653.jpg)
530
+ (g) 7-th Stage
531
+ Figure 9: Visualization of rate adaptation factor dynamics across all growing stages (except 0-th)
532
+
533
+ ![](images/22e136c0a7e59c88ab9295fc2a710356de8d91c50aa2a946c708270bf0b0e035.jpg)
534
+ (h) 8-th Stage
535
+
536
+ based on the assumption that new weights must have a higher learning rate. We conducted experiments by growing ResNet-20 on CIFAR-10. As shown in Table 12, we see that this alternative does not work better than our original design, and even underperforms standard SGD.
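+
+ For concreteness, a minimal sketch of this alternative choice is given below; the helper name and the way the factor scales the learning rate of the newly added parameter group are our own illustrative assumptions, grounded only in the $\max(1, \|\boldsymbol{W}_i \setminus \boldsymbol{W}_{i-1}\|)$ entry of Table 12.
+
+ ```python
+ import torch
+
+ def norm_based_rate_factor(new_weights):
+     """Alternative RA choice from Table 12: a factor >= 1 so that newly added
+     weights receive a higher learning rate than the base LR."""
+     return max(1.0, torch.norm(new_weights).item())
+
+ # Hypothetical usage: give the newly grown weights their own scaled LR.
+ # optimizer.add_param_group({"params": [new_weights],
+ #                            "lr": base_lr * norm_based_rate_factor(new_weights)})
+ ```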
537
+
538
+ # E More Visualizations on Sub-Component Gradients
539
+
540
+ We further compare the global LR and our rate adaptation by showing additional visualizations of sub-component gradients of different layers and stages when growing ResNet-20 on CIFAR-10. We select the 2nd (layer1-block1-conv1) and 17th (layer3-block2-conv2) convolutional layers and plot the gradients of each sub-component at the 3rd and 5th growing stages, respectively, in Figures 10, 11, 12, 13. These demonstrate that our rate adaptation strategy is able to re-balance and stabilize the gradient contributions of different sub-components, hence improving the training dynamics compared to a global scheduler.
541
+
542
+ # F Simple Example on Fully-Connected Neural Networks
543
+
544
+ Additionally, we train a simple fully-connected neural network with 8 hidden layers on CIFAR-10 - each hidden layer has 500 neurons and is followed by a ReLU activation. The network has a final linear layer with 10 neurons for classification. Note that each CIFAR-10 image is flattened to a 3072-dimensional $(32 \times 32 \times 3)$ vector prior to being given as input to the network. We consider two variants of this baseline network by adopting training epochs (costs) $\in \{25(1 \times), 50(2 \times)\}$ . We also grow from a thin architecture to the original one within 10 stages, each stage consisting of 5 epochs, where the number of units in each hidden layer grows from 50 to 100, 150, ..., 500. The total training cost is equivalent to that of the fixed-size model trained for 25 epochs. We train all baselines using SGD, with weight decay set to 0 and learning rates sweeping over $\{0.01, 0.02, 0.05, 0.1, 0.2, 0.5\}$ ; results are shown in Figure 14(a). Compared to standard initialization (green),
545
+
546
+ ![](images/54138d6eddb0ce7fb1974b1988a5e608e9ed84f8dbe0c80fc306bb39e4b65ed6.jpg)
547
+ (a) Using Global
548
+ Figure 10: Gradients of 2nd conv at 3rd stage. Figure 11: Gradients of 2nd conv at 5th stage.
549
+
550
+ ![](images/945507638ac40182aadffdc992206b25886a4bc84c02f43fd74132eebc8d524e.jpg)
551
+ (b) Using RA
552
+
553
+ ![](images/ddc15a31d87251cd70e5e2029441f2ca51197ce8b32ef743a4dd990c3a42a0d1.jpg)
554
+ (a) Using Global
555
+
556
+ ![](images/3660459cc99a7032db0f274ffa1ddd155099d151a4e84d3d9e15dbf36e834bac.jpg)
557
+ (b) Using RA
558
+
559
+ ![](images/1c3b72b9030a3bc2cdc44e43cdfcf8c3ac81a51bd1387e6aed3026161805e2c9.jpg)
560
+ (a) Using Global
561
+
562
+ ![](images/1ec8cf950eb6b753b1869bb4df330e0d6e311196cf4d0ed853f024cc6cc2b239.jpg)
563
+ (b) Using RA
564
+
565
+ ![](images/bc197c00b56adee84d999281d9b69fcbf5858ff5faf9a53585878f4c265c95c5.jpg)
566
+ (a) Using Global
567
+
568
+ ![](images/cbf6d0fef82e2eca5538591c71794e47558403414661b2553f40e2e51688fab0.jpg)
569
+ (b) Using RA
570
+
571
+ ![](images/7394b0a6f99d9b8044f31cdbf424307b6526357bf37760514305fc0356131095.jpg)
572
+ Figure 12: Gradients of 17th conv at 3rd stage. Figure 13: Gradients of 17th conv at 5th stage.
573
+ (a) Train Loss
574
+ Figure 14: Results of simple fully-connected neural network.
575
+
576
+ ![](images/5859f852eff83659b62e6a59943b213264c6ccd4f54ceedba4c9fddbe2743a04.jpg)
577
+ (b) Test Accuracy
578
+
579
+ the loss curve given by growing with variance transfer (blue) is more similar to the curve of the large baseline - all using standard SGD - which is also consistent with the observations when training models of different scales separately [41]. Rate adaptation (in red) further lowers the training loss. Interestingly, we observe in Figure 14(b) that the test accuracy behaves differently from the training loss shown in Figure 14(a), which may suggest that regularization is missing due to, for example, the lack of parameter-sharing schemes (as in CNNs) in this fully-connected network.
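+
+ A minimal sketch of this setup is given below; the builder function and its name are our own, while the widths (50 to 500 over 10 stages), depth (8 hidden layers), input size (3072), and the 10-way output follow the description above.
+
+ ```python
+ import torch.nn as nn
+
+ def make_mlp(width, depth=8, in_dim=3072, num_classes=10):
+     """Fully-connected network from Appendix F at a given stage width."""
+     layers, prev = [], in_dim
+     for _ in range(depth):
+         layers += [nn.Linear(prev, width), nn.ReLU()]
+         prev = width
+     layers.append(nn.Linear(prev, num_classes))
+     return nn.Sequential(*layers)
+
+ stage_widths = [50 * (s + 1) for s in range(10)]   # 50, 100, ..., 500; 5 epochs per stage
+ ```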
580
+
581
+ # G More Comparisons with GradMax [10]
582
+
583
+ For CIFAR-10 and CIFAR-100, GradMax used different models for growing, and we did not re-implement GradMax on these datasets. Also, generalizing such gradient-based growing methods to the Transformer architecture is nontrivial. As such, we only cover MobileNet on ImageNet, which is used in both our work and GradMax. Our accuracy exceeds that of GradMax by 1.3 points while lowering the training cost, which demonstrates the benefit of our method. We also trained our method to grow WRN-28-1 (with/without BatchNorm, as used in the GradMax paper) on CIFAR-10 and CIFAR-100 and compare it with GradMax in Table 13. We see that ours still consistently outperforms GradMax.
584
+
585
+ Table 13: Comparison with GradMax.
586
+
587
+ <table><tr><td rowspan="2">Method</td><td colspan="2">CIFAR-10 (w BN)</td><td colspan="2">CIFAR-10 (w/o BN)</td><td colspan="2">CIFAR-100 (w/o BN)</td></tr><tr><td>Train Cost (%) ↓</td><td>Test Acc. (%) ↑</td><td>Train Cost (%) ↓</td><td>Test Acc. (%) ↑</td><td>Train Cost (%) ↓</td><td>Test Acc. (%) ↑</td></tr><tr><td>Large Baseline</td><td>100</td><td>93.40 ± 0.10</td><td>100</td><td>92.90 ± 0.20</td><td>100</td><td>69.30 ± 0.10</td></tr><tr><td>GradMax [10]</td><td>77.32</td><td>93.00 ± 0.10</td><td>77.32</td><td>92.40 ± 0.10</td><td>77.32</td><td>66.80 ± 0.20</td></tr><tr><td>Ours</td><td>58.24</td><td>93.29 ± 0.12</td><td>58.24</td><td>92.61 ± 0.10</td><td>58.24</td><td>67.83 ± 0.15</td></tr></table>
588
+
589
+ # H Extension to Continuously Incremental Datastream
590
+
591
+ Another direct and intuitive application of our method is fitting a continuously incremental datastream where $D_0 \subset D_1 \subset \ldots \subset D_n \subset \ldots \subset D_{N-1}$ . The network complexity scales up together with the data so that a larger capacity can be trained on more data samples. Orthogonalized SGD (OSGD) [35] addresses the optimization difficulty in this context by dynamically re-balancing task-specific gradients via prioritizing the influence of specific losses. We further extend our optimizer by introducing a dynamic variant of orthogonalized SGD, which progressively adjusts the priority of tasks on different subnets during network growth.
592
+
593
+ Suppose the data increases from $D_{n-1}$ to $D_n$ . We first accumulate the old gradients $G_{n-1}$ using one additional epoch on $D_{n-1}$ and then grow the network width. For each batch of $D_n$ , we first project the gradients of the new architecture ( $n$ -th stage), denoted $G_n$ , onto the parameter subspace that is orthogonal to $G_{n-1}^{pad}$ , a zero-padded version of $G_{n-1}$ with the desired shape. The final gradients $G_n^*$ are then calculated by re-weighting the original $G_n$ and its orthogonal counterpart:
594
+
595
+ $$
596
+ \boldsymbol{G}_{n}^{*} = \boldsymbol{G}_{n} - \lambda \cdot \operatorname{proj}_{\boldsymbol{G}_{n-1}^{pad}}\left(\boldsymbol{G}_{n}\right), \quad \lambda : 1 \rightarrow 0 \tag{21}
597
+ $$
598
+
599
+ where $\lambda$ is a dynamic hyperparameter that weights the original and orthogonal gradients. When $\lambda = 1$ , subsequent outputs do not interfere with earlier directions of parameter updates. We then anneal $\lambda$ to 0 so that the newly-introduced data and subnetwork can smoothly fade in throughout the training procedure.
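+
+ A minimal NumPy sketch of the update in Eq. (21) follows; the flattened-gradient representation and the linear annealing schedule for $\lambda$ are our own illustrative choices.
+
+ ```python
+ import numpy as np
+
+ def dynamic_osgd_grad(g_n, g_prev_pad, lam):
+     """Eq. (21): subtract a lam-weighted projection of g_n onto the old gradients."""
+     denom = float(np.dot(g_prev_pad, g_prev_pad))
+     if denom == 0.0:
+         return g_n
+     proj = (np.dot(g_n, g_prev_pad) / denom) * g_prev_pad
+     return g_n - lam * proj
+
+ # lam is annealed from 1 to 0 over the stage, e.g. lam = 1.0 - step / total_steps
+ ```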
600
+
601
+ Implementation Details. We implement the task in two different settings, denoted 'progressive class' and 'progressive data', on the CIFAR-100 dataset within 9 stages. In the progressive class setting, we first randomly select 20 classes in the first stage and then add 10 new classes at each growing stage. In the progressive data setting, we sequentially sample a fraction of the data with replacement for each stage, i.e., $20\%$ , $30\%$ , ..., $100\%$ .
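+
+ A small sketch of the two schedules is shown below; the helper names are ours, while the numbers (20 classes plus 10 per stage, and 20%-100% data fractions over 9 stages) follow the description above.
+
+ ```python
+ import numpy as np
+
+ def progressive_classes(num_stages=9, seed=0):
+     order = np.random.default_rng(seed).permutation(100)      # CIFAR-100 class ids
+     return [order[: 20 + 10 * s].tolist() for s in range(num_stages)]
+
+ def progressive_data_indices(num_samples, num_stages=9, seed=0):
+     rng = np.random.default_rng(seed)
+     fracs = [0.2 + 0.1 * s for s in range(num_stages)]         # 20%, 30%, ..., 100%
+     return [rng.choice(num_samples, size=int(f * num_samples), replace=True)
+             for f in fracs]
+ ```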
602
+
603
+ ResNet-18 on Continuous CIFAR-100. We evaluate our method on continuous datastreams by growing a ResNet-18 on CIFAR-100 and comparing the final test accuracies. As shown in Table 14, compared with the large baseline, our growing method achieves $1.53 \times$ cost savings with a slight performance degradation in both settings. The dynamic OSGD variant outperforms the large baseline with a $1.46 \times$ acceleration, demonstrating that the new extension improves optimization on continuous datastreams by gradually re-balancing the task-specific gradients of the dynamic network.
604
+
605
+ Table 14: Growing ResNet-18 on incremental CIFAR-100.
606
+
607
+ <table><tr><td rowspan="2">Method</td><td colspan="2">Progressive Class</td><td colspan="2">Progressive Data</td></tr><tr><td>Train Cost (%) ↓</td><td>Test Acc. (%) ↑</td><td>Train Cost (%) ↓</td><td>Test Acc. (%) ↑</td></tr><tr><td>Large fixed-size Model</td><td>100</td><td>76.80</td><td>100</td><td>76.65</td></tr><tr><td>Ours</td><td>65.36</td><td>76.50</td><td>65.36</td><td>76.34</td></tr><tr><td>Ours-Dynamic-OSGD</td><td>68.49</td><td>77.53</td><td>68.49</td><td>77.85</td></tr></table>
acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:21be647e0aca352e40d41d8a15dd3028f991929ce285533c0bd3fe44fdc90529
3
+ size 1268237
acceleratedtrainingviaincrementallygrowingneuralnetworksusingvariancetransferandlearningrateadaptation/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ba1537b596f89d3388a009c08d0a6da28e7f2b1a430e0bae4cc7b6495a36fa3a
3
+ size 740168
acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/47208fdc-bb32-4554-b369-ea7c723de9b9_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:16a1a1d2e8de55b8a8ecca99bdf85348d5c52b0e4b77c46695fb15594f227d8f
3
+ size 137938
acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/47208fdc-bb32-4554-b369-ea7c723de9b9_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:72c12447091c737897aee10db0a6362e8c2a18f92c59ecb784b3e1f07b029d91
3
+ size 163883
acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/47208fdc-bb32-4554-b369-ea7c723de9b9_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:563e127542f02116a3b06ccb3f98f17a86e531d4c5e59154b89ec4182ea60c91
3
+ size 446302
acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/full.md ADDED
@@ -0,0 +1,799 @@
1
+ # Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance
2
+
3
+ Nikita Kornilov
4
+
5
+ MIPT, SkolTech
6
+
7
+ kornilov.nm@phystech.edu
8
+
9
+ Ohad Shamir
10
+
11
+ Weizmann Institute of Science
12
+
13
+ ohad.shamir@weizmann.ac.il
14
+
15
+ Aleksandr Lobanov
16
+
17
+ MIPT, ISP RAS
18
+
19
+ lobanov.av@mipt.ru
20
+
21
+ Darina Dvinskikh
22
+
23
+ HSE University, ISP RAS
24
+
25
+ dmdvinskikh@hse.ru
26
+
27
+ Alexander Gasnikov
28
+
29
+ MIPT,
30
+
31
+ ISP RAS, SkolTech
32
+
33
+ gasnikov@yandex.ru
34
+
35
+ Innokentiy Shibaev
36
+
37
+ MIPT,
38
+
39
+ IITP RAS
40
+
41
+ innokenti.shibayev@phystech.edu
42
+
43
+ Eduard Gorbunov
44
+
45
+ MBZUAI
46
+
47
+ eduard.gorbunov@mbzuai.ac.ae
48
+
49
+ Samuel Horváth
50
+
51
+ MBZUAI
52
+
53
+ samuel.horvath@mbzuai.ac.ae
54
+
55
+ # Abstract
56
+
57
+ In this paper, we consider non-smooth stochastic convex optimization with two function evaluations per round under infinite noise variance. In the classical setting when noise has finite variance, an optimal algorithm, built upon the batched accelerated gradient method, was proposed in [17]. This optimality is defined in terms of iteration and oracle complexity, as well as the maximal admissible level of adversarial noise. However, the assumption of finite variance is burdensome and it might not hold in many practical scenarios. To address this, we demonstrate how to adapt a refined clipped version of the accelerated gradient (Stochastic Similar Triangles) method from [35] for a two-point zero-order oracle. This adaptation entails extending the batching technique to accommodate infinite variance — a non-trivial task that stands as a distinct contribution of this paper.
58
+
59
+ # 1 Introduction
60
+
61
+ In this paper, we consider stochastic non-smooth convex optimization problem
62
+
63
+ $$
64
+ \min _ {x \in \mathbb {R} ^ {d}} \left\{f (x) \stackrel {\text {d e f}} {=} \mathbb {E} _ {\xi \sim \mathcal {D}} [ f (x, \xi) ] \right\}, \tag {1}
65
+ $$
66
+
67
+ where $f(x,\xi)$ is $M_2(\xi)$ -Lipschitz continuous in $x$ w.r.t. the Euclidean norm, and the expectation $\mathbb{E}_{\xi \sim \mathcal{D}}[f(x,\xi)]$ is with respect to a random variable $\xi$ with unknown distribution $\mathcal{D}$ . The optimization is performed only by accessing two function evaluations per round rather than sub-gradients, i.e., for any pair of points $x,y\in \mathbb{R}^d$ , an oracle returns $f(x,\xi)$ and $f(y,\xi)$ with the same $\xi$ . The primary motivation for this gradient-free oracle arises from different applications where calculating gradients is computationally infeasible or even impossible. For instance, in medicine, biology, and physics, the objective function can only be computed through numerical simulation or as the result of a real experiment, i.e., automatic differentiation cannot be employed to calculate function derivatives. Usually, a black-box function we are optimizing is affected by stochastic or computational noise. This noise can arise naturally from modeling randomness within a simulation or by computer discretization.
68
+
69
+ In the classical setting, this noise has light tails. However, in black-box optimization we usually know nothing about the function - only its values at the requested points are available/computable - so the light-tails assumption may be violated. In this case, gradient-free algorithms may diverge. We aim to develop an algorithm that is robust even to heavy-tailed noise, i.e., noise with infinite variance. In theory, one can consider heavy-tailed noise to simulate situations where noticeable outliers may occur (even if the nature of these outliers is non-stochastic). Therefore, we relax the classical finite-variance assumption and consider the less burdensome assumption of a finite $\alpha$ -th moment, where $\alpha \in (1,2]$ .
70
+
71
+ In machine learning, interest in gradient-free methods is mainly driven by the bandit optimization problem [14, 2, 5], where a learner engages in a game with an adversary: the learner selects a point $x$ , and the adversary chooses a point $\xi$ . The learner's goal is to minimize the average regret based solely on observations of function values (losses) $f(x, \xi)$ . As feedback, at each round, the learner receives losses at two points. This corresponds to a zero-order oracle in stochastic convex optimization with two function evaluations per round. The vast majority of research assumes a sub-Gaussian distribution of rewards. However, in some practical cases (e.g., in finance [33]) the reward distribution has heavy tails or can be adversarial. For heavy-tailed bandit optimization, we refer to [9].
72
+
73
+ Two-point gradient-free optimization for non-smooth (strongly) convex objectives is a well-researched area. Numerous algorithms have been proposed which are optimal with respect to two criteria: oracle and iteration complexity. For a detailed overview, see the recent survey [15] and the references therein. Optimal algorithms, in terms of oracle call complexity, are presented in [10, 36, 3]. The distinction between the number of successive iterations (which cannot be executed in parallel) and the number of oracle calls was initiated with the lower bound obtained in [6]. It culminated with the optimal results from [17], which provide an algorithm that is optimal in both criteria. Specifically, the algorithm produces $\hat{x}$ , an $\varepsilon$ -solution of (1), such that we can guarantee $\mathbb{E}[f(\hat{x})] - \min_{x \in \mathbb{R}^d} f(x) \leq \varepsilon$ after
74
+
75
+ $$
76
+ \begin{array}{l} \sim d^{\frac{1}{4}} \varepsilon^{-1} \quad \text{successive iterations}, \\ \sim d \varepsilon^{-2} \quad \text{oracle calls (number of } f(x, \xi) \text{ calculations)}. \end{array}
77
+ $$
78
+
79
+ The convergence guarantee for this optimal algorithm from [17] was established in the classical setting of light-tailed noise, i.e., when noise has finite variance: $\mathbb{E}_{\xi}[M_2(\xi)^2] < \infty$ . However, in many modern learning problems the variance might not be finite, leading the aforementioned algorithms to potentially diverge. Indeed, heavy-tailed noise is prevalent in contemporary applications of statistics and deep learning. For example, heavy-tailed behavior can be observed in training attention models [41] and convolutional neural networks [37, 20]. Consequently, our goal is to develop an optimal algorithm whose convergence is not hampered by this restrictive assumption. To the best of our knowledge, no existing literature on gradient-free optimization allows for $\mathbb{E}_{\xi}[M_2(\xi)^2]$ to be infinite. Furthermore, convergence results for all these gradient-free methods were provided in expectation, that is, without (non-trivial) high-probability bounds. Although the authors of [17] mentioned (without proof) that their results can be formulated in high probability using [19], this aspect notably affects the oracle calls complexity bound and complicates the analysis.
80
+
81
+ A common technique to relax finite variance assumption is gradient clipping [31]. Starting from the work of [26] (see also [8, 19]), there has been increased interest in algorithms employing gradient clipping to achieve high-probability convergence guarantees for stochastic optimization problems with heavy-tailed noise. Particularly, in just the last two years there have been proposed
82
+
83
+ - an optimal algorithm with a general proximal setup for non-smooth stochastic convex optimization problems with infinite variance [39] that converges in expectation (also referenced in [27]),
84
+ - an optimal adaptive algorithm with a general proximal setup for non-smooth online stochastic convex optimization problems with infinite variance [42] that converges with high probability,
85
+ - optimal algorithms using the Euclidean proximal setup for both smooth and non-smooth stochastic convex optimization problems and variational inequalities with infinite variance [35, 30, 29] that converge with high probability,
86
+ - an optimal variance-adaptive algorithm with the Euclidean proximal setup for non-smooth stochastic (strongly) convex optimization problems with infinite variance [24] that converges with high probability.
87
+
88
+ None of these papers discuss a gradient-free oracle. Moreover, they do not incorporate acceleration (given the non-smooth nature of the problems) with the exception of [35]. Acceleration is a crucial
89
+
90
+ component for achieving optimal bounds on the number of successive iterations. However, the approach in [35] presumes smoothness and does not utilize batching. To apply the convergence results from [35] to our problem, we need to adjust our problem formulation to be smooth. This is achieved by using the Smoothing technique [27, 36, 17]. In [22], the authors proposed an algorithm based on the Smoothing technique and non-accelerated Stochastic Mirror Descent with clipping. However, that work also does not support acceleration, minimization over the whole space, or batching. Adapting the technique from [35] to incorporate batching necessitates a substantial generalization. We regard this aspect of our work as being of primary interest.
91
+
92
+ Heavy-tailed noise can also be handled without explicit gradient clipping, e.g., by using the Stochastic Mirror Descent algorithm with a particular class of uniformly convex mirror maps [39]. However, the convergence guarantee for this algorithm is given in expectation. Moreover, applying batching and acceleration there is a non-trivial task; without them, we cannot obtain a method that is optimal in terms of the number of iterations and not only in terms of oracle complexity. There are also some studies on alternatives to gradient clipping [21], but the results for these alternatives are given in expectation and are weaker than the state-of-the-art results for methods with clipping. This is another reason why we have chosen gradient clipping to handle the heavy-tailed noise.
93
+
94
+ # 1.1 Contributions
95
+
96
+ We generalize the optimal result from [17] to accommodate a weaker assumption that allows the noise to exhibit heavy tails. Instead of the classical assumption of finite variance, we require the boundedness of the $\alpha$ -moment: there exists $\alpha \in (1,2]$ such that $\mathbb{E}_{\xi}[M_2(\xi)^{\alpha}] < \infty$ . Notably, when $\alpha < 2$ , this assumption is less restrictive than the assumption of a finite variance and thus it has garnered considerable interest recently, see [41, 35, 29] and the references therein. Under this assumption we prove that for convex $f$ , an $\varepsilon$ -solution can be found with high probability after
97
+
98
+ $$
99
+ \begin{array}{l} \sim d^{\frac{1}{4}} \varepsilon^{-1} \quad \text{successive iterations}, \\ \sim \left(\sqrt{d} / \varepsilon\right)^{\frac{\alpha}{\alpha - 1}} \quad \text{oracle calls}, \end{array}
100
+ $$
101
+
102
+ and for $\mu$ -strongly convex $f$ , the $\varepsilon$ -solution can be found with high probability after
103
+
104
+ $$
105
+ \begin{array}{l} \sim d^{\frac{1}{4}} (\mu \varepsilon)^{-\frac{1}{2}} \quad \text{successive iterations}, \\ \sim \left(d / (\mu \varepsilon)\right)^{\frac{\alpha}{2 (\alpha - 1)}} \quad \text{oracle calls}. \end{array}
106
+ $$
107
+
108
+ In both instances, the number of oracle calls is optimal in terms of $\varepsilon$ -dependency within the non-smooth setting [27, 39]. For first-order optimization under heavy-tailed noise, the optimal $\varepsilon$ -dependency remains consistent, as shown in [35, Table 1].
109
+
110
+ In what follows, we highlight several important aspects of our results:
111
+
112
+ - High-probability guarantees. We provide upper-bounds on the number of iterations/oracle calls needed to find a point $\hat{x}$ such that $f(\hat{x}) - \min_{x\in \mathbb{R}^d}f(x)\leq \varepsilon$ with probability at least $1 - \beta$ . The derived bounds have a poly-logarithmic dependence on $1 / \beta$ . To the best of our knowledge, there are no analogous high-probability results, even for noise with bounded variance.
113
+ - Generality of the setup. Our results are derived under the assumption that the gradient-free oracle returns values of stochastic realizations $f(x, \xi)$ subject to (potentially adversarial) bounded noise. We further provide upper bounds for the magnitude of this noise, contingent upon the target accuracy $\varepsilon$ and confidence level $\beta$ . Notably, our assumptions about the objective and noise are confined to a compact subset of $\mathbb{R}^d$ . This approach, which differs from standard ones in derivative-free optimization, allows us to encompass a wide range of problems.
114
+ - Batching without bounded variance. To establish the aforementioned upper bounds, we obtain the following: given $X_{1},\ldots ,X_{B}$ as independent random vectors in $\mathbb{R}^d$ where $\mathbb{E}[X_i] = x\in \mathbb{R}^d$ and $\mathbb{E}\| X_i - x\| _2^\alpha \leq \sigma^\alpha$ for some $\sigma \geq 0$ and $\alpha \in (1,2]$ , then
115
+
116
+ $$
117
+ \mathbb {E} \left[ \left\| \frac {1}{B} \sum_ {i = 1} ^ {B} X _ {i} - x \right\| _ {2} ^ {\alpha} \right] \leq \frac {2 \sigma^ {\alpha}}{B ^ {\alpha - 1}}. \tag {2}
118
+ $$
119
+
120
+ When $\alpha = 2$ , this result aligns with the conventional case of bounded variance (up to a numerical factor of 2). Unlike existing findings, such as [40, Lemma 7] where $\alpha < 2$ , the relation (2) does not exhibit a dependency on the dimension $d$ . Moreover, (2) offers a theoretical basis for the benefits of mini-batching, applicable to the methods highlighted in this paper as well as the first-order methods presented in [35, 29]; a small simulation sketch illustrating (2) is given after this list.
121
+
122
+ - Dependency on $d$ . As far as we are aware, an open question remains: is the bound $\left(\sqrt{d} / \varepsilon\right)^{\frac{\alpha}{\alpha - 1}}$ optimal regarding its dependence on $d$ ? For smooth stochastic convex optimization problems using a $(d + 1)$ -points stochastic zeroth-order oracle, the answer is negative. The optimal bound is proportional to $d\varepsilon^{-\frac{\alpha}{\alpha - 1}}$ . Consequently, for $\alpha \in (1,2)$ , our results are intriguing because the dependence on $d$ in our bound differs from known results in the classical case where $\alpha = 2$ .
123
+
124
+ # 1.2 Paper organization
125
+
126
+ The paper is organized as follows. In Section 2, we give some preliminaries, such as smoothing technique and gradient estimation, that are workhorse of our algorithms. Section 3 is the main section presenting two novel gradient-free algorithms along with their convergence results in high probability. These algorithms solve non-smooth stochastic optimization under heavy-tailed noise, and they will be referred to as ZO-clipped-SSTM and R-ZO-clipped-SSTM (see Algorithms 1 and 2 respectively). In Section 4, we extend our results to gradient-free oracle corrupted by additive deterministic adversarial noise. In Section 5, we describe the main ideas behind the proof and emphasize key lemmas. In Section 6, we provide numerical experiments on the synthetic task that demonstrate the robustness of the proposed algorithms towards heavy-tailed noise.
127
+
128
+ # 2 Preliminaries
129
+
130
+ Assumptions on a subset. Although we consider an unconstrained optimization problem, our analysis does not require any assumptions to hold on the entire space. For our purposes, it is sufficient to introduce all assumptions only on some convex set $Q \in \mathbb{R}^d$ since we prove that the considered methods do not leave some ball around the solution or some level-set of the objective function with high probability. This allows us to consider fairly large classes of problems.
131
+
132
+ Assumption 1 (Strong convexity) There exist a convex set $Q \subset \mathbb{R}^d$ and $\mu \geq 0$ such that function $f(x, \xi)$ is $\mu$ -strongly convex on $Q$ for any fixed $\xi$ , i.e.
133
+
134
+ $$
135
+ f (\lambda x _ {1} + (1 - \lambda) x _ {2}, \xi) \leq \lambda f (x _ {1}, \xi) + (1 - \lambda) f (x _ {2}, \xi) - \frac {1}{2} \mu \lambda (1 - \lambda) \| x _ {1} - x _ {2} \| _ {2} ^ {2},
136
+ $$
137
+
138
+ for all $x_{1},x_{2}\in Q,\lambda \in [0,1]$
139
+
140
+ This assumption implies that $f(x)$ is $\mu$ -strongly convex as well.
141
+
142
+ For a small constant $\tau > 0$ , let us define an expansion of the set $Q$ , namely $Q_{\tau} = Q + \tau B_2^d$ , where $+$ stands for Minkowski addition. Using this expansion we make the following assumption.
143
+
144
+ Assumption 2 (Lipschitz continuity and boundedness of $\alpha$ -moment) There exist a convex set $Q \subset \mathbb{R}^d$ and $\tau > 0$ such that function $f(x, \xi)$ is $M_2(\xi)$ -Lipschitz continuous w.r.t. the Euclidean norm on $Q_{\tau}$ , i.e., for all $x_1, x_2 \in Q_{\tau}$
145
+
146
+ $$
147
+ \left| f \left(x _ {1}, \xi\right) - f \left(x _ {2}, \xi\right) \right| \leq M _ {2} (\xi) \| x _ {1} - x _ {2} \| _ {2}.
148
+ $$
149
+
150
+ Moreover, there exist $\alpha \in (1,2]$ and $M_2 > 0$ such that $\mathbb{E}_{\xi}[M_2(\xi)^{\alpha}]\leq M_2^{\alpha}$ .
151
+
152
+ If $\alpha < 2$ , we say that noise is heavy-tailed. When $\alpha = 2$ , the above assumption recovers the standard uniformly bounded variance assumption.
153
+
154
+ Lemma 1 Assumption 2 implies that $f(x)$ is $M_2$ -Lipschitz on $Q$ .
155
+
156
+ Randomized smoothing. The main scheme that allows us to develop batch-parallel gradient-free methods for non-smooth convex problems is randomized smoothing [13, 17, 27, 28, 38] of a non-smooth function $f(x,\xi)$ . The smooth approximation to a non-smooth function $f(x,\xi)$ is defined as
157
+
158
+ $$
159
+ \hat {f} _ {\tau} (x) \stackrel {\text {d e f}} {=} \mathbb {E} _ {\mathbf {u}, \xi} [ f (x + \tau \mathbf {u}, \xi) ], \tag {3}
160
+ $$
161
+
162
+ where $\mathbf{u} \sim U(B_2^d)$ is a random vector uniformly distributed on the Euclidean unit ball $B_2^d \stackrel{\mathrm{def}}{=} \{x \in \mathbb{R}^d : \|x\|_2 \leq 1\}$ . In this approximation, a new type of randomization appears in addition to stochastic variable $\xi$ .
163
+
164
+ The next lemma gives estimates for the quality of this approximation. In contrast to $f(x)$ , function $\hat{f}_{\tau}(x)$ is smooth and has several useful properties.
165
+
166
+ Lemma 2 [17, Theorem 2.1.] Let there exist a subset $Q \subset \mathbb{R}^d$ and $\tau > 0$ such that Assumptions 1 and 2 hold on $Q_{\tau}$ . Then,
167
+
168
+ 1. Function $\hat{f}_{\tau}(x)$ is convex, Lipschitz with constant $M_2$ on $Q$ , and satisfies
169
+
170
+ $$
171
+ \sup _ {x \in Q} | \hat {f} _ {\tau} (x) - f (x) | \leq \tau M _ {2}.
172
+ $$
173
+
174
+ 2. Function $\hat{f}_{\tau}(x)$ is differentiable on $Q$ with the following gradient
175
+
176
+ $$
177
+ \nabla \hat {f} _ {\tau} (x) = \mathbb {E} _ {\mathbf {e}} \left[ \frac {d}{\tau} f (x + \tau \mathbf {e}) \mathbf {e} \right],
178
+ $$
179
+
180
+ where $\mathbf{e} \sim U(S_2^d)$ is a random vector uniformly distributed on unit Euclidean sphere $S_2^d \stackrel{\text{def}}{=} \{x \in \mathbb{R}^d : \|x\|_2 = 1\}$ .
181
+
182
+ 3. Function $\hat{f}_{\tau}(x)$ is $L$ -smooth with $L = \sqrt{d} M_2 / \tau$ on $Q$ .
183
+
184
+ Our algorithms will aim at minimizing the smooth approximation $\hat{f}_{\tau}(x)$ . Given Lemma 2, the output of the algorithm will also be a good approximate minimizer of $f(x)$ when $\tau$ is sufficiently small.
185
+
186
+ Gradient estimation. Our algorithms will be based on randomized gradient estimates, which will then be used in a first order algorithm. Following [36], the gradient can be estimated by the following vector:
187
+
188
+ $$
189
+ g (x, \xi , \mathbf {e}) = \frac {d}{2 \tau} \left(f (x + \tau \mathbf {e}, \xi) - f (x - \tau \mathbf {e}, \xi)\right) \mathbf {e}, \tag {4}
190
+ $$
191
+
192
+ where $\tau > 0$ and $\mathbf{e} \sim U(S_2^d)$ . We also use the batching technique in order to allow parallel computation of the gradient estimate and acceleration. Let $B$ be the batch size; we sample $\{\mathbf{e}_i\}_{i=1}^B$ and $\{\xi_i\}_{i=1}^B$ independently and define
193
+
194
+ $$
195
+ g ^ {B} (x, \{\xi \}, \{\mathbf {e} \}) = \frac {d}{2 B \tau} \sum_ {i = 1} ^ {B} \left(f \left(x + \tau \mathbf {e} _ {i}, \xi_ {i}\right) - f \left(x - \tau \mathbf {e} _ {i}, \xi_ {i}\right)\right) \mathbf {e} _ {i}. \tag {5}
196
+ $$
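+
+ A minimal NumPy sketch of the estimator (5) is given below; the oracle interface `f(x, xi)` (with the same noise sample $\xi$ shared by both evaluations, as the two-point setting requires) and the external noise sampler `sample_xi` are assumptions of the example.
+
+ ```python
+ import numpy as np
+
+ def grad_estimate(f, sample_xi, x, tau, B, rng):
+     """Batched two-point gradient estimator g^B from Eq. (5)."""
+     d = x.shape[0]
+     g = np.zeros(d)
+     for _ in range(B):
+         e = rng.standard_normal(d)
+         e /= np.linalg.norm(e)                 # e ~ Uniform(S_2^d)
+         xi = sample_xi()                       # one xi shared by both evaluations
+         g += (f(x + tau * e, xi) - f(x - tau * e, xi)) * e
+     return d / (2.0 * B * tau) * g
+ ```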
197
+
198
+ The next lemma states that $g^{B}(x, \{\xi\}, \{\mathbf{e}\})$ from (5) is an unbiased estimate of the gradient of $\hat{f}_{\tau}(x)$ from (3). Moreover, under heavy-tailed noise (Assumption 2) with a bounded $\alpha$ -moment, $g^{B}(x, \{\xi\}, \{\mathbf{e}\})$ also has a bounded $\alpha$ -moment.
199
+
200
+ Lemma 3 Under Assumptions 1 and 2, it holds
201
+
202
+ $$
203
+ \mathbb {E} _ {\xi , \mathbf {e}} [ g (x, \xi , \mathbf {e}) ] = \mathbb {E} _ {\{\xi \}, \{\mathbf {e} \}} [ g ^ {B} (x, \{\xi \}, \{\mathbf {e} \}) ] = \nabla \hat {f} _ {\tau} (x).
204
+ $$
205
+
206
+ and
207
+
208
+ $$
209
+ \mathbb {E} _ {\xi , \mathbf {e}} [ \| g (x, \xi , \mathbf {e}) - \mathbb {E} _ {\xi , \mathbf {e}} [ g (x, \xi , \mathbf {e}) ] \| _ {2} ^ {\alpha} ] \leq \sigma^ {\alpha} \stackrel {{d e f}} {{=}} \left(\frac {\sqrt {d} M _ {2}}{2 ^ {\frac {1}{4}}}\right) ^ {\alpha},
210
+ $$
211
+
212
+ $$
213
+ \mathbb {E} _ {\{\xi \}, \{\mathbf {e} \}} [ \| g ^ {B} (x, \{\xi \}, \{\mathbf {e} \}) - \mathbb {E} _ {\{\xi \}, \{\mathbf {e} \}} [ g ^ {B} (x, \{\xi \}, \{\mathbf {e} \}) ] \| _ {2} ^ {\alpha} ] \leq \frac {2 \sigma^ {\alpha}}{B ^ {\alpha - 1}}. \tag {6}
214
+ $$
215
+
216
+ # 3 Main Results
217
+
218
+ In this section, we present our two new zero-order algorithms, which we refer to as ZO-clipped-SSTM and R-ZO-clipped-SSTM, to solve problem (1) under the heavy-tailed noise assumption. To deal with heavy-tailed noise, we use the clipping technique, which clips heavy tails. Let $\lambda > 0$ be the clipping level and $g \in \mathbb{R}^d$ ; then the clipping operator clip is defined as
219
+
220
+ $$
221
+ \operatorname {c l i p} (g, \lambda) = \left\{ \begin{array}{l l} \frac {g}{\| g \| _ {2}} \min \left(\| g \| _ {2}, \lambda\right), & g \neq 0, \\ 0, & g = 0. \end{array} \right. \tag {7}
222
+ $$
223
+
224
+ We apply the clipping operator to the batched gradient estimate $g^{B}(x,\{\xi \} ,\{\mathbf{e}\})$ from (5) and then feed it into the first-order Clipped Stochastic Similar Triangles Method (clipped-SSTM) from [18]. We refer to our zero-order versions of clipped-SSTM as ZO-clipped-SSTM and R-ZO-clipped-SSTM for the convex and strongly convex cases, respectively.
225
+
226
+ # 3.1 Convex case
227
+
228
+ Let us suppose that Assumption 1 is satisfied with $\mu = 0$
229
+
230
+ Algorithm 1 ZO-clipped-SSTM $\left(x^{0},K,B,a,\tau ,\{\lambda_{k}\}_{k = 0}^{K - 1}\right)$
231
+ Input: starting point $x^0$ , number of iterations $K$ , batch size $B$ , stepsize $a > 0$ , smoothing parameter $\tau$ , clipping levels $\{\lambda_k\}_{k = 0}^{K - 1}$
232
+ 1: Set $L = \sqrt{d} M_2 / \tau$ , $A_0 = \alpha_0 = 0$ , $y^{0} = z^{0} = x^{0}$ .
233
+ 2: for $k = 0,\dots ,K - 1$ do
234
+ 3: Set $\alpha_{k + 1} = \frac{k + 2}{2 a L}$ , $A_{k + 1} = A_{k} + \alpha_{k + 1}$ .
235
+ 4: $x^{k + 1} = \frac{A_ky^k + \alpha_{k + 1}z^k}{A_{k + 1}}.$
236
+ 5: Sample $\{\xi_i^k\}_{i = 1}^B\sim \mathcal{D}$ and $\{\mathbf{e}_i^k\}_{i = 1}^B\sim S_2^d$ independently.
237
+ 6: Compute gradient estimation $g^{B}(x^{k + 1},\{\xi^{k}\} ,\{\mathbf{e}^{k}\})$ as defined in (5).
238
+ 7: Compute clipped $\tilde{g}_{k + 1} = \mathrm{clip}\left(g^{B}(x^{k + 1},\{\xi^{k}\} ,\{\mathbf{e}^{k}\}),\lambda_{k}\right)$ as defined in (7).
239
+ 8: $z^{k + 1} = z^{k} - \alpha_{k + 1}\tilde{g}_{k + 1}$
240
+ 9: $y^{k + 1} = \frac{A_ky^k + \alpha_{k + 1}z^{k + 1}}{A_{k + 1}}.$
241
+ 10: end for
242
+ Output: $y^{K}$
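+
+ A compact sketch of the loop in Algorithm 1 follows; `grad_est(x)` stands for the batched estimate $g^B$ of (5) (e.g., the sketch from Section 2), and passing the clipping levels as a list mirrors the input $\{\lambda_k\}$ . Constants such as $a$ and $L$ are taken as inputs, as in the algorithm.
+
+ ```python
+ import numpy as np
+
+ def zo_clipped_sstm(grad_est, x0, K, a, L, lambdas):
+     """ZO-clipped-SSTM (Algorithm 1): similar-triangles steps with clipped estimates."""
+     y, z, A = x0.copy(), x0.copy(), 0.0
+     for k in range(K):
+         alpha = (k + 2) / (2.0 * a * L)
+         A_next = A + alpha
+         x = (A * y + alpha * z) / A_next                            # step 4
+         g = grad_est(x)                                             # steps 5-6: g^B at x^{k+1}
+         norm = np.linalg.norm(g)
+         g = g if norm == 0 else g * min(1.0, lambdas[k] / norm)     # step 7: clip, Eq. (7)
+         z = z - alpha * g                                           # step 8
+         y = (A * y + alpha * z) / A_next                            # step 9
+         A = A_next
+     return y
+ ```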
243
+
244
+ Theorem 1 (Convergence of ZO-clipped-SSTM) Let Assumptions 1 and 2 hold with $\mu = 0$ and $Q = \mathbb{R}^d$ . Let $\| x^0 - x^*\|^2 \leq R^2$ , where $x^0$ is a starting point and $x^*$ is an optimal solution to (1). Then for the output $y^K$ of ZO-clipped-SSTM, run with batchsize $B$ , $A = \ln \frac{4K}{\beta} \geq 1$ , $a = \Theta\left(\max\left\{A^2, \frac{\sqrt{d} M_2 K^{\frac{\alpha + 1}{\alpha}} A^{\frac{\alpha - 1}{\alpha}}}{L R B^{\frac{\alpha - 1}{\alpha}}}\right\}\right)$ , $\tau = \varepsilon / (4 M_2)$ and $\lambda_k = \Theta(R / (\alpha_{k+1}A))$ , we can guarantee $f(y^K) - f(x^*) \leq \varepsilon$ with probability at least $1 - \beta$ (for any $\beta \in (0,1]$ ) after
245
+
246
+ $$
247
+ K = \widetilde {\mathcal {O}} \left(\max \left\{\frac {M _ {2} \sqrt [ 4 ]{d} R}{\varepsilon}, \frac {1}{B} \left(\frac {\sqrt {d} M _ {2} R}{\varepsilon}\right) ^ {\frac {\alpha}{\alpha - 1}} \right\}\right) \tag {8}
248
+ $$
249
+
250
+ successive iterations and $K \cdot B$ oracle calls. Moreover, with probability at least $1 - \beta$ the iterates of ZO-clipped-SSTM remain in the ball $B_{2R}(x^*)$ , i.e., $\{x^k\}_{k=0}^{K+1}, \{y^k\}_{k=0}^{K}, \{z^k\}_{k=0}^{K} \subseteq B_{2R}(x^*)$ .
251
+
252
+ Here and below, the notation $\tilde{\mathcal{O}}$ denotes an upper bound on the growth rate up to logarithmic factors. The first term in bound (8) is optimal for the deterministic non-smooth case (see [6]), and the second term in bound (8) is optimal in $\varepsilon$ for $\alpha \in (1,2]$ and a zero-order oracle (see [27]).
253
+
254
+ We notice that increasing the batch size $B$ to reduce the number of successive iterations makes sense only as long as the first term of (8) is lower than the second one, i.e., there is an optimal batchsize value
255
+
256
+ $$
257
+ B \leq \left(\frac {\sqrt {d} M _ {2} R}{\varepsilon}\right) ^ {\frac {1}{\alpha - 1}}.
258
+ $$
259
+
260
+ # 3.2 Strongly-convex case
261
+
262
+ Now we suppose that Assumption 1 is satisfied with $\mu >0$ . For this case we employ ZO-clipped-SSTM with the restart technique. We call this algorithm R-ZO-clipped-SSTM (see Algorithm 2). At each round, R-ZO-clipped-SSTM calls ZO-clipped-SSTM with starting point $\hat{x}^{t-1}$ , which is the output from the previous round, for $K_{t}$ iterations.
263
+
264
+ # Algorithm 2 R-ZO-clipped-SSTM
265
+
266
+ Input: starting point $x^0$ , number of restarts $N$ , numbers of steps $\{K_t\}_{t=1}^N$ , batchsizes $\{B_t\}_{t=1}^N$ , stepsizes $\{a_t\}_{t=1}^N$ , smoothing parameters $\{\tau_t\}_{t=1}^N$ , clipping levels $\{\lambda_k^1\}_{k=0}^{K_1-1}, \dots, \{\lambda_k^N\}_{k=0}^{K_N-1}$
267
+
268
+ 1: $\hat{x}^0 = x^0$
269
+ 2: for $t = 1, \dots, N$ do
270
+ 3: $\hat{x}^t = \mathsf{ZO}$ -clipped-SSTM $\left(\hat{x}^{t - 1},K_t,B_t,a_t,\tau_t,\{\lambda_k^t\}_{k = 0}^{K_t - 1}\right)$ .
271
+ 4: end for
272
+
273
+ Output: $\hat{x}^N$
274
+
275
+ Theorem 2 (Convergence of R-ZO-clipped-SSTM) Let Assumptions 1, 2 hold with $\mu >0$ and $Q = \mathbb{R}^d$ . Let $\| x^0 -x^*\| ^2\leq R^2$ , where $x^0$ is a starting point and $x^{*}$ is the optimal solution to (1). Let also $N = \lceil \log_2(\mu R^2 /2\varepsilon)\rceil$ be the number of restarts. Let, at each stage $t = 1,\dots,N$ of R-ZO-clipped-SSTM, ZO-clipped-SSTM be run with batchsize $B_{t}$ , $\tau_t = \varepsilon_t / (4M_2)$ , $L_{t} = M_{2}\sqrt{d} /\tau_{t}$ , $K_{t} = \widetilde{\Theta} (\max \{\sqrt{L_{t}R_{t - 1}^{2} / \varepsilon_{t}},(\sigma R_{t - 1} / \varepsilon_{t})^{\frac{\alpha}{\alpha - 1}} / B_{t}\})$ , $a_{t} = \widetilde{\Theta} (\max \{1,\sigma K_{t}^{\frac{\alpha + 1}{\alpha}} / (L_{t}R_{t})\})$ and $\lambda_k^t = \widetilde{\Theta} (R / \alpha_{k + 1}^t)$ , where $R_{t - 1} = 2^{-\frac{t - 1}{2}}R$ , $\varepsilon_t = \mu R_{t - 1}^2 /4$ , $\ln (4NK_t / \beta)\geq 1$ , and $\beta \in (0,1]$ . Then, to guarantee $f(\hat{x}^N) - f(x^*)\leq \varepsilon$ with probability at least $1 - \beta$ , R-ZO-clipped-SSTM requires
276
+
277
+ $$
278
+ \widetilde {\mathcal {O}} \left(\max \left\{\sqrt {\frac {M _ {2} ^ {2} \sqrt {d}}{\mu \varepsilon}}, \left(\frac {d M _ {2} ^ {2}}{\mu \varepsilon}\right) ^ {\frac {\alpha}{2 (\alpha - 1)}} \right\}\right) \tag {9}
279
+ $$
280
+
281
+ oracle calls. Moreover, with probability at least $1 - \beta$ the iterates of R-ZO-clipped-SSTM at stage $t = 1,\ldots ,N$ stay in the ball $B_{2R_{t - 1}}(x^{*})$ .
282
+
283
+ The obtained complexity bound (see the proof in Appendix C.2) is the first optimal (up to logarithms) high-probability complexity bound under Assumption 2 for the smooth strongly convex problems. Indeed, the first term cannot be improved in view of the deterministic lower bound [27], and the second term is optimal [41].
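+
+ A schematic restart driver following Algorithm 2 and the stage parameters of Theorem 2 ( $R_{t-1} = 2^{-(t-1)/2} R$ , $\varepsilon_t = \mu R_{t-1}^2 / 4$ , $\tau_t = \varepsilon_t / (4 M_2)$ , $L_t = \sqrt{d} M_2 / \tau_t$ ) is sketched below; the choice of $K_t$ , $B_t$ , $a_t$ and the clipping levels is delegated to a user-supplied `run_stage` callback, since those quantities depend on constants hidden in the $\widetilde{\Theta}$ bounds.
+
+ ```python
+ import numpy as np
+
+ def r_zo_clipped_sstm(x0, num_restarts, R, mu, M2, d, run_stage):
+     """R-ZO-clipped-SSTM (Algorithm 2): repeatedly warm-start ZO-clipped-SSTM."""
+     x_hat = x0
+     for t in range(1, num_restarts + 1):
+         R_prev = 2.0 ** (-(t - 1) / 2.0) * R
+         eps_t = mu * R_prev ** 2 / 4.0
+         tau_t = eps_t / (4.0 * M2)
+         L_t = np.sqrt(d) * M2 / tau_t
+         # run_stage is expected to call zo_clipped_sstm with the stage-specific
+         # K_t, B_t, a_t and clipping levels prescribed (up to constants) by Theorem 2
+         x_hat = run_stage(x_hat, tau_t, L_t, eps_t)
+     return x_hat
+ ```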
284
+
285
+ # 4 Setting with Adversarial Noise
286
+
287
+ Often, black-box evaluations of $f(x,\xi)$ are affected by some deterministic noise $\delta (x)$ . Thus, we now suppose that the zero-order oracle, instead of the objective values $f(x,\xi)$ , returns their noisy approximation
288
+
289
+ $$
290
+ f _ {\delta} (x, \xi) \stackrel {\text {d e f}} {=} f (x, \xi) + \delta (x). \tag {10}
291
+ $$
292
+
293
+ This noise $\delta (x)$ can be interpreted, e.g., as a computer discretization when calculating $f(x,\xi)$ . For our analysis, we need this noise to be uniformly bounded. Recently, noisy «black-box» optimization with bounded noise has been actively studied [11, 25]. The authors of [11] consider deterministic adversarial noise, while in [25] stochastic adversarial noise was explored.
294
+
295
+ Assumption 3 (Boundedness of noise) There exists a constant $\Delta >0$ such that $|\delta (x)|\leq \Delta$ for all $x\in Q$
296
+
297
+ This is a standard assumption often used in the literature (e.g., [17]). Moreover, in some applications [4] the bigger the noise the cheaper the zero-order oracle. Thus, it is important to understand the maximum allowable level of adversarial noise at which the convergence of the gradient-free algorithm is unaffected.
298
+
299
+ # 4.1 Non-smooth setting
300
+
301
+ In noisy setup, gradient estimate from (4) is replaced by
302
+
303
+ $$
304
+ g (x, \xi , \mathbf {e}) = \frac {d}{2 \tau} \left(f _ {\delta} (x + \tau \mathbf {e}, \xi) - f _ {\delta} (x - \tau \mathbf {e}, \xi)\right) \mathbf {e}. \tag {11}
305
+ $$
306
+
307
+ Then (6) from Lemma 3 will have an extra factor driven by noise (see Lemma 2.3 from [22])
308
+
309
+ $$
310
+ \mathbb {E} _ {\{\xi \}, \{\mathbf {e} \}} \left[ \left\| g ^ {B} (x, \{\xi \}, \{\mathbf {e} \}) - \mathbb {E} _ {\{\xi \}, \{\mathbf {e} \}} [ g ^ {B} (x, \{\xi \}, \{\mathbf {e} \}) ] \right\| _ {2} ^ {\alpha} \right] \leq \frac {2}{B ^ {\alpha - 1}} \left(\frac {\sqrt {d} M _ {2}}{2 ^ {\frac {1}{4}}} + \frac {d \Delta}{\tau}\right) ^ {\alpha}.
311
+ $$
312
+
313
+ To guarantee the same convergence of the algorithm as in Theorem 1 (see (8)) for the adversarial deterministic noise case, the variance term must dominate the noise term, i.e., $d\Delta \tau^{-1} \lesssim \sqrt{d} M_2$ . Note that if the term with noise dominates the term with variance, it does not mean that the gradient-free algorithm will not converge; on the contrary, the algorithm will still converge, only more slowly (the oracle complexity will be $\sim \varepsilon^{-2}$ times higher). Thus, if we were considering the zero-order oracle concept with adversarial stochastic noise, it would be enough to express the noise level $\Delta$ and substitute the value of the smoothing parameter $\tau$ to obtain the maximum allowable noise level. However, since we are considering the concept of adversarial noise in a deterministic setting, following previous work [11, 1] we can say that adversarial noise accumulates not only in the variance, but also in the bias:
314
+
315
+ $$
316
+ \mathbb {E} _ {\xi , \mathbf {e}} \langle [ g (x, \xi , \mathbf {e}) ] - \nabla \hat {f} _ {\tau} (x), r \rangle \lesssim \sqrt {d} \Delta \| r \| _ {2} \tau^ {- 1}, \quad \text {f o r a l l} r \in \mathbb {R} ^ {d}.
317
+ $$
318
+
319
+ This bias can be controlled by the noise level $\Delta$ , i.e., in order to achieve the $\varepsilon$ -accuracy algorithm considered in this paper, the noise condition must be satisfied:
320
+
321
+ $$
322
+ \Delta \lesssim \frac {\tau \varepsilon}{R \sqrt {d}}. \tag {12}
323
+ $$
324
+
325
+ As we can see, we have a more restrictive condition on the noise level in the bias (12) than in the variance ( $\Delta \lesssim \tau M_2 / \sqrt{d}$ ). Thus, the maximum allowable level of adversarial deterministic noise which guarantees the same convergence of ZO-clipped-SSTM as in Theorem 1 (see (8)) is as follows:
326
+
327
+ $$
328
+ \Delta \lesssim \frac {\varepsilon^ {2}}{R M _ {2} \sqrt {d}},
329
+ $$
330
+
331
+ where $\tau = \varepsilon / (2M_2)$ is the smoothing parameter from Lemma 2.
332
+
333
+ Remark 1 ( $\mu$ -strongly convex case) Let us assume that $f(x)$ is also $\mu$ -strongly convex (see Assumption 1). Then, following the works [11, 22], we can conclude that the R-ZO-clipped-SSTM has the same oracle and iteration complexity as in Theorem 2 at the following maximum allowable level of adversarial noise: $\Delta \lesssim \mu^{1/2} \varepsilon^{3/2} / \sqrt{d} M_2$ .
334
+
335
+ # 4.2 Smooth setting
336
+
337
+ Now we examine the maximum allowable level of noise at which we can solve optimization problem (1) with $\varepsilon$ -precision under the following additional assumption
338
+
339
+ Assumption 4 (Smoothness) The function $f$ is $L$ -smooth, i.e., it is differentiable on $Q$ and for all $x, y \in Q$ with $L > 0$ :
340
+
341
+ $$
342
+ \| \nabla f (y) - \nabla f (x) \| _ {2} \leq L \| y - x \| _ {2}.
343
+ $$
344
+
345
+ If Assumption 4 holds, then Lemma 2 can be rewritten as
346
+
347
+ $$
348
+ \sup _ {x \in Q} | \hat {f} _ {\tau} (x) - f (x) | \leq \frac {L \tau^ {2}}{2}.
349
+ $$
350
+
351
+ Thus, we can now present the convergence results of the gradient-free algorithm in the smooth setting. Specifically, if Assumptions 2-4 are satisfied, then ZO-clipped-SSTM converges to $\varepsilon$ -accuracy after $K = \widetilde{\mathcal{O}}\left(\sqrt{LR^2\varepsilon^{-1}}\right)$ iterations with probability at least $1 - \beta$ . It is easy to see that the iteration complexity improves in the smooth setting (since the Lipschitz gradient constant $L$ already exists,
352
+
353
+ i.e., no smoothing is needed), but the oracle complexity remains unchanged (since we still use the gradient approximation via $l_{2}$ randomization (11) instead of the true gradient $\nabla f(x)$ ), consistent with the already optimal estimate on oracle calls: $\widetilde{\mathcal{O}}\left(\left(\sqrt{d} M_2R\varepsilon^{-1}\right)^{\frac{\alpha}{\alpha - 1}}\right)$ . To obtain the maximum allowable level of adversarial noise $\Delta$ in the smooth setting which guarantees such convergence, it is sufficient to substitute the smoothing parameter $\tau = \sqrt{\varepsilon / L}$ into inequality (12):
354
+
355
+ $$
356
+ \Delta \lesssim \frac {\varepsilon^ {3 / 2}}{R \sqrt {d L}}.
357
+ $$
358
+
359
+ Thus, we can conclude that the smooth setting improves the iteration complexity and the maximum allowable noise level for the gradient-free algorithm, but the oracle complexity remains unchanged.
360
+
361
+ Remark 2 ( $\mu$ -strongly convex case) Suppose that $f(x)$ is also $\mu$ -strongly convex (see Assumption 1). Then we can conclude that R-ZO-clipped-SSTM has the oracle and iteration complexity just mentioned above at the following maximum allowable level of adversarial noise: $\Delta \lesssim \mu^{1/2} \varepsilon / \sqrt{dL}$ .
362
+
363
+ Remark 3 (Upper bounds optimality) The upper bounds on maximum allowable level of adversarial noise obtained in this section in both non-smooth and smooth settings are optimal in terms of dependencies on $\varepsilon$ and $d$ according to the works [32, 34].
364
+
365
+ Remark 4 (Better oracle complexity) In the aforementioned approach in the case when $f(x, \xi)$ has Lipschitz gradient in $x$ (for all $\xi$ ) one can improve oracle complexity from $\widetilde{\mathcal{O}}\left(\left(\sqrt{d}M_2R\varepsilon^{-1}\right)^{\frac{\alpha}{\alpha - 1}}\right)$ to $\widetilde{\mathcal{O}}\left(d(M_2R\varepsilon^{-1})^{\frac{\alpha}{\alpha - 1}}\right)$ . This can be done by using component-wise finite-difference stochastic gradient approximation [15]. Iteration complexity remains $\widetilde{\mathcal{O}}\left(\sqrt{LR^2\varepsilon^{-1}}\right)$ . The same can be done for $\mu$ -strongly convex case: from $\widetilde{\mathcal{O}}\left((dM_2^2(\mu\varepsilon)^{-1})^{\frac{\alpha}{2(\alpha - 1)}}\right)$ to $\widetilde{\mathcal{O}}\left(d(M_2^2(\mu\varepsilon)^{-1})^{\frac{\alpha}{2(\alpha - 1)}}\right)$ .
366
+
367
+ # 5 Details of the proof
368
+
369
+ The proof is built upon a combination of two techniques. The first one is the Smoothing technique from [17] that is used to develop a gradient-free method for convex non-smooth problems based on full-gradient methods. The second technique is the Accelerated Clipping technique that has been recently developed for smooth problems with the noise having infinite variance and first-order oracle [35]. The authors of [35] propose clipped-SSTM method that we develop in our paper. We modify clipped-SSTM by introducing batching into it. Note that due to the infinite variance, such a modification is interesting in itself. Then we run batched clipped-SSTM with gradient estimations of function $f$ obtained via Smoothing technique and two-point zeroth-order oracle. To do this, we need to estimate the variance of the clipped version of the batched gradient estimation.
370
+
371
+ In more detail, we replace the initial problem of minimizing $f$ by minimizing its smoothed approximation $\hat{f}_{\tau}$ , see Lemma 2 In order to use estimated gradient of $\hat{f}_{\tau}$ defined in (4) or (5), we make sure that it has bounded $\alpha$ -th moment. For these purposes we proof Lemma 3. First part shows boundness of unbatched estimated gradient $g$ defined in (4). It follows from measure concentration phenomenon for the Euclidean sphere for $\frac{M_2}{\tau}$ -Lipschitz function $f(x + \mathbf{e}\tau)$ . According to this phenomenon probability of the functions deviation from its math expectation becomes exponentially small and $\alpha$ -th moment of this deviation becomes bounded. Furthermore, the second part of Lemma 3 shows that batching helps to bound $\alpha$ -th moment of batched gradient $g^B$ defined in (5) even more. Also batching allows parallel calculations reducing number of necessary iteration with the same number of oracle calls. All this is possible thanks to the result, interesting in itself, presented in the following Lemma.
372
+
373
+ Lemma 4 Let $X_{1},\ldots ,X_{B}$ be $d$ -dimensional martingale difference sequence (i.e. $\mathbb{E}[X_i|X_{i - 1},\dots ,X_1] = 0$ for $1 < i\leq B)$ satisfying for $1\leq \alpha \leq 2$
374
+
375
+ $$
376
+ \mathbb {E} \left[ \| X _ {i} \| ^ {\alpha} \mid X _ {i - 1}, \dots , X _ {1} \right] \leq \sigma^ {\alpha}.
377
+ $$
378
+
379
+ Then we have
380
+
381
+ $$
382
+ \mathbb {E} \left[ \left\| \frac {1}{B} \sum_ {i = 1} ^ {B} X _ {i} \right\| _ {2} ^ {\alpha} \right] \leq \frac {2 \sigma^ {\alpha}}{B ^ {\alpha - 1}}.
383
+ $$
384
+
385
+ Next, we use clipped-SSTM for the function $\hat{f}_{\tau}$ with heavy-tailed gradient estimates. This algorithm was initially proposed for smooth functions in [35]. The scheme for proving convergence with high probability is also taken from that work; the only difference is the additional randomization introduced by the smoothing scheme.
386
+
387
+ # 6 Numerical experiments
388
+
389
+ We tested ZO-clipped-SSTM on the following problem
390
+
391
+ $$
392
+ \min _ {x \in \mathbb {R} ^ {d}} \| A x - b \| _ {2} + \langle \xi , x \rangle ,
393
+ $$
394
+
395
+ where $\xi$ is a random vector with independent components sampled from the symmetric Levy $\alpha$ -stable distribution with $\alpha = 3/2$ , $A \in \mathbb{R}^{m \times d}$ , $b \in \mathbb{R}^m$ (we used $d = 16$ and $m = 500$ ). For this problem, Assumption 1 holds with $\mu = 0$ and Assumption 2 holds with $\alpha = 3/2$ and $M_2(\xi) = \| A \|_2 + \| \xi \|_2$ .
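+
+ For reference, a sketch of this test problem ( $d = 16$ , $m = 500$ , symmetric Lévy $\alpha$ -stable noise with $\alpha = 3/2$ ) is given below; the random generation of $A$ and $b$ is our own choice since the text does not specify them, and scipy's `levy_stable` is used for the noise. The oracle `f(x, xi)` plugs directly into the two-point estimator sketched in Section 2.
+
+ ```python
+ import numpy as np
+ from scipy.stats import levy_stable
+
+ rng = np.random.default_rng(0)
+ d, m, alpha = 16, 500, 1.5
+ A = rng.standard_normal((m, d))                  # assumption: A, b not specified in the text
+ b = rng.standard_normal(m)
+
+ def sample_xi():
+     return levy_stable.rvs(alpha, 0.0, size=d)   # i.i.d. symmetric alpha-stable components
+
+ def f(x, xi):
+     return np.linalg.norm(A @ x - b) + xi @ x    # f(x, xi) = ||Ax - b||_2 + <xi, x>
+ ```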
396
+
397
+ We compared ZO-clipped-SSTM, proposed in this paper, with ZO-SGD and ZO-SSTM. For these algorithms, we grid-searched the batchsize over $B \in \{5, 10, 50, 100, 500\}$ and the stepsize over $\gamma \in \{1e - 3, 1e - 4, 1e - 5, 1e - 6\}$ . The best convergence was achieved with the following parameters:
398
+
399
+ - ZO-clipped-SSTM: $\gamma = 1e - 3$ , $B = 10$ , $\lambda = 0.01$
400
+ - ZO-SSTM: $\gamma = 1e - 5, B = 500$
401
+ - ZO-SGD: $\gamma = 1e - 4, B = 100$ , $\omega = 0.9$ , where $\omega$ is a heavy-ball momentum parameter.
402
+
403
+ ![](images/41ecf37ef01075b3a678823e36d8f28947b564de9e594dda9e972cb1c632416d.jpg)
404
+ Figure 1: Convergence of ZO-clipped-SSTM, ZO-SGD and ZO-SSTM in terms of a gap function w.r.t. the number of consumed samples.
405
+
406
+ The code is written in Python and is publicly available at https://github.com/ClippedStochasticMethods/Z0-clipped-SSTM. Figure 1 presents the comparison of convergence curves averaged over 15 launches with different noise realizations. In contrast to ZO-clipped-SSTM, the other two methods are unclipped and therefore fail to converge under heavy-tailed noise.
407
+
408
+ # 7 Conclusion and future directions
409
+
410
+ In this paper, we propose the first gradient-free algorithm, ZO-clipped-SSTM, to solve problem (1) under the heavy-tailed noise assumption. Using the restart technique, we extend this algorithm to strongly convex objectives; we refer to this extension as R-ZO-clipped-SSTM. The proposed algorithms are optimal with respect to oracle complexity (in terms of the dependence on $\varepsilon$ ), iteration complexity, and the maximal level of (possibly adversarial) noise. The algorithms can be adapted to composite and distributed minimization problems, saddle-point problems, and variational inequalities. Although the algorithms utilize two-point feedback, they can be modified to one-point feedback; we leave this for future work.
411
+
412
+ Moreover, we provide a theoretical basis demonstrating the benefits of the batching technique in the case of heavy-tailed stochastic noise and apply it to the methods of this paper. Thanks to this basis, it is possible to use batching in other methods under heavy-tailed noise, e.g., the first-order methods presented in [35, 29].
413
+
414
+ # 8 Acknowledgment
415
+
416
+ The work of Alexander Gasnikov, Aleksandr Lobanov, and Darina Dvinskikh was supported by a grant for research centers in the field of artificial intelligence, provided by the Analytical Center for the Government of the Russian Federation in accordance with the subsidy agreement (agreement identifier 000000D730321P5Q0002) and the agreement with the Ivannikov Institute for System Programming dated November 2, 2021, No. 70-2021-00142.
417
+
418
+ # References
419
+
420
+ [1] BA Alashkar, Alexander Vladimirovich Gasnikov, Darina Mikhailovna Dvinskikh, and Aleksandr Vladimirovich Lobanov. Gradient-free federated learning methods with $\ell_1$ and $\ell_2$-randomization for non-smooth convex stochastic optimization problems. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 63(9):1458-1512, 2023.
421
+ [2] Peter Bartlett, Varsha Dani, Thomas Hayes, Sham Kakade, Alexander Rakhlin, and Ambuj Tewari. High-probability regret bounds for bandit online linear optimization. In Proceedings of the 21st Annual Conference on Learning Theory-COLT 2008, pages 335-342. Omnipress, 2008.
422
+ [3] Anastasia Sergeevna Bayandina, Alexander V Gasnikov, and Anastasia A Lagunovskaya. Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises. Automation and Remote Control, 79:1399-1408, 2018.
423
+ [4] Lev Bogolubsky, Pavel Dvurechenskii, Alexander Gasnikov, Gleb Gusev, Yuri Nesterov, Andrei M Raigorodskii, Aleksey Tikhonov, and Maksim Zhukovskii. Learning supervised pagerank with gradient-based and gradient-free optimization methods. Advances in neural information processing systems, 29, 2016.
424
+ [5] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends® in Machine Learning, 5(1):1–122, 2012.
425
+ [6] Sébastien Bubeck, Qijia Jiang, Yin-Tat Lee, Yuanzhi Li, and Aaron Sidford. Complexity of highly parallel non-smooth convex optimization. Advances in neural information processing systems, 32, 2019.
426
+ [7] Yeshwanth Cherapanamjeri, Nilesh Tripuraneni, Peter Bartlett, and Michael Jordan. Optimal mean estimation without a variance. In Conference on Learning Theory, pages 356-357. PMLR, 2022.
427
+ [8] Damek Davis, Dmitriy Drusvyatskiy, Lin Xiao, and Junyu Zhang. From low probability to high confidence in stochastic convex optimization. Journal of Machine Learning Research, 22(49):1-38, 2021.
428
+ [9] Yuriy Dorn, Kornilov Nikita, Nikolay Kutuzov, Alexander Nazin, Eduard Gorbunov, and Alexander Gasnikov. Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits. arXiv preprint arXiv:2305.06743, 2023.
429
+ [10] John C Duchi, Michael I Jordan, Martin J Wainwright, and Andre Wibisono. Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE Transactions on Information Theory, 61(5):2788-2806, 2015.
430
+ [11] Darina Dvinskikh, Vladislav Tominin, Iaroslav Tominin, and Alexander Gasnikov. Noisy zeroth-order optimization for non-smooth saddle point problems. In Mathematical Optimization Theory and Operations Research: 21st International Conference, MOTOR 2022, Petrozavodsk, Russia, July 2–6, 2022, Proceedings, pages 18–33. Springer, 2022.
431
+ [12] Pavel Dvurechenskii, Darina Dvinskikh, Alexander Gasnikov, Cesar Uribe, and Angela Nedich. Decentralize and randomize: Faster algorithm for wasserstein barycenters. Advances in Neural Information Processing Systems, 31, 2018.
432
+ [13] Yu Ermoliev. Stochastic programming methods, 1976.
433
+ [14] Abraham D Flaxman, Adam Tauman Kalai, and H Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. arXiv preprint cs/0408007, 2004.
434
+ [15] Alexander Gasnikov, Darina Dvinskikh, Pavel Dvurechensky, Eduard Gorbunov, Aleksander Beznosikov, and Alexander Lobanov. Randomized gradient-free methods in convex optimization. arXiv preprint arXiv:2211.13566, 2022.
435
+
436
+ [16] Alexander Gasnikov and Yurii Nesterov. Universal fast gradient method for stochastic composite optimization problems. arXiv preprint arXiv:1604.05275, 2016.
437
+ [17] Alexander Gasnikov, Anton Novitskii, Vasilii Novitskii, Farshed Abdukhakimov, Dmitry Kamzolov, Aleksandr Beznosikov, Martin Takac, Pavel Dvurechensky, and Bin Gu. The power of first-order smooth optimization for black-box non-smooth problems. In International Conference on Machine Learning, pages 7241-7265. PMLR, 2022.
438
+ [18] Eduard Gorbunov, Marina Danilova, and Alexander Gasnikov. Stochastic optimization with heavy-tailed noise via accelerated gradient clipping. Advances in Neural Information Processing Systems, 33:15042-15053, 2020.
439
+ [19] Eduard Gorbunov, Marina Danilova, Innokentiy Shibaev, Pavel Dvurechensky, and Alexander Gasnikov. Near-optimal high probability complexity bounds for non-smooth stochastic optimization with heavy-tailed noise. arXiv preprint arXiv:2106.05958, 2021.
440
+ [20] Mert Gurbuzbalaban and Yuanhan Hu. Fractional moment-preserving initialization schemes for training deep neural networks. In International Conference on Artificial Intelligence and Statistics, pages 2233-2241. PMLR, 2021.
441
+ [21] Dusan Jakovetic, Dragana Bajovic, Anit Kumar Sahu, Soummya Kar, Nemanja Milosevich, and Dusan Stamenkovic. Nonlinear gradient mappings and stochastic optimization: A general framework with applications to heavy-tail noise. SIAM Journal on Optimization, 33(2):394-423, 2023.
442
+ [22] Nikita Kornilov, Alexander Gasnikov, Pavel Dvurechensky, and Darina Dvinskikh. Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact. Computational Management Science, 20(1):37, 2023.
443
+ [23] Michel Ledoux. The concentration of measure phenomenon. Vol. 89, Mathematical Surveys and Monographs. American Mathematical Society, Providence, Rhode Island, 2005.
444
+ [24] Zijian Liu and Zhengyuan Zhou. Stochastic nonsmooth convex optimization with heavy-tailed noises. arXiv preprint arXiv:2303.12277, 2023.
445
+ [25] Aleksandr Lobanov. Stochastic adversarial noise in the "black box" optimization problem. arXiv preprint arXiv:2304.07861, 2023.
446
+ [26] Aleksandr Viktorovich Nazin, AS Nemirovsky, Aleksandr Borisovich Tsybakov, and AB Juditsky. Algorithms of robust stochastic optimization based on mirror descent method. Automation and Remote Control, 80(9):1607-1627, 2019.
447
+ [27] Arkadj Semenović Nemirovskij and David Borisovich Yudin. Problem complexity and method efficiency in optimization. M.: Nauka, 1983.
448
+ [28] Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17:527-566, 2017.
449
+ [29] Ta Duy Nguyen, Alina Ene, and Huy L Nguyen. Improved convergence in high probability of clipped gradient methods with heavy tails. arXiv preprint arXiv:2304.01119, 2023.
450
+ [30] Ta Duy Nguyen, Thien Hang Nguyen, Alina Ene, and Huy Le Nguyen. High probability convergence of clipped-sgd under heavy-tailed noise. arXiv preprint arXiv:2302.05437, 2023.
451
+ [31] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International conference on machine learning, pages 1310-1318, 2013.
452
+ [32] Dmitry A Pasechnyuk, Aleksandr Lobanov, and Alexander Gasnikov. Upper bounds on maximum admissible noise in zeroth-order optimisation. arXiv preprint arXiv:2306.16371, 2023.
453
+ [33] Svetlozar Todorov Rachev. Handbook of heavy tailed distributions in finance: Handbooks in finance, Book 1. Elsevier, 2003.
454
+
455
+ [34] Andrej Risteski and Yuanzhi Li. Algorithms and matching lower bounds for approximately-convex optimization. Advances in Neural Information Processing Systems, 29, 2016.
456
+ [35] Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, and Peter Richtárik. High-probability bounds for stochastic optimization and variational inequalities: the case of unbounded variance. arXiv preprint arXiv:2302.00999, 2023.
457
+ [36] Ohad Shamir. An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. The Journal of Machine Learning Research, 18(1):1703-1713, 2017.
458
+ [37] Umut Simsekli, Levent Sagun, and Mert Gurbuzbalaban. A tail-index analysis of stochastic gradient noise in deep neural networks. In International Conference on Machine Learning, pages 5827-5837. PMLR, 2019.
459
+ [38] James C Spall. Introduction to stochastic search and optimization: estimation, simulation, and control. John Wiley & Sons, 2005.
460
+ [39] Nuri Mert Vural, Lu Yu, Krishna Balasubramanian, Stanislav Volgushev, and Murat A Erdogdu. Mirror descent strikes again: Optimal stochastic convex optimization under infinite noise variance. In Conference on Learning Theory, pages 65-102. PMLR, 2022.
461
+ [40] Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Simsekli, and Murat A Erdogdu. Convergence rates of stochastic gradient descent under infinite noise variance. Advances in Neural Information Processing Systems, 34:18866-18877, 2021.
462
+ [41] Jingzhao Zhang, Sai Praneeth Karimireddy, Andreas Veit, Seungyeon Kim, Sashank J Reddi, Sanjiv Kumar, and Suvrit Sra. Why are adaptive methods good for attention models? Advances in Neural Information Processing Systems, 33, 2020.
463
+ [42] Jiujia Zhang and Ashok Cutkosky. Parameter-free regret in high probability with heavy tails. Advances in Neural Information Processing Systems, 35:8000-8012, 2022.
464
+
465
+ # A Batching with unbounded variance
466
+
467
+ To prove the batching Lemma 4, we generalize Lemma 4.2 from [7] from i.i.d. random variables with zero mean to martingale difference sequences.
468
+
469
+ Lemma 5 Let $g(x) = \text{sign}(x)|x|^{\alpha - 1} = \nabla \left(\frac{|x|^{\alpha}}{\alpha}\right)$ for $1 < \alpha \leq 2$ . Then for any $h \geq 0$
470
+
471
+ $$
472
+ \max _ {x} \left[ g (x + h) - g (x) \right] = 2 ^ {2 - \alpha} h ^ {\alpha - 1} = 2 ^ {2 - \alpha} g (h).
473
+ $$
474
+
475
+ Proof. Consider $l(x) = g(x + h) - g(x)$ . We see that $l$ is differentiable everywhere except $x = 0$ and $x = -h$ . As long as $x \neq 0, -h$ , we have
476
+
477
+ $$
478
+ l ^ {\prime} (x) = g ^ {\prime} (x + h) - g ^ {\prime} (x) = (\alpha - 1) \left(| x + h | ^ {\alpha - 2} - | x | ^ {\alpha - 2}\right).
479
+ $$
480
+
481
+ Since $\alpha > 1$, $x = -\frac{h}{2}$ is a local maximum of $l(x)$. Furthermore, note that $l'(x) \geq 0$ for $x \in (-\infty, -\frac{h}{2}) \setminus \{-h\}$ and $l'(x) \leq 0$ for $x \in \left(-\frac{h}{2}, \infty\right) \setminus \{0\}$. Therefore, $x = -\frac{h}{2}$ is the global maximum.
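+
+ As a quick numerical sanity check of Lemma 5 (ours, not part of the original text), one can compare the grid maximum of $l(x) = g(x+h) - g(x)$ with the claimed value $2^{2-\alpha} h^{\alpha-1}$, attained at $x = -h/2$:
+
+ ```python
+ import numpy as np
+
+ alpha, h = 1.5, 2.0
+ g = lambda x: np.sign(x) * np.abs(x) ** (alpha - 1)
+
+ xs = np.linspace(-10.0, 10.0, 400_001)     # fine grid containing the maximizer x = -h/2
+ lhs = np.max(g(xs + h) - g(xs))            # numerical maximum of l(x)
+ rhs = 2 ** (2 - alpha) * h ** (alpha - 1)  # claimed maximum value
+ print(lhs, rhs)                            # both are approximately 2.0
+ ```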
482
+
483
+ Lemma 6 Let $x_{1}, \ldots, x_{B}$ be a one-dimensional martingale difference sequence, i.e., $\mathbb{E}[x_i | x_{i-1}, \ldots, x_1] = 0$ for $1 < i \leq B$, satisfying for $1 \leq \alpha \leq 2$
484
+
485
+ $$
486
+ \mathbb {E} [ | x _ {i} | ^ {\alpha} | x _ {i - 1}, \dots , x _ {1} ] \leq \sigma^ {\alpha}.
487
+ $$
488
+
489
+ We have:
490
+
491
+ $$
492
+ \mathbb {E} \left[ \left| \frac {1}{B} \sum_ {i = 1} ^ {B} x _ {i} \right| ^ {\alpha} \right] \leq \frac {2 \sigma^ {\alpha}}{B ^ {\alpha - 1}}.
493
+ $$
494
+
495
+ Proof. Throughout the proof we use the following notation:
496
+
497
+ $$
498
+ \mathbb {E} _ {< i} [ \cdot ] \stackrel {\mathrm {d e f}} {=} \mathbb {E} _ {x _ {i - 1}, \ldots , x _ {1}} [ \cdot ], \qquad \mathbb {E} _ {| < i} [ \cdot ] \stackrel {\mathrm {d e f}} {=} \mathbb {E} [ \cdot | x _ {i - 1}, \ldots , x _ {1} ].
499
+ $$
500
+
501
+ For $\alpha = 1$, the claim follows from the triangle inequality for $|\cdot|$. For $\alpha > 1$, we start by defining
502
+
503
+ $$
504
+ S _ {i} = \sum_ {j = 1} ^ {i} x _ {j}, \qquad S _ {0} = 0, \qquad f (x) = | x | ^ {\alpha}.
505
+ $$
506
+
507
+ Then we can calculate desired expectation as
508
+
509
+ $$
510
+ \begin{array}{l} \mathbb {E} [ f (S _ {B}) ] = \mathbb {E} \left[ \sum_ {i = 1} ^ {B} f (S _ {i}) - f (S _ {i - 1}) \right] = \sum_ {i = 1} ^ {B} \mathbb {E} [ f (S _ {i}) - f (S _ {i - 1}) ] \\ = \sum_ {i = 1} ^ {B} \mathbb {E} \left[ \int_ {S _ {i - 1}} ^ {S _ {i}} f ^ {\prime} (x) d x \right] \\ = \sum_ {i = 1} ^ {B} \mathbb {E} \left[ x _ {i} f ^ {\prime} \left(S _ {i - 1}\right) + \int_ {S _ {i - 1}} ^ {S _ {i}} f ^ {\prime} (x) - f ^ {\prime} \left(S _ {i - 1}\right) d x \right]. \tag {13} \\ \end{array}
511
+ $$
512
+
513
+ Since $\{x_i\}$ is a martingale difference sequence, $\mathbb{E}[x_i f'(S_{i - 1})] = \mathbb{E}_{< i}[\mathbb{E}_{| < i}[x_i f'(S_{i - 1})]] = 0$. From (13) and Lemma 5 (with $g(x) = f^{\prime}(x) / \alpha$) we obtain
514
+
515
+ $$
516
+ \begin{array}{l} \mathbb {E} [ f (S _ {B}) ] = \sum_ {i = 1} ^ {B} \mathbb {E} \left[ x _ {i} f ^ {\prime} (S _ {i - 1}) + \int_ {S _ {i - 1}} ^ {S _ {i}} f ^ {\prime} (x) - f ^ {\prime} (S _ {i - 1}) d x \right] \leq 2 ^ {1 - \alpha} \sum_ {i = 1} ^ {B} \mathbb {E} \left[ \int_ {0} ^ {| x _ {i} |} f ^ {\prime} (t / 2) d t \right] \\ = 2 ^ {1 - \alpha} \sum_ {i = 1} ^ {B} \mathbb {E} \left[ \int_ {0} ^ {| x _ {i} | / 2} 2 f ^ {\prime} (s) d s \right] = 2 ^ {2 - \alpha} \sum_ {i = 1} ^ {B} \mathbb {E} [ f (| x _ {i} | / 2) ] \\ = 2 ^ {2 - \alpha} \sum_ {i = 1} ^ {B} \mathbb {E} _ {< i} \left[ \mathbb {E} _ {| < i} [ f (| x _ {i} | / 2) ] \right] \leq 2 B \sigma^ {\alpha}. \tag {14} \\ \end{array}
517
+ $$
518
+
519
+ Now we are ready to prove the batching lemma for random variables with infinite variance.
520
+
521
+ Lemma 7 Let $X_{1},\ldots ,X_{B}$ be a $d$-dimensional martingale difference sequence, i.e., $\mathbb{E}[X_i|X_{i - 1},\dots ,X_1] = 0$ for $1 < i\leq B$, satisfying for $1\leq \alpha \leq 2$
522
+
523
+ $$
524
+ \mathbb {E} \left[ \| X _ {i} \| _ {2} ^ {\alpha} | X _ {i - 1}, \dots , X _ {1} \right] \leq \sigma^ {\alpha}.
525
+ $$
526
+
527
+ We have
528
+
529
+ $$
530
+ \mathbb {E} \left[ \left\| \frac {1}{B} \sum_ {i = 1} ^ {B} X _ {i} \right\| _ {2} ^ {\alpha} \right] \leq \frac {2 \sigma^ {\alpha}}{B ^ {\alpha - 1}}.
531
+ $$
532
+
533
+ Proof.
534
+
535
+ Let $g \sim \mathcal{N}(0, I)$ and $y_i \stackrel{\text{def}}{=} X_i^\top g$. First, we prove that $\mathbb{E}[|y_i|^{\alpha}] \leq \mathbb{E}[\|X_i\|_2^{\alpha}]$. Indeed, using conditional expectations we get
536
+
537
+ $$
538
+ \begin{array}{l} \mathbb {E} \left[ \left| y _ {i} \right| ^ {\alpha} \right] = \mathbb {E} \left[ \left| X _ {i} ^ {\top} g \right| ^ {\alpha} \right] = \mathbb {E} _ {X _ {i}} \left[ \mathbb {E} _ {g | X _ {i}} \left[ \left| X _ {i} ^ {\top} g \right| ^ {\alpha} \right] \right] \\ = \mathbb {E} _ {X _ {i}} \left[ \mathbb {E} _ {g | X _ {i}} \left[ \left(\left(X _ {i} ^ {\top} g\right) ^ {2}\right) ^ {\alpha / 2} \right] \right] \\ \end{array}
539
+ $$
540
+
541
+ $$
542
+ \begin{array}{l} \stackrel {\text {Jensen's ineq.}} {\leq} \mathbb {E} _ {X _ {i}} \left[ \left(\mathbb {E} _ {g | X _ {i}} \left[ (X _ {i} ^ {\top} g) ^ {2} \right]\right) ^ {\alpha / 2} \right] \\ = \mathbb {E} _ {X _ {i}} \left[ \left(\| X _ {i} \| _ {2} ^ {2}\right) ^ {\alpha / 2} \right] = \mathbb {E} [ \| X _ {i} \| _ {2} ^ {\alpha} ]. \tag {15} \\ \end{array}
543
+ $$
544
+
545
+ Next, considering that $X_{i}^{\top}g\sim \mathcal{N}(0,\| X_i\|_2^{2})$ and, thus, $\mathbb{E}_g|X_i^\top g| = \| X_i\|_2$, we bound the desired expectation as
546
+
547
+ $$
548
+ \mathbb {E} _ {X} \left[ \left\| \sum_ {i = 1} ^ {B} X _ {i} \right\| _ {2} ^ {\alpha} \right] \quad = \quad \mathbb {E} _ {X} \left[ \left(\mathbb {E} _ {g} \left| \sum_ {i = 1} ^ {B} X _ {i} ^ {\top} g \right|\right) ^ {\alpha} \right] \tag {16}
549
+ $$
550
+
551
+ $$
552
+ \stackrel {\text {Jensen's ineq.}} {\leq} \mathbb {E} _ {X, g} \left[ \left| \sum_ {i = 1} ^ {B} X _ {i} ^ {\top} g \right| ^ {\alpha} \right] = \mathbb {E} _ {X, g} \left[ \left| \sum_ {i = 1} ^ {B} y _ {i} \right| ^ {\alpha} \right]. \tag {17}
553
+ $$
554
+
555
+ Finally, we apply Lemma 6 to the sequence $y_{i}$, whose $\alpha$-th moment is bounded according to (15), and get
556
+
557
+ $$
558
+ \mathbb {E} _ {X} \left[ \left\| \sum_ {i = 1} ^ {B} X _ {i} \right\| _ {2} ^ {\alpha} \right] \leq \mathbb {E} _ {X, g} \left[ \left| \sum_ {i = 1} ^ {B} y _ {i} \right| ^ {\alpha} \right] \leq 2 \sigma^ {\alpha} B.
559
+ $$
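+
+ As an illustration of the batching bound (again, ours and not from the original text), the following Monte Carlo check uses i.i.d. symmetric Pareto-type variables with tail index $1.8$, so that the $\alpha = 3/2$ moment is finite while the variance is infinite:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+ alpha, tail, B, trials = 1.5, 1.8, 50, 50_000
+
+ # Symmetric Pareto-type samples: zero mean, E|x|^alpha = tail / (tail - alpha), infinite variance.
+ u = rng.random((trials, B))
+ x = rng.choice([-1.0, 1.0], size=(trials, B)) * u ** (-1.0 / tail)
+
+ sigma_alpha = tail / (tail - alpha)                # sigma^alpha = E|x_i|^alpha
+ lhs = np.mean(np.abs(x.mean(axis=1)) ** alpha)     # Monte Carlo estimate of E|batch mean|^alpha
+ rhs = 2.0 * sigma_alpha / B ** (alpha - 1)         # bound of Lemma 6
+ print(lhs, rhs)                                    # lhs comes out well below rhs
+ ```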
560
+
561
+ # B Smoothing scheme
562
+
563
+ Lemma 8 Assumption 2 implies that $f(x)$ is $M_2$-Lipschitz on $Q$.
564
+
565
+ Proof. For all $x_1, x_2 \in Q$
566
+
567
+ $$
568
+ \begin{array}{l} | f (x _ {1}) - f (x _ {2}) | \leq | \mathbb {E} _ {\xi} [ f (x _ {1}, \xi) - f (x _ {2}, \xi) ] | \leq \mathbb {E} _ {\xi} [ | f (x _ {1}, \xi) - f (x _ {2}, \xi) | ] \qquad (18) \\ \leq \mathbb {E} _ {\xi} \left[ M _ {2} (\xi) \right] \| x _ {1} - x _ {2} \| _ {2} \leq M _ {2} \| x _ {1} - x _ {2} \| _ {2}. \qquad (19) \\ \end{array}
569
+ $$
570
+
571
+ The following lemma gives a measure-concentration fact on the Euclidean unit sphere that is used in the next proof.
572
+
573
+ Lemma 9 Let $f(x)$ be an $M_2$-Lipschitz continuous function w.r.t. $\|\cdot\|_2$. If $\mathbf{e}$ is random and uniformly distributed on the Euclidean unit sphere and $\alpha \in (1,2]$, then
574
+
575
+ $$
576
+ \mathbb {E} _ {\mathbf {e}} \left[ (f (\mathbf {e}) - \mathbb {E} _ {\mathbf {e}} [ f (\mathbf {e}) ]) ^ {2 \alpha} \right] \leq \left(\frac {b M _ {2} ^ {2}}{d}\right) ^ {\alpha}, b = \frac {1}{\sqrt {2}}.
577
+ $$
578
+
579
+ Proof. A standard result on measure concentration on the Euclidean unit sphere implies that $\forall t > 0$
580
+
581
+ $$
582
+ \Pr \left(| f (\mathbf {e}) - \mathbb {E} [ f (\mathbf {e}) ] | > t\right) \leq 2 \exp \left(- b ^ {\prime} d t ^ {2} / M _ {2} ^ {2}\right), \quad b ^ {\prime} = 2 \tag {20}
583
+ $$
584
+
585
+ (see the proof of Proposition 2.10 and Corollary 2.6 in [23]). Therefore,
586
+
587
+ $$
588
+ \begin{array}{l} \mathbb {E} _ {\mathbf {e}} \left[ (f (\mathbf {e}) - \mathbb {E} _ {\mathbf {e}} [ f (\mathbf {e}) ]) ^ {2 \alpha} \right] = \int_ {t = 0} ^ {\infty} \Pr \left(| f (\mathbf {e}) - \mathbb {E} [ f (\mathbf {e}) ] | ^ {2 \alpha} > t\right) d t \\ = \int_ {t = 0} ^ {\infty} \Pr \left(| f (\mathbf {e}) - \mathbb {E} [ f (\mathbf {e}) ] | > t ^ {1 / 2 \alpha}\right) d t \\ \leq \int_ {t = 0} ^ {\infty} 2 \exp \left(- b ^ {\prime} d \cdot t ^ {1 / \alpha} / M _ {2} ^ {2}\right) d t \leq \left(\frac {b M _ {2} ^ {2}}{d}\right) ^ {\alpha}. \tag {21} \\ \end{array}
589
+ $$
590
+
591
+ Finally, we prove Lemma 3, which states that the batched gradient estimate from (5) has a bounded $\alpha$-th moment.
592
+
593
+ Proof of Lemma 3.
594
+
595
+ 1. We prove the unbiasedness directly for $g^B$. First, we note that the distribution of $\mathbf{e}$ is symmetric, and by definition (5) we get
596
+
597
+ $$
598
+ \begin{array}{l} \mathbb {E} _ {\xi , \mathbf {e}} [ g ^ {B} (x, \xi , \mathbf {e}) ] = \left(\frac {d}{2 B \tau}\right) \sum_ {i = 1} ^ {B} \mathbb {E} _ {\xi_ {i}, \mathbf {e} _ {i}} \left[ f (x + \tau \mathbf {e} _ {i}, \xi_ {i}) \mathbf {e} _ {i} - f (x - \tau \mathbf {e} _ {i}, \xi_ {i}) \mathbf {e} _ {i} \right] \\ = \frac {d}{B \tau} \sum_ {i = 1} ^ {B} \mathbb {E} _ {\mathbf {e} _ {i}} [ \mathbb {E} _ {\xi_ {i}} [ f (x + \tau \mathbf {e} _ {i}, \xi_ {i}) ] \mathbf {e} _ {i} ] \\ = \frac {d}{B \tau} \sum_ {i = 1} ^ {B} \mathbb {E} _ {\mathbf {e} _ {i}} [ f (x + \tau \mathbf {e} _ {i}) \mathbf {e} _ {i} ]. \tag {22} \\ \end{array}
599
+ $$
600
+
601
+ Using $\nabla \hat{f}_{\tau}(x) = \frac{d}{\tau}\mathbb{E}_{\mathbf{e}}[f(x + \tau \mathbf{e})\mathbf{e}]$ from Lemma 2, we obtain the desired result.
602
+
603
+ 2. By definition (4) of the gradient estimate $g$, we bound its $\alpha$-th moment as
604
+
605
+ $$
606
+ \begin{array}{l} \mathbb {E} _ {\xi , \mathbf {e}} [ \| g (x, \xi , \mathbf {e}) \| _ {2} ^ {\alpha} ] = \mathbb {E} _ {\xi , \mathbf {e}} \left[ \left\| \frac {d}{2 \tau} (f (x + \tau \mathbf {e}, \xi) - f (x - \tau \mathbf {e}, \xi)) \mathbf {e} \right\| _ {2} ^ {\alpha} \right] \\ = \left(\frac {d}{2 \tau}\right) ^ {\alpha} \mathbb {E} _ {\xi , \mathbf {e}} \left[ \| \mathbf {e} \| _ {2} ^ {\alpha} | f (x + \tau \mathbf {e}, \xi) - f (x - \tau \mathbf {e}, \xi) | ^ {\alpha} \right]. \tag {23} \\ \end{array}
607
+ $$
608
+
609
+ Since $\| \mathbf{e}\| _2 = 1$, we can omit it. Next, we add and subtract an arbitrary $\delta(\xi)$ in (23) and get
610
+
611
+ $$
612
+ \begin{array}{l} \mathbb {E} _ {\xi , \mathbf {e}} \left[ | f (x + \tau \mathbf {e}, \xi) - f (x - \tau \mathbf {e}, \xi) | ^ {\alpha} \right] \\ = \mathbb {E} _ {\xi , \mathbf {e}} [ | (f (x + \tau \mathbf {e}, \xi) - \delta) - (f (x - \tau \mathbf {e}, \xi) - \delta) | ^ {\alpha} ]. \\ \end{array}
613
+ $$
614
+
615
+ Using Jensen's inequality for $|\cdot|^{\alpha}$ we bound
616
+
617
+ $$
618
+ \begin{array}{l} \mathbb {E} _ {\xi , \mathbf {e}} [ | (f (x + \tau \mathbf {e}, \xi) - \delta) - (f (x - \tau \mathbf {e}, \xi) - \delta) | ^ {\alpha} ] \\ \leq 2 ^ {\alpha - 1} \mathbb {E} _ {\xi , \mathbf {e}} \left[ | f (x + \tau \mathbf {e}, \xi) - \delta | ^ {\alpha} \right] + 2 ^ {\alpha - 1} \mathbb {E} _ {\xi , \mathbf {e}} \left[ | f (x - \tau \mathbf {e}, \xi) - \delta | ^ {\alpha} \right]. \\ \end{array}
619
+ $$
620
+
621
+ Since the distribution of $\mathbf{e}$ is symmetric, the two terms coincide and can be combined:
622
+
623
+ $$
624
+ \begin{array}{l} 2 ^ {\alpha - 1} \mathbb {E} _ {\xi , \mathbf {e}} \left[ | f (x + \tau \mathbf {e}, \xi) - \delta | ^ {\alpha} \right] + 2 ^ {\alpha - 1} \mathbb {E} _ {\xi , \mathbf {e}} \left[ | f (x - \tau \mathbf {e}, \xi) - \delta | ^ {\alpha} \right] \\ \leq 2 ^ {\alpha} \mathbb {E} _ {\xi , \mathbf {e}} \left[ | f (x + \tau \mathbf {e}, \xi) - \delta | ^ {\alpha} \right]. \\ \end{array}
625
+ $$
626
+
627
+ Let $\delta (\xi) = \mathbb{E}_{\mathbf{e}}[f(x + \tau \mathbf{e},\xi)]$; then, by the Cauchy-Schwarz inequality and the properties of conditional expectation, we obtain
628
+
629
+ $$
630
+ \begin{array}{l} 2 ^ {\alpha} \mathbb {E} _ {\xi , \mathbf {e}} [ | f (x + \tau \mathbf {e}, \xi) - \delta | ^ {\alpha} ] = 2 ^ {\alpha} \mathbb {E} _ {\xi} [ \mathbb {E} _ {\mathbf {e}} [ | f (x + \tau \mathbf {e}, \xi) - \delta | ^ {\alpha} ] ] \\ \leq 2 ^ {\alpha} \mathbb {E} _ {\xi} \left[ \sqrt {\mathbb {E} _ {\mathbf {e}} \left[ | f (x + \tau \mathbf {e} , \xi) - \mathbb {E} _ {\mathbf {e}} [ f (x + \tau \mathbf {e} , \xi) ] | ^ {2 \alpha} \right]} \right]. \\ \end{array}
631
+ $$
632
+
633
+ Next, we use Lemma 9 for $f(x + \tau \mathbf{e},\xi)$, viewed as a function of $\mathbf{e}$ with fixed $\xi$, whose Lipschitz constant is $M_2(\xi)\tau$:
634
+
635
+ $$
636
+ \begin{array}{l} 2 ^ {\alpha} \mathbb {E} _ {\xi} \left[ \sqrt {\mathbb {E} _ {\mathbf {e}} \left[ | f (x + \tau \mathbf {e} , \xi) - \mathbb {E} _ {\mathbf {e}} [ f (x + \tau \mathbf {e} , \xi) ] | ^ {2 \alpha} \right]} \right] \leq 2 ^ {\alpha} \mathbb {E} _ {\xi} \left[ \sqrt {\left(\frac {2 ^ {- 1 / 2} \tau^ {2} M _ {2} ^ {2} (\xi)}{d}\right) ^ {\alpha}} \right] \\ = 2 ^ {\alpha} \left(\frac {\tau^ {2} 2 ^ {- 1 / 2}}{d}\right) ^ {\alpha / 2} \mathbb {E} _ {\xi} \left[ M _ {2} ^ {\alpha} (\xi) \right] \leq 2 ^ {\alpha} \left(\sqrt {\frac {2 ^ {- 1 / 2}}{d}} M _ {2} \tau\right) ^ {\alpha}. \\ \end{array}
637
+ $$
638
+
639
+ Finally, we get the desired bound on the gradient estimate:
640
+
641
+ $$
642
+ \mathbb {E} _ {\xi , \mathbf {e}} [ \| g (x, \xi , \mathbf {e}) \| _ {2} ^ {\alpha} ] \leq \left(\frac {\sqrt {d}}{2 ^ {1 / 4}} M _ {2}\right) ^ {\alpha}. \tag {24}
643
+ $$
644
+
645
+ Now we apply Jensen's inequality to obtain
646
+
647
+ $$
648
+ \mathbb {E} _ {\xi , \mathbf {e}} [ \| g (x, \xi , \mathbf {e}) - \mathbb {E} _ {\xi , \mathbf {e}} [ g (x, \xi , \mathbf {e}) ] \| _ {2} ^ {\alpha} ] \leq 2 ^ {\alpha - 1} \left(\mathbb {E} _ {\xi , \mathbf {e}} [ \| g (x, \xi , \mathbf {e}) \| _ {2} ^ {\alpha} ] + \| \mathbb {E} _ {\xi , \mathbf {e}} [ g (x, \xi , \mathbf {e}) ] \| _ {2} ^ {\alpha} \right) \leq 2 ^ {\alpha} \mathbb {E} _ {\xi , \mathbf {e}} [ \| g (x, \xi , \mathbf {e}) \| _ {2} ^ {\alpha} ] \tag {25}
649
+ $$
650
+
651
+ and get the desired result.
652
+
653
+ For the batched gradient $g^{B}$, we use the batching Lemma 7 together with the estimate (25).
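+
+ As a small illustration of the unbiasedness statement in part 1 (a sketch of ours, not from the paper): for a linear $f(x,\xi) = \langle c + \xi, x\rangle$ the smoothed gradient equals $c$ for every $\tau$, and averaging the two-point estimate over many draws of $\mathbf{e}$ and $\xi$ recovers it:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(2)
+ d, tau, n = 8, 0.1, 50_000
+ c = rng.standard_normal(d)
+ x = rng.standard_normal(d)
+
+ acc = np.zeros(d)
+ for _ in range(n):
+     e = rng.standard_normal(d)
+     e /= np.linalg.norm(e)                   # e uniform on the unit sphere
+     xi = rng.standard_normal(d)              # zero-mean noise; any integrable model works here
+     f = lambda y: (c + xi) @ y               # f(y, xi) = <c + xi, y>
+     acc += (f(x + tau * e) - f(x - tau * e)) * e
+ print(np.linalg.norm(d * acc / (2 * tau * n) - c))   # small: the average approaches grad f_tau(x) = c
+ ```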
654
+
655
+ # C Missing Proofs for ZO-clipped-SSTM and R-ZO-clipped-SSTM
656
+
657
+ In this section, we provide the complete formulation of the main results for ZO-clipped-SSTM and R-ZO-clipped-SSTM and the missing proofs.
658
+
659
+ **Minimization on a subset $Q$.** In order to work with subsets $Q \subseteq \mathbb{R}^d$, we must impose one more condition on $\hat{f}_{\tau}(x)$.
660
+
661
+ Assumption 5 We assume that there exist a convex set $Q \subseteq \mathbb{R}^d$ and constants $\tau, L > 0$ such that for all $x, y \in Q$
662
+
663
+ $$
664
+ \left\| \nabla \hat {f} _ {\tau} (x) \right\| _ {2} ^ {2} \leq 2 L \left(\hat {f} _ {\tau} (x) - \hat {f} _ {\tau} ^ {*}\right), \tag {26}
665
+ $$
666
+
667
+ where $\hat{f}_{\tau}^{*} = \inf_{x\in Q}\hat{f}_{\tau}(x) > - \infty$.
668
+
669
+ When $Q = \mathbb{R}^d$, (26) follows from Lemma 2 as well. In the general case, however, this is not true. In [35], Section "Useful facts", the authors show that, in the worst case, to have (26) on a set $Q$ one needs to assume smoothness on a slightly larger set.
670
+
671
+ Thus, in the full versions of the theorems, in which we require a much smaller $Q$, we also require that all three Assumptions 1, 2, and 5 hold.
672
+
673
+ # C.1 Convex Functions
674
+
675
+ We start with the following lemma, which is a special case of Lemma 6 from [19]. This result can be seen as the "optimization" part of the analysis of clipped-SSTM: the proof follows the same steps as the analysis of the deterministic Similar Triangles Method [16, 12] and separates the stochasticity from the deterministic part of the method.
676
+
677
+ Note that in the full version of Theorem 1 we require Assumptions 1 and 2 to hold only on $Q = B_{3R}(x^{*})$; however, we also need the additional smoothness Assumption 5.
678
+
679
+ Theorem 3 (Full version of Theorem 1) Let Assumptions 1, 2, and 5 with $\mu = 0$ hold on $Q = B_{3R}(x^{*})$, where $R \geq \| x^0 - x^*\|$, and
680
+
681
+ $$
682
+ a \geq \max \left\{4 8 6 0 0 \ln^ {2} \frac {4 K}{\beta}, \frac {1 8 0 0 \sigma (K + 1) K ^ {\frac {1}{\alpha}} \ln^ {\frac {\alpha - 1}{\alpha}} \frac {4 K}{\beta}}{B ^ {\frac {\alpha - 1}{\alpha}} L R} \right\}, \tag {27}
683
+ $$
684
+
685
+ $$
686
+ \lambda_ {k} = \frac {R}{3 0 \alpha_ {k + 1} \ln \frac {4 K}{\beta}}, \tag {28}
687
+ $$
688
+
689
+ $$
690
+ L = \frac {M _ {2} \sqrt {d}}{\tau}, \tag {29}
691
+ $$
692
+
693
+ for some $K > 0$ and $\beta \in (0,1]$ such that $\ln \frac{4K}{\beta} \geq 1$ . Then, after $K$ iterations of ZO-clipped-SSTM the iterates with probability at least $1 - \beta$ satisfy
694
+
695
+ $$
696
+ f \left(y ^ {K}\right) - f \left(x ^ {*}\right) \leq 2 M _ {2} \tau + \frac {6 a L R ^ {2}}{K (K + 3)} \quad \text {and} \quad \left\{x ^ {k} \right\} _ {k = 0} ^ {K + 1}, \left\{z ^ {k} \right\} _ {k = 0} ^ {K}, \left\{y ^ {k} \right\} _ {k = 0} ^ {K} \subseteq B _ {2 R} \left(x ^ {*}\right). \tag {30}
697
+ $$
698
+
699
+ In particular, when the parameter $a$ equals the maximum in (27), the iterates produced by ZO-clipped-SSTM after $K$ iterations with probability at least $1 - \beta$ satisfy
700
+
701
+ $$
702
+ f \left(y ^ {K}\right) - f \left(x ^ {*}\right) \leq 2 M _ {2} \tau + \mathcal {O} \left(\max \left\{\frac {L R ^ {2} \ln^ {2} \frac {K}{\beta}}{K ^ {2}}, \frac {\sigma R \ln^ {\frac {\alpha - 1}{\alpha}} \frac {K}{\beta}}{(B K) ^ {\frac {\alpha - 1}{\alpha}}} \right\}\right), \tag {31}
703
+ $$
704
+
705
+ meaning that to achieve $f(y^{K}) - f(x^{*}) \leq \varepsilon$ with probability at least $1 - \beta$ with $\tau = \frac{\varepsilon}{4M_2}$, ZO-clipped-SSTM requires
706
+
707
+ $$
708
+ K = \mathcal {O} \left(\max \left\{\sqrt {\frac {M _ {2} ^ {2} \sqrt {d} R ^ {2}}{\varepsilon^ {2}}} \ln \frac {M _ {2} ^ {2} \sqrt {d} R ^ {2}}{\varepsilon^ {2} \beta}, \frac {1}{B} \left(\frac {\sigma R}{\varepsilon}\right) ^ {\frac {\alpha}{\alpha - 1}} \ln \left(\frac {1}{B \beta} \left(\frac {\sigma R}{\varepsilon}\right) ^ {\frac {\alpha}{\alpha - 1}}\right)\right\}\right) \quad \text {iterations.} \tag {32}
709
+ $$
710
+
711
+ In the case when the second term in the maximum in (31) is greater, the total number of oracle calls is
712
+
713
+ $$
714
+ K \cdot B = \mathcal {O} \left(\left(\frac {\sigma R}{\varepsilon}\right) ^ {\frac {\alpha}{\alpha - 1}} \ln \left(\frac {1}{\beta} \left(\frac {\sigma R}{\varepsilon}\right) ^ {\frac {\alpha}{\alpha - 1}}\right)\right).
715
+ $$
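+
+ To illustrate the interplay between the batch size, the iteration count (32), and the total number of oracle calls, the following small computation (ours; all absolute constants and logarithmic factors are dropped and the numbers are purely hypothetical) shows that batching reduces the number of iterations proportionally while the total oracle complexity stays the same, until the deterministic term starts to dominate:
+
+ ```python
+ # K ~ max(K_det, K_stat / B) up to constants and logs; total oracle calls ~ B * K.
+ alpha, sigma, R, eps = 1.5, 1.0, 1.0, 1e-2
+ K_stat = (sigma * R / eps) ** (alpha / (alpha - 1))   # statistical term, here (1/eps)^3 = 1e6
+ K_det = 1e4                                           # hypothetical deterministic (smooth) term
+ for B in (1, 10, 100, 1000):
+     K = max(K_det, K_stat / B)
+     print(B, int(K), int(B * K))  # once K_stat / B <= K_det, extra batching only wastes oracle calls
+ ```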
716
+
717
+ Proof. The proof is based on the proof of Theorem F.2 from [35]. We apply the first-order algorithm clipped-SSTM to the $\frac{M_2\sqrt{d}}{\tau}$-smooth function $\hat{f}_{\tau}$ with the unbiased gradient estimate $g^{B}$, whose $\alpha$-th moment is bounded by $\frac{2\sigma^{\alpha}}{B^{\alpha - 1}}$. The additional randomization caused by smoothing does not affect the proof of the original theorem.
718
+
719
+ According to it, after $K$ iterations we have that, with probability at least $1 - \beta$,
720
+
721
+ $$
722
+ \hat {f} _ {\tau} (y ^ {K}) - \hat {f} _ {\tau} (x ^ {*}) \leq \frac {6 a L R ^ {2}}{K (K + 3)}
723
+ $$
724
+
725
+ and $\{x^k\}_{k = 0}^{K + 1},\{z^k\}_{k = 0}^K,\{y^k\}_{k = 0}^K\subseteq B_{2R}(x^*)$.
726
+
727
+ Considering the approximation properties of $\hat{f}_{\tau}$ from Lemma 2, we get
728
+
729
+ $$
730
+ f (y ^ {K}) - f (x ^ {*}) \leq 2 M _ {2} \tau + \frac {6 a L R ^ {2}}{K (K + 3)}.
731
+ $$
732
+
733
+ Finally, if
734
+
735
+ $$
736
+ a = \max \left\{4 8 6 0 0 \ln^ {2} \frac {4 K}{\beta}, \frac {1 8 0 0 \sigma (K + 1) K ^ {\frac {1}{\alpha}} \ln^ {\frac {\alpha - 1}{\alpha}} \frac {4 K}{\beta}}{B ^ {\frac {\alpha - 1}{\alpha}} L R} \right\},
737
+ $$
738
+
739
+ then with probability at least $1 - \beta$
740
+
741
+ $$
742
+ \begin{array}{l} f (y ^ {K}) - f (x ^ {*}) \leq 2 M _ {2} \tau + \frac {6 a L R ^ {2}}{K (K + 3)} \\ = 2 M _ {2} \tau + \max \left\{\frac {2 9 1 6 0 0 L R ^ {2} \ln^ {2} \frac {4 K}{\beta}}{K (K + 3)}, \frac {1 0 8 0 0 \sigma R (K + 1) K ^ {\frac {1}{\alpha}} \ln^ {\frac {\alpha - 1}{\alpha}} \frac {4 K}{\beta}}{K (K + 3) B ^ {\frac {\alpha - 1}{\alpha}}} \right\} \\ = 2 M _ {2} \tau + \mathcal {O} \left(\max \left\{\frac {L R ^ {2} \ln^ {2} \frac {K}{\beta}}{K ^ {2}}, \frac {\sigma R \ln^ {\frac {\alpha - 1}{\alpha}} \frac {K}{\beta}}{(B K) ^ {\frac {\alpha - 1}{\alpha}}} \right\}\right), \\ \end{array}
743
+ $$
744
+
745
+ where $L = \frac{M_2\sqrt{d}}{\tau}$ by Lemma 2.
746
+
747
+ To get $f(y^{K}) - f(x^{*}) \leq \varepsilon$ with probability at least $1 - \beta$ it is sufficient to choose $\tau = \frac{\varepsilon}{4M_2}$ and $K$ such that both terms in the maximum above are $\mathcal{O}(\varepsilon)$ . This leads to
748
+
749
+ $$
750
+ K = \mathcal {O} \left(\max \left\{\sqrt {\frac {M _ {2} ^ {2} \sqrt {d} R ^ {2}}{\varepsilon^ {2}}} \ln \frac {M _ {2} ^ {2} \sqrt {d} R ^ {2}}{\varepsilon^ {2} \beta}, \frac {1}{B} \left(\frac {\sigma R}{\varepsilon}\right) ^ {\frac {\alpha}{\alpha - 1}} \ln \left(\frac {1}{B \beta} \left(\frac {\sigma R}{\varepsilon}\right) ^ {\frac {\alpha}{\alpha - 1}}\right)\right\}\right)
751
+ $$
752
+
753
+ which concludes the proof.
754
+
755
+ # C.2 Strongly Convex Functions
756
+
757
+ In the strongly convex case, we consider the restarted version of ZO-clipped-SSTM (R-ZO-clipped-SSTM). The main result is summarized below.
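+
+ The restart logic can be summarized by the following sketch (ours; the parameter choices follow (35) below, and the inner ZO-clipped-SSTM run is treated as a black-box callable):
+
+ ```python
+ import math
+
+ def r_zo_clipped_sstm(inner_solver, x0, R, mu, eps):
+     # Restart scheme: stage t targets accuracy eps_t = mu * R_{t-1}^2 / 4,
+     # and the squared distance estimate is halved after every stage.
+     N = math.ceil(math.log2(mu * R ** 2 / (2 * eps)))   # number of restarts, as in (35)
+     x, R_prev = x0, R
+     for _ in range(max(N, 1)):
+         eps_t = mu * R_prev ** 2 / 4
+         # inner_solver(x, R, eps) is assumed to return an eps-solution of the stage problem
+         x = inner_solver(x, R_prev, eps_t)
+         R_prev /= math.sqrt(2.0)
+     return x
+ ```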
758
+
759
+ Note that in the full version of Theorem 2 we require Assumptions 1 and 2 to hold only on $Q = B_{3R}(x^{*})$; however, we also need the additional smoothness Assumption 5.
760
+
761
+ Theorem 4 (Full version of Theorem 2) Let Assumptions 1, 2, and 5 with $\mu > 0$ hold on $Q = B_{3R}(x^{*})$, where $R \geq \| x^0 - x^*\|$, and let R-ZO-clipped-SSTM run ZO-clipped-SSTM $N$ times. Let
762
+
763
+ $$
764
+ L _ {t} = \frac {\sqrt {d} M _ {2}}{\tau_ {t}}, \quad \tau_ {t} = \frac {\varepsilon_ {t}}{M _ {2}}, \tag {33}
765
+ $$
766
+
767
+ $$
768
+ K _ {t} = \left\lceil \max \left\{1080 \sqrt {\frac {L _ {t} R _ {t - 1} ^ {2}}{\varepsilon_ {t}}} \ln \frac {2160 \sqrt {L _ {t} R _ {t - 1} ^ {2}} N}{\sqrt {\varepsilon_ {t}} \beta}, \frac {2}{B _ {t}} \left(\frac {10800 \sigma R _ {t - 1}}{\varepsilon_ {t}}\right) ^ {\frac {\alpha}{\alpha - 1}} \ln \left(\frac {4 N}{B _ {t} \beta} \left(\frac {5400 \sigma R _ {t - 1}}{\varepsilon_ {t}}\right) ^ {\frac {\alpha}{\alpha - 1}}\right)\right\} \right\rceil , \tag {34}
769
+ $$
770
+
771
+ $$
772
+ \varepsilon_ {t} = \frac {\mu R _ {t - 1} ^ {2}}{4}, \quad R _ {t - 1} = \frac {R}{2 ^ {(t - 1) / 2}}, \quad N = \left\lceil \log_ {2} \frac {\mu R ^ {2}}{2 \varepsilon} \right\rceil , \quad \ln \frac {4 K _ {t} N}{\beta} \geq 1, \tag {35}
773
+ $$
774
+
775
+ $$
776
+ a _ {t} = \max \left\{4 8 6 0 0 \ln^ {2} \frac {4 K _ {t} N}{\beta}, \frac {1 8 0 0 \sigma \left(K _ {t} + 1\right) K _ {t} ^ {\frac {1}{\alpha}} \ln^ {\frac {\alpha - 1}{\alpha}} \frac {4 K _ {t} N}{\beta}}{B _ {t} ^ {\frac {\alpha - 1}{\alpha}} L _ {t} R _ {t}} \right\}, \tag {36}
777
+ $$
778
+
779
+ $$
780
+ \lambda_ {k} ^ {t} = \frac {R _ {t}}{3 0 \alpha_ {k + 1} ^ {t} \ln \frac {4 K _ {t} N}{\beta}} \tag {37}
781
+ $$
782
+
783
+ for $t = 1,\dots ,N$. Then, to guarantee $f(\hat{x}^{N}) - f(x^{*})\leq \varepsilon$ with probability at least $1 - \beta$, R-ZO-clipped-SSTM requires
784
+
785
+ $$
786
+ \mathcal {O} \left(\max \left\{\sqrt {\frac {M _ {2} ^ {2} \sqrt {d}}{\varepsilon \mu}} \ln \left(\frac {\mu R ^ {2}}{\varepsilon}\right) \ln \left(\frac {M _ {2} d ^ {\frac {1}{4}}}{\sqrt {\mu \varepsilon} \beta} \ln \left(\frac {\mu R ^ {2}}{\varepsilon}\right)\right), \left(\frac {\sigma^ {2}}{\mu \varepsilon}\right) ^ {\frac {\alpha}{2 (\alpha - 1)}} \ln \left(\frac {1}{\beta} \left(\frac {\sigma^ {2}}{\mu \varepsilon}\right) ^ {\frac {\alpha}{2 (\alpha - 1)}} \ln \left(\frac {\mu R ^ {2}}{\varepsilon}\right)\right) \right\}\right) \tag {38}
787
+ $$
788
+
789
+ oracle calls. Moreover, with probability at least $1 - \beta$ the iterates of R-ZO-clipped-SSTM at stage $t$ stay in the ball $B_{2R_{t-1}}(x^*)$.
790
+
791
+ Proof. The proof repeats the proof of Theorem F.3 from [35], in which the authors prove the convergence of restarted clipped-SSTM. In our case, it suffices to replace clipped-SSTM with ZO-clipped-SSTM and plug in the results of Theorem 3 in order to guarantee an $\varepsilon$-solution after $\sum_{t=1}^{N} K_t$ successive iterations.
792
+
793
+ It remains to calculate the overall number of oracle calls during all runs of ZO-clipped-SSTM. We have
794
+
795
+ $$
796
+ \begin{array}{l} \sum_ {t = 1} ^ {N} B _ {t} K _ {t} = \\ = \mathcal {O} \left(\sum_ {t = 1} ^ {N} \max \left\{\sqrt {\frac {M _ {2} ^ {2} \sqrt {d} R _ {t - 1} ^ {2}}{\varepsilon_ {t} ^ {2}}} \ln \left(\frac {\sqrt {M _ {2} ^ {2} \sqrt {d} R _ {t - 1} ^ {2}} N}{\varepsilon_ {t} \beta}\right), \frac {1}{B} \left(\frac {\sigma R _ {t - 1}}{\varepsilon_ {t}}\right) ^ {\frac {\alpha}{\alpha - 1}} \ln \left(\frac {N}{B \beta} \left(\frac {\sigma R _ {t - 1}}{\varepsilon_ {t}}\right) ^ {\frac {\alpha}{\alpha - 1}}\right) \right\}\right) \\ = \mathcal {O} \left(\sum_ {t = 1} ^ {N} \max \left\{\sqrt {\frac {M _ {2} ^ {2} \sqrt {d}}{R _ {t - 1} ^ {2} \mu^ {2}}} \ln \left(\frac {\sqrt {M _ {2} ^ {2} \sqrt {d}} N}{\mu R _ {t - 1} \beta}\right), \left(\frac {\sigma}{\mu R _ {t - 1}}\right) ^ {\frac {\alpha}{\alpha - 1}} \ln \left(\frac {N}{\beta} \left(\frac {\sigma}{\mu R _ {t - 1}}\right) ^ {\frac {\alpha}{\alpha - 1}}\right) \right\}\right) \\ = \mathcal {O} \left(\max \left\{\sum_ {t = 1} ^ {N} 2 ^ {t / 2} \sqrt {\frac {M _ {2} ^ {2} \sqrt {d}}{R ^ {2} \mu^ {2}}} \ln \left(2 ^ {t / 2} \frac {\sqrt {M _ {2} ^ {2} \sqrt {d}} N}{\mu R \beta}\right), \sum_ {t = 1} ^ {N} \left(\frac {\sigma \cdot 2 ^ {t / 2}}{\mu R}\right) ^ {\frac {\alpha}{\alpha - 1}} \ln \left(\frac {N}{\beta} \left(\frac {\sigma \cdot 2 ^ {t / 2}}{\mu R}\right) ^ {\frac {\alpha}{\alpha - 1}}\right) \right\}\right) \\ = \mathcal {O} \left(\max \left\{\sqrt {\frac {M _ {2} ^ {2} \sqrt {d}}{R ^ {2} \mu^ {2}}} 2 ^ {N / 2} \ln \left(2 ^ {N / 2} \frac {\sqrt {M _ {2} ^ {2} \sqrt {d}}}{\mu R \beta} \ln \left(\frac {\mu R ^ {2}}{\varepsilon}\right)\right), \left(\frac {\sigma}{\mu R}\right) ^ {\frac {\alpha}{\alpha - 1}} \ln \left(\frac {N}{\beta} \left(\frac {\sigma \cdot 2 ^ {N / 2}}{\mu R}\right) ^ {\frac {\alpha}{\alpha - 1}}\right) \sum_ {t = 1} ^ {N} 2 ^ {\frac {\alpha t}{2 (\alpha - 1)}} \right\}\right) \\ = \mathcal {O} \left(\max \left\{\sqrt {\frac {M _ {2} ^ {2} \sqrt {d}}{\varepsilon \mu}} \ln \left(\frac {\sqrt {M _ {2} ^ {2} \sqrt {d}}}{\sqrt {\varepsilon \mu} \beta} \ln \left(\frac {\mu R ^ {2}}{\varepsilon}\right)\right), \left(\frac {\sigma}{\mu R}\right) ^ {\frac {\alpha}{\alpha - 1}} \ln \left(\frac {N}{\beta} \left(\frac {\sigma}{\mu R}\right) ^ {\frac {\alpha}{\alpha - 1}} \cdot 2 ^ {\frac {\alpha}{2 (\alpha - 1)}}\right) 2 ^ {\frac {\alpha N}{2 (\alpha - 1)}} \right\}\right) \\ = \mathcal {O} \left(\max \left\{\sqrt {\frac {M _ {2} ^ {2} \sqrt {d}}{\varepsilon \mu}} \ln \left(\frac {\sqrt {M _ {2} ^ {2} \sqrt {d}}}{\sqrt {\varepsilon \mu} \beta} \ln \left(\frac {\mu R ^ {2}}{\varepsilon}\right)\right), \left(\frac {\sigma^ {2}}{\mu \varepsilon}\right) ^ {\frac {\alpha}{2 (\alpha - 1)}} \ln \left(\frac {1}{\beta} \left(\frac {\sigma^ {2}}{\mu \varepsilon}\right) ^ {\frac {\alpha}{2 (\alpha - 1)}} \ln \left(\frac {\mu R ^ {2}}{\varepsilon}\right)\right) \right\}\right), \\ \end{array}
797
+ $$
798
+
799
+ which concludes the proof.
acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4a6497d58f13ba24130b2b4faa644af19dde801bd61b102ef8e42b2ae7ccbf8
3
+ size 817443
acceleratedzerothordermethodfornonsmoothstochasticconvexoptimizationproblemwithinfinitevariance/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a922c7d5ca0cf0186e50b1e044902454e2f000e74c4831eb724d5344bc8edfbb
3
+ size 873854
acceleratingexplorationwithunlabeledpriordata/ea7177fa-8e77-4951-ba37-20328d4330c6_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9402644345f50b185050ba89f173e84e8bbe5bed3f9cc4a2b82dcda73bc4a79b
3
+ size 132469
acceleratingexplorationwithunlabeledpriordata/ea7177fa-8e77-4951-ba37-20328d4330c6_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:15e71416d5262c65c4c4bc1fe4bb1213d865ccb96634f81e10a4a64785dbbc77
3
+ size 158700