Title: BAFFLE: A Baseline of Backpropagation-Free Federated Learning

URL Source: https://arxiv.org/html/2301.12195



License: arXiv.org perpetual non-exclusive license. arXiv:2301.12195v3 [cs.LG] 21 Jul 2024

BAFFLE: A Baseline of Backpropagation-Free Federated Learning

Haozhe Feng, Tianyu Pang, Chao Du, Wei Chen, Shuicheng Yan, Min Lin

Abstract

Federated learning (FL) is a general principle for decentralized clients to train a server model collectively without sharing local data. FL is a promising framework with practical applications, but its standard training paradigm requires the clients to backpropagate through the model to compute gradients. Since these clients are typically edge devices and not fully trusted, executing backpropagation on them incurs computational and storage overhead as well as white-box vulnerability. In light of this, we develop backpropagation-free federated learning, dubbed BAFFLE, in which backpropagation is replaced by multiple forward processes to estimate gradients. BAFFLE is 1) memory-efficient and easily fits uploading bandwidth; 2) compatible with inference-only hardware optimization and model quantization or pruning; and 3) well-suited to trusted execution environments, because the clients in BAFFLE only execute forward propagation and return a set of scalars to the server. Empirically we use BAFFLE to train deep models from scratch or to finetune pretrained models, achieving acceptable results.

Keywords: Federated Learning, Finite Difference, Backpropagation-Free Learning

1 Introduction

Federated learning (FL) allows decentralized clients to collaboratively train a server model [62]. In each training round, the selected clients compute model gradients or updates on their local private datasets, without explicitly sharing sample points with the server. While FL describes a promising blueprint and has several applications [98, 32, 51], the mainstream FL training paradigm is still gradient-based and requires the clients to locally execute backpropagation, which leads to two practical limitations:

(i) Overhead for edge devices. The clients in FL are usually edge devices, such as mobile phones and IoT sensors, whose hardware is primarily optimized for inference-only purposes [79, 88], rather than for backpropagation. Due to the limited resources, computationally affordable models running on edge devices are typically quantized and pruned [93], making exact backpropagation difficult. In addition, standard implementations of backpropagation rely on either forward-mode or reverse-mode auto-differentiation in contemporary machine learning packages [14, 73], which increases storage requirements.

(ii) White-box vulnerability. To facilitate gradient computation, the server regularly distributes its model status to the clients, but this white-box exposure of the model renders the server vulnerable to, e.g., poisoning or inversion attacks from malicious clients [80, 97, 103, 25]. Consequently, recent attempts exploit trusted execution environments (TEEs) in FL, which can isolate the model status within a black-box secure area and significantly reduce the success rate of malicious evasion [19, 64, 65]. However, TEEs are highly memory-constrained [87], while backpropagation is memory-intensive because it must store intermediate states.

While numerous solutions have been proposed to alleviate these limitations (related work discussed in Section 5), we raise an essential question: how can we perform backpropagation-free FL? Inspired by the literature on zero-order optimization [82], we intend to substitute backpropagation with multiple forward (inference) processes to estimate the gradients. Technically speaking, we propose the framework of BAckpropagation-Free Federated LEarning (BAFFLE). As illustrated in Figure 1, BAFFLE consists of three conceptual steps: (1) each client locally perturbs the model parameters $2K$ times as $\mathbf{W} \pm \boldsymbol{\delta}_k$ (the server sends a random seed to clients for generating $\{\boldsymbol{\delta}_k\}_{k=1}^{K}$); (2) each client executes forward processes on the perturbed models using its private dataset $\mathbb{D}_c$ and obtains $K$ loss differences $\{\Delta\mathcal{L}(\mathbf{W}, \boldsymbol{\delta}_k; \mathbb{D}_c)\}_{k=1}^{K}$; (3) the server aggregates the loss differences to estimate the gradients.

Figure 1: A sketch map of BAFFLE. In addition to the global parameter update $\Delta\mathbf{W}$, each client downloads random seeds to locally generate perturbations $\pm\boldsymbol{\delta}_{1:K}$ and performs $2K$ forward propagations (i.e., inferences) to compute loss differences. The server can recover these perturbations using the same random seeds and obtain $\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta}_k)$ by secure aggregation. Each loss difference $\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta}_k;\mathbb{D}_c)$ is a floating-point number, so $K$ can easily be adjusted to fit the uploading bandwidth.

BAFFLE's defining characteristic is that it only utilizes forward propagation, which is memory-efficient and does not require auto-differentiation. It is well-adapted to model quantization and pruning as well as inference-only hardware optimization on edge devices. Compared to backpropagation, the computation graph of BAFFLE is more easily optimized, for example by slicing it into per-layer calculations [44]. Since each loss difference $\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta}_k;\mathbb{D}_c)$ is a scalar, BAFFLE can easily accommodate the uploading bandwidth of clients by adjusting the value of $K$, as opposed to using, e.g., gradient compression [84]. BAFFLE is also compatible with recent advances in inference approaches for TEEs [85, 87], providing an efficient way to combine TEEs with FL and prevent white-box evasion.

We adapt secure aggregation [10] to zero-order optimization and investigate ways to improve gradient estimation in BAFFLE. Empirically, BAFFLE is used to train models from scratch on MNIST [49] and CIFAR-10/100 [48], and transfer ImageNet-pretrained models to OfficeHome [89]. Compared to conventional FL, it achieves suboptimal but acceptable performance. These results shed light on the potential of BAFFLE and general backpropagation-free methods in FL.

2 Preliminaries

Finite difference. Gradient-based optimization techniques (either first-order or higher-order) are the most frequently used tools for training deep networks [27]. Nevertheless, recent progress demonstrates promising applications of zero-order optimization for training, particularly when exact derivatives cannot be obtained [23, 69, 55] or backward processes are computationally prohibitive [70, 34]. Zero-order approaches require only multiple forward processes, which may be executed in parallel. Along this line, finite difference stems from the definition of the derivative and generalizes to higher-order and multivariate cases via Taylor expansion. For any differentiable loss $\mathcal{L}(\mathbf{W};\mathbb{D})$ and a small perturbation $\boldsymbol{\delta}\in\mathbb{R}^{n}$, finite difference employs the forward difference scheme

$$\mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D}) - \mathcal{L}(\mathbf{W};\mathbb{D}) = \boldsymbol{\delta}^{\top}\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D}) + o(\|\boldsymbol{\delta}\|_{2}), \qquad (1)$$

where $\boldsymbol{\delta}^{\top}\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D})$ is a scaled directional derivative along $\boldsymbol{\delta}$. Furthermore, we can use the central difference scheme to obtain higher-order residuals as

$$\mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D}) - \mathcal{L}(\mathbf{W}-\boldsymbol{\delta};\mathbb{D}) = 2\,\boldsymbol{\delta}^{\top}\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D}) + o(\|\boldsymbol{\delta}\|_{2}^{2}). \qquad (2)$$

**Algorithm 1** Backpropagation-free federated learning (BAFFLE)

1: Notations: Se denotes operations done on the server; Cl denotes operations done on clients; TEE denotes the TEE module; ⇒ denotes the communication process.
2: Inputs: $C$ clients with local datasets $\{\mathbb{D}_c\}_{c=1}^{C}$ containing $N_c$ input-label pairs, $N = \sum_{c=1}^{C} N_c$; learning rate $\eta$, training iterations $T$, perturbation number $K$, noise scale $\sigma$.
3: Se: initialize model parameters $\mathbf{W} \leftarrow \mathbf{W}_0$;
4: (optional) Se: encode the computing paradigm into TEE as $\mathrm{TEE}\circ\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta};\mathbb{D})$;
5: for $t = 0$ to $T-1$ do
6: &nbsp;&nbsp;Se ⇒ all Cl: download model parameters $\mathbf{W}_t$ and the computing paradigm;
7: &nbsp;&nbsp;Se ⇒ all Cl: download the random seed $s_t$; // 4 Bytes
8: &nbsp;&nbsp;Se: sample $K$ perturbations $\{\boldsymbol{\delta}_k\}_{k=1}^{K}$ from $\mathcal{N}(0,\sigma^2\mathbf{I})$ using the random seed $s_t$;
9: &nbsp;&nbsp;all Cl: negotiate a group of zero-sum noises $\{\epsilon_c\}_{c=1}^{C}$ for secure aggregation;
10: &nbsp;&nbsp;for $c = 1$ to $C$ do
11: &nbsp;&nbsp;&nbsp;&nbsp;Cl: sample $K$ perturbations $\{\boldsymbol{\delta}_k\}_{k=1}^{K}$ from $\mathcal{N}(0,\sigma^2\mathbf{I})$ using the random seed $s_t$;
12: &nbsp;&nbsp;&nbsp;&nbsp;Cl: compute $\mathrm{TEE}\circ\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k;\mathbb{D}_c)$ via forward propagation for each $k$;
13: &nbsp;&nbsp;&nbsp;&nbsp;Cl ⇒ Se: upload the $K$ outputs $\{\mathrm{TEE}\circ\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k;\mathbb{D}_c) + \frac{N}{N_c}\epsilon_c\}_{k=1}^{K}$; // $4\times K$ Bytes
14: &nbsp;&nbsp;end for
15: &nbsp;&nbsp;Se: aggregate $\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k) \leftarrow \sum_{c=1}^{C}\frac{N_c}{N}\left[\mathrm{TEE}\circ\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k;\mathbb{D}_c) + \frac{N}{N_c}\epsilon_c\right]$ for each $k$;
16: &nbsp;&nbsp;Se: compute $\widehat{\nabla_{\mathbf{W}_t}}\mathcal{L}(\mathbf{W}_t) \leftarrow \frac{1}{K}\sum_{k=1}^{K}\frac{\boldsymbol{\delta}_k}{2\sigma^2}\,\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k)$;
17: &nbsp;&nbsp;Se: $\mathbf{W}_{t+1} \leftarrow \mathbf{W}_t - \eta\,\widehat{\nabla_{\mathbf{W}_t}}\mathcal{L}(\mathbf{W}_t)$;
18: end for
19: Return: final model parameters $\mathbf{W}_T$.

Federated learning. Suppose we have $C$ clients, and the $c$-th client's private dataset is defined as $\mathbb{D}_c := \{(\mathbf{X}_i^c, \mathbf{y}_i^c)\}_{i=1}^{N_c}$ with $N_c$ input-label pairs. Let $\mathcal{L}(\mathbf{W};\mathbb{D}_c)$ represent the loss function for the dataset $\mathbb{D}_c$, where $\mathbf{W}\in\mathbb{R}^n$ denotes the server model's global parameters. The training objective of FL is to find the $\mathbf{W}$ that minimizes the total loss

$$\mathcal{L}(\mathbf{W}) := \sum_{c=1}^{C}\frac{N_c}{N}\,\mathcal{L}(\mathbf{W};\mathbb{D}_c), \quad \text{where } N = \sum_{c=1}^{C} N_c. \qquad (3)$$

In the conventional FL framework, clients compute gradients $\{\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D}_c)\}_{c=1}^{C}$ or model updates locally through backpropagation and then upload them to the server. Federated averaging [62] performs global aggregation as $\Delta\mathbf{W} := \sum_{c=1}^{C}\frac{N_c}{N}\Delta\mathbf{W}_c$, where $\Delta\mathbf{W}_c$ is the local update obtained by executing $\mathbf{W}_c \leftarrow \mathbf{W}_c - \eta\,\nabla_{\mathbf{W}_c}\mathcal{L}(\mathbf{W}_c;\mathbb{D}_c)$ multiple times and $\eta$ is the learning rate.

Zeroth-order FL. Similar to our work, DLZO [54] and FedZO [22] independently present zeroth-order optimization methods for FL, in batch-level and epoch-level communication settings respectively. However, they concentrate primarily on basic linear models with softmax regression and do not address deep models; nor do they combine secure aggregation on the server with zero-order optimization. In comparison, BAFFLE supports secure aggregation and can train deep models such as WideResNet from scratch with reasonable results, e.g., 95.17% accuracy on MNIST within 20 communication rounds versus 83.58% for FedZO after 1,000 rounds.

3 Backpropagation-Free Federated Learning

In this section, we introduce zero-order optimization into FL and develop BAFFLE, a backpropagation-free federated learning framework that uses multiple forward processes in place of backpropagation. An initial attempt is to apply finite difference as the gradient estimator: to estimate the full gradients, we would need to perturb each parameter $w\in\mathbf{W}$ once to approximate the partial derivative $\partial\mathcal{L}(\mathbf{W};\mathbb{D})/\partial w$, causing the number of forward computations to grow with $n$ (recall that $\mathbf{W}\in\mathbb{R}^n$) and making it difficult to scale to large models. In light of this, we resort to Stein's identity [82] to obtain an unbiased estimate of the gradients from loss differences computed on various perturbations. As depicted in Figure 1, BAFFLE clients need only download random seeds and the global parameter update, generate perturbations locally, execute multiple forward propagations, and upload loss differences back to the server. Furthermore, we present convergence analyses of BAFFLE, providing guidelines for model design and training acceleration.

3.1 Unbiased Gradient Estimation with Stein's Identity

Previous work on sign-based optimization [66] demonstrates that deep networks can be effectively trained if the majority of gradients have proper signs. Thus, we propose performing forward propagation multiple times on perturbed parameters, in order to obtain a stochastic estimate of the gradients without backpropagation. Specifically, assuming that the loss function $\mathcal{L}(\mathbf{W};\mathbb{D})$ is continuously differentiable w.r.t. $\mathbf{W}$ given any dataset $\mathbb{D}$, which is true (almost everywhere) for deep networks using non-linear activation functions, we define a smoothed loss function as:

$$\mathcal{L}_{\sigma}(\mathbf{W};\mathbb{D}) := \mathbb{E}_{\boldsymbol{\delta}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})}\,\mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D}),$$

(4)

where the perturbation $\boldsymbol{\delta}$ follows a Gaussian distribution with mean $0$ and covariance $\sigma^{2}\mathbf{I}$. Stein [82] proves Stein's identity (proof recapped in Appendix A):

$$\nabla_{\mathbf{W}}\mathcal{L}_{\sigma}(\mathbf{W};\mathbb{D}) = \mathbb{E}_{\boldsymbol{\delta}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})}\left[\frac{\boldsymbol{\delta}}{2\sigma^{2}}\,\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta};\mathbb{D})\right], \qquad (5)$$

where $\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta};\mathbb{D}) := \mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D}) - \mathcal{L}(\mathbf{W}-\boldsymbol{\delta};\mathbb{D})$ is the loss difference. Note that computing a loss difference requires only the two forward passes $\mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D})$ and $\mathcal{L}(\mathbf{W}-\boldsymbol{\delta};\mathbb{D})$, without backpropagation. It is straightforward to show that $\mathcal{L}_{\sigma}(\mathbf{W};\mathbb{D})$ is continuously differentiable for any $\sigma \geq 0$ and that $\nabla_{\mathbf{W}}\mathcal{L}_{\sigma}(\mathbf{W};\mathbb{D})$ converges uniformly as $\sigma \to 0$; hence $\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D}) = \lim_{\sigma\to 0}\nabla_{\mathbf{W}}\mathcal{L}_{\sigma}(\mathbf{W};\mathbb{D})$. Therefore, we can obtain a stochastic Monte Carlo estimate of the gradients by 1) selecting a small value of $\sigma$; 2) randomly sampling $K$ perturbations $\{\boldsymbol{\delta}_k\}_{k=1}^{K}$ from $\mathcal{N}(0,\sigma^{2}\mathbf{I})$; and 3) applying Stein's identity in Eq. (5) to calculate

$$\widehat{\nabla_{\mathbf{W}}}\mathcal{L}(\mathbf{W};\mathbb{D}) := \frac{1}{K}\sum_{k=1}^{K}\left[\frac{\boldsymbol{\delta}_k}{2\sigma^{2}}\,\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta}_k;\mathbb{D})\right]. \qquad (6)$$
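As an illustration, the forward-only estimator of Eq. (6) can be sketched in a few lines of NumPy. The quadratic toy loss and all names below are our own, not the paper's implementation; with a quadratic loss the central difference is exact, so the only error is the Monte Carlo covariance noise.

```python
import numpy as np

def stein_grad_estimate(loss_fn, w, data, K=4000, sigma=1e-3, seed=0):
    """Estimate the gradient via Eq. (6): average (delta_k / 2 sigma^2) * [L(w+d_k) - L(w-d_k)]."""
    rng = np.random.default_rng(seed)  # a shared seed plays the role of s_t
    grad = np.zeros_like(w)
    for _ in range(K):
        delta = rng.normal(0.0, sigma, size=w.shape)
        # two forward passes per perturbation, no backpropagation
        diff = loss_fn(w + delta, data) - loss_fn(w - delta, data)
        grad += delta / (2.0 * sigma**2) * diff
    return grad / K

# toy quadratic loss L(w; D) = mean_i ||w - x_i||^2, whose true gradient is 2 (w - mean(x_i))
def loss_fn(w, data):
    return np.mean(np.sum((w - data) ** 2, axis=1))

w = np.array([1.0, -2.0, 0.5])
data = np.zeros((8, 3))
g_hat = stein_grad_estimate(loss_fn, w, data)
g_true = 2.0 * w
# g_hat approaches g_true as K grows, with O(sqrt(n/K)) Monte Carlo error
```

Note that `loss_fn` is treated as a black box: only its values at perturbed parameters are used, which is what makes the scheme compatible with inference-only hardware.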

3.2 Operating Flow of BAFFLE

Based on the forward-only gradient estimator $\widehat{\nabla_{\mathbf{W}}}\mathcal{L}(\mathbf{W};\mathbb{D})$ derived in Eq. (6), we outline the basic operating flow of our BAFFLE system (Algorithm 1) as follows:

Model initialization. (Lines 3∼4, done by server) The server initializes the model parameters to $\mathbf{W}_0$ and optionally encodes the computing paradigm of loss differences $\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta};\mathbb{D})$ into the TEE module (see Appendix B for more information on TEEs);

Downloading paradigms. (Lines 6∼7, server ⇒ all clients) In round $t$, the server distributes the most recent model parameters $\mathbf{W}_t$ (or the model update $\Delta\mathbf{W}_t$) and the computing paradigm to all $C$ clients. In addition, the server sends a random seed $s_t$, rather than the perturbations themselves, to reduce the communication burden;

Local computation. (Lines 11∼12, done by clients) Each client generates $K$ perturbations $\{\boldsymbol{\delta}_k\}_{k=1}^{K}$ locally from $\mathcal{N}(0,\sigma^2\mathbf{I})$ using the random seed $s_t$, and executes the computing paradigm to obtain loss differences. $K$ is chosen adaptively based on each client's computational capability;
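The seed trick in this step is simple to demonstrate: server (Line 8) and clients (Line 11) regenerate bit-identical perturbations from the same seed, so no perturbation ever crosses the network. A minimal sketch (function and variable names are ours, purely illustrative):

```python
import numpy as np

def gen_perturbations(seed, K, n, sigma):
    """Regenerate the K perturbations {delta_k} deterministically from a shared seed."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, sigma, size=(K, n))

s_t, K, n, sigma = 42, 3, 5, 1e-4     # sigma = 1e-4 as used in the paper's experiments
server_deltas = gen_perturbations(s_t, K, n, sigma)
client_deltas = gen_perturbations(s_t, K, n, sigma)
# identical arrays on both sides; only the 4-byte seed was communicated
```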

Uploading loss differences. (Line 13, all clients ⇒ server) Each client uploads $K$ noisy outputs $\{\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k;\mathbb{D}_c) + \frac{N}{N_c}\epsilon_c\}_{k=1}^{K}$ to the server, where each output is a floating-point number and the noise $\epsilon_c$ is negotiated by all clients to be zero-sum. The total upload is $4\times K$ Bytes;

Secure aggregation. (Lines 15∼16, done by server) To prevent the server from recovering the exact loss differences and causing privacy leakage [25], we adapt the secure aggregation method [10], originally proposed for conventional FL, to BAFFLE. Specifically, all clients negotiate a group of noises $\{\epsilon_c\}_{c=1}^{C}$ satisfying $\sum_{c=1}^{C}\epsilon_c = 0$. Then we can reorganize our gradient estimator as

$$\widehat{\nabla_{\mathbf{W}_t}}\mathcal{L}(\mathbf{W}_t) = \frac{1}{K}\sum_{c=1}^{C}\frac{N_c}{N}\sum_{k=1}^{K}\left[\frac{\boldsymbol{\delta}_k}{2\sigma^2}\,\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k;\mathbb{D}_c)\right] = \frac{1}{K}\sum_{k=1}^{K}\frac{\boldsymbol{\delta}_k}{2\sigma^2}\,\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k), \qquad (7)$$

$$\text{where}\quad \Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k) = \sum_{c=1}^{C}\frac{N_c}{N}\left[\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k;\mathbb{D}_c) + \frac{N}{N_c}\epsilon_c\right].$$

Since $\{\epsilon_c\}_{c=1}^{C}$ are zero-sum, we have $\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k) = \sum_{c=1}^{C}\frac{N_c}{N}\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k;\mathbb{D}_c)$ and Eq. (7) holds. Therefore, the server can correctly aggregate $\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k)$ while being unable to recover any individual $\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k;\mathbb{D}_c)$, protecting client privacy.
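The cancellation argument above can be checked numerically. The sketch below is our illustration with made-up numbers; generating zero-sum noises by mean subtraction is a stand-in for the actual pairwise-mask negotiation of secure aggregation [10].

```python
import numpy as np

def make_zero_sum_noises(C, scale=100.0, seed=1):
    """Produce C noises summing to zero (simplified stand-in for the negotiation protocol)."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, scale, size=C)
    eps -= eps.mean()  # enforce the zero-sum constraint
    return eps

C = 5
N_c = np.array([10.0, 20.0, 30.0, 20.0, 20.0])   # hypothetical per-client dataset sizes
N = N_c.sum()
local_diffs = np.array([0.12, -0.05, 0.30, 0.07, -0.11])  # Delta L(W_t, delta_k; D_c) per client

eps = make_zero_sum_noises(C)
uploads = local_diffs + (N / N_c) * eps        # masked values the server actually sees
aggregated = np.sum((N_c / N) * uploads)       # server-side weighted aggregation, Eq. (7)
true_value = np.sum((N_c / N) * local_diffs)   # unmasked target
# the (N_c/N)*(N/N_c)*eps_c terms reduce to sum(eps) = 0, so aggregated == true_value
```

Individually, each `uploads[c]` is dominated by the mask and reveals little about `local_diffs[c]`; only the weighted sum is meaningful.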

Remark on communication cost. After obtaining the gradient estimate $\widehat{\nabla_{\mathbf{W}_t}}\mathcal{L}(\mathbf{W}_t)$, the server updates the parameters to $\mathbf{W}_{t+1}$ using techniques such as gradient descent with learning rate $\eta$. Similar to the discussion in McMahan et al. [62], the BAFFLE form presented in Algorithm 1 corresponds to batch-level communication (also named FedSGD), where Lines 11∼12 execute once per round $t$. In batch-level settings, we reduce the upload from $4\times|\mathbf{W}|$ to $4\times K$ Bytes. We can also generalize BAFFLE to an analog of epoch-level communication (also named FedAvg), in which each client updates its local parameters for multiple steps using the gradient estimator $\widehat{\nabla_{\mathbf{W}_t}}\mathcal{L}(\mathbf{W}_t;\mathbb{D}_c)$ derived from $\Delta\mathcal{L}(\mathbf{W}_t,\boldsymbol{\delta}_k;\mathbb{D}_c)$ via Eq. (6), and uploads its model update to the server after several local epochs. In epoch-level settings, the uploaded Bytes are the same as in FedAvg. In experiments, we analyze and report both batch-level and epoch-level settings for BAFFLE.

3.3 Convergence Analyses

Now we analyze the convergence rate of our gradient estimation method. For continuously differentiable loss functions, we have $\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D}) = \lim_{\sigma\to 0}\nabla_{\mathbf{W}}\mathcal{L}_{\sigma}(\mathbf{W};\mathbb{D})$, so we choose a relatively small value for $\sigma$. The convergence guarantee can be derived as follows:

Theorem 3.1

(Proof in Appendix A) Suppose $\sigma$ is a small value and the central difference scheme in Eq. (2) holds. For perturbations $\{\boldsymbol{\delta}_k\}_{k=1}^{K} \overset{\text{iid}}{\sim} \mathcal{N}(0,\sigma^2\mathbf{I})$, the empirical covariance matrix is $\widehat{\boldsymbol{\Sigma}} := \frac{1}{K\sigma^2}\sum_{k=1}^{K}\boldsymbol{\delta}_k\boldsymbol{\delta}_k^{\top}$ and the empirical mean is $\widehat{\boldsymbol{\delta}} := \frac{1}{K}\sum_{k=1}^{K}\boldsymbol{\delta}_k$. Then for any $\mathbf{W}\in\mathbb{R}^n$, the relation between $\widehat{\nabla_{\mathbf{W}}}\mathcal{L}(\mathbf{W};\mathbb{D})$ and the true gradient $\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D})$ can be written as

$$\widehat{\nabla_{\mathbf{W}}}\mathcal{L}(\mathbf{W};\mathbb{D}) = \widehat{\boldsymbol{\Sigma}}\,\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D}) + o(\widehat{\boldsymbol{\delta}}), \qquad (8)$$

where $\mathbb{E}[\widehat{\boldsymbol{\Sigma}}] = \mathbf{I}$ and $\mathbb{E}[\widehat{\boldsymbol{\delta}}] = \mathbf{0}$.

Table 1: The classification accuracy (%) of BAFFLE in iid scenarios ($C = 10$) and epoch-level communication settings with different $K$ values ($K_1/K_2$ annotations mean using $K_1$ for MNIST and $K_2$ for CIFAR-10/100). In this configuration, each client updates its local model based on BAFFLE-estimated gradients and uploads model updates to the server after an entire epoch on the local dataset. The four guidelines work well under epoch-level settings with 20/40 total communication rounds for MNIST and CIFAR-10/100, respectively.

| Settings | LeNet MNIST | LeNet CIFAR-10 | LeNet CIFAR-100 | WRN-10-2 MNIST | WRN-10-2 CIFAR-10 | WRN-10-2 CIFAR-100 |
| --- | --- | --- | --- | --- | --- | --- |
| $K$ = 100/200 | 87.27 | 48.78 | 41.54 | 88.35 | 52.27 | 46.61 |
| $K$ = 200/500 | 89.48 | 51.82 | 45.68 | 89.57 | 55.59 | 51.65 |
| $K$ = 500/1000 | 92.18 | 53.62 | 48.72 | 95.17 | 58.63 | 53.15 |
| Ablation (100/200): w/o EMA | 85.06 | 47.97 | 36.81 | 85.89 | 50.01 | 45.86 |
| Ablation (100/200): ReLU | 81.55 | 44.99 | 39.49 | 79.08 | 49.76 | 44.44 |
| Ablation (100/200): SELU | 86.18 | 48.65 | 37.34 | 76.44 | 43.37 | 41.79 |
| Ablation (100/200): Central | 76.02 | 45.97 | 36.53 | 77.45 | 42.89 | 39.62 |
| BP Baselines | 94.31 | 58.75 | 54.67 | 97.11 | 62.29 | 60.08 |

Table 2: The accuracy (%) of BAFFLE in label non-iid scenarios ($C = 100$) and epoch-level settings with 40 total communication rounds and different $K$ values. We employ a Dirichlet distribution with $\alpha = 0.3$ to ensure each client has a unique label distribution.

| Settings | LeNet CIFAR-10 | LeNet CIFAR-100 | WRN-10-2 CIFAR-10 | WRN-10-2 CIFAR-100 |
| --- | --- | --- | --- | --- |
| $K$ = 200 | 35.21 | 28.12 | 39.53 | 30.44 |
| $K$ = 500 | 38.14 | 30.92 | 41.69 | 32.89 |
| $K$ = 1000 | 39.71 | 33.35 | 43.42 | 34.08 |
| BP Baselines | 44.41 | 38.43 | 51.18 | 40.85 |

Taking the expectation on both sides of Eq. (8), we obtain $\mathbb{E}[\widehat{\nabla_{\mathbf{W}}}\mathcal{L}(\mathbf{W};\mathbb{D})] = \nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D})$, which reduces to Stein's identity. To determine the convergence rate w.r.t. the value of $K$, we have:

Theorem 3.2

(Adamczak et al. [3]) With overwhelming probability, the empirical covariance matrix satisfies the inequality $\|\widehat{\boldsymbol{\Sigma}} - \mathbf{I}\|_2 \leq C_0\sqrt{n/K}$, where $\|\cdot\|_2$ denotes the matrix 2-norm and $C_0$ is an absolute positive constant.

Note that in the finetuning setting, $n$ represents the number of trainable parameters, excluding frozen ones. In conclusion, $\widehat{\nabla_{\mathbf{W}}}\mathcal{L}(\mathbf{W};\mathbb{D})$ provides an unbiased estimate of the true gradients with a convergence rate of $\mathcal{O}(\sqrt{n/K})$. Empirically, $\widehat{\nabla_{\mathbf{W}}}\mathcal{L}(\mathbf{W};\mathbb{D})$ is used as a noisy gradient to train models, whose generalization has been analyzed in previous work [105, 50].
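The concentration behavior underlying Theorems 3.1 and 3.2 is easy to observe numerically; the snippet below is our own small check (toy dimensions, not from the paper):

```python
import numpy as np

# For delta_k ~ N(0, sigma^2 I), the empirical covariance (1/(K sigma^2)) sum_k delta_k delta_k^T
# approaches the identity, with spectral error shrinking roughly like sqrt(n/K).
rng = np.random.default_rng(0)
n, sigma = 4, 1e-2
errs = []
for K in (100, 10_000):
    deltas = rng.normal(0.0, sigma, size=(K, n))
    Sigma_hat = deltas.T @ deltas / (K * sigma**2)
    errs.append(np.linalg.norm(Sigma_hat - np.eye(n), ord=2))
# errs[1] (K = 10,000) is much smaller than errs[0] (K = 100)
```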

4 Experiments

Figure 2: The classification accuracy (%) of BAFFLE in iid scenarios ($C = 10$) and batch-level communication settings with various $K$ values. We treat the models trained with exact gradients on conventional FL systems as the backpropagation (BP) baselines. On different datasets and architectures, BAFFLE achieves performance comparable to the exact-gradient results with a reasonable $K$.

We evaluate BAFFLE on four benchmark datasets: MNIST [49], CIFAR-10/100 [48], and OfficeHome [89]. We consider three models: 1) LeNet [49] with two convolutional layers as the shallow model ($2.7\times 10^4$ parameters); 2) WideResNet [100] with depth 10 and width 2 (WRN-10-2) as the lightweight deep model ($3.0\times 10^5$ parameters); and 3) MobileNet [38] as the deep network that works on ImageNet ($1.3\times 10^7$ parameters).

Participation and communication settings. To perform a comprehensive evaluation of BAFFLE, we simulate three popular FL scenarios [17] with FedLab [101] participations: iid participations, label non-iid participations, and feature non-iid participations. For iid participations, we set the client number $C = 10$ and use a uniform distribution to build the local datasets; we then evaluate BAFFLE on MNIST and CIFAR-10/100 under both batch-level (FedSGD) and epoch-level (FedAvg) communication settings. For label non-iid participations, we set the client number $C = 100$ and use a Dirichlet distribution with $\alpha = 0.3$ to build the clients. For feature non-iid participations, we build clients from the prevailing domain adaptation dataset OfficeHome, which contains 65 categories from 4 different domains, i.e., Art, Clipart, Product, and Real-world; we set the total client number to $C = 40$ and generate 10 clients from each domain. As results, we report Top-1 accuracy for MNIST, CIFAR-10, and OfficeHome, and Top-5 accuracy for CIFAR-100 and OfficeHome.

Hyperparameters. Following the settings in Section 2, we use FedAvg to aggregate gradients from multiple clients and an SGD-based optimizer to update the global parameters. Specifically, we use Adam [46] to train a randomly initialized model with $\beta = (0.9, 0.99)$, learning rate 0.01, and 20/40 epochs for MNIST and CIFAR-10/100, respectively. For OfficeHome, we adopt transfer learning [40] by loading the ImageNet-pretrained model and finetuning the final layers with Adam, with learning rate 0.005 and 40 epochs. In BAFFLE, the perturbation scale $\sigma$ and number $K$ are the most important hyperparameters. As shown in Theorem 3.1, with less noise and more samples, BAFFLE obtains more accurate gradients and thus improved performance. However, there exists a trade-off between accuracy and computational efficiency: an extremely small $\sigma$ causes numerical underflow [27], while a large $K$ increases computational cost. In practice, we set $\sigma = 10^{-4}$ because it is the smallest value that causes no numerical problems in any of our experiments and works well on edge devices with half-precision floating-point numbers. We also evaluate the impact of $K$ across a broad range from 100 to 5000.

4.1 Four Guidelines for BAFFLE

For a general family of continuously differentiable models, we analyzed the convergence rate of BAFFLE in Section 3.3. Since deep networks are usually stacked from multiple linear layers and non-linear activations, this layer linearity can be exploited to improve the accuracy-efficiency trade-off. Combining the linearity property with the conditions unique to edge devices (e.g., small data size and half-precision format), we present four guidelines for model design and training that increase accuracy without introducing extra computation (Appendix C details the linearity analysis):

Using the twice forward difference (twice-FD) scheme rather than the central scheme. Comparing the difference schemes in Eq. (1) and Eq. (2), the central scheme achieves lower residuals per perturbation, but each perturbation costs two forward passes ($\mathbf{W}\pm\boldsymbol{\delta}$) where the forward scheme costs one, so with the same total number of forward passes (e.g., $2K$) twice-FD benefits from twice as many sampled perturbations. Which scheme performs better is therefore a practical question. As shown in Appendix C, twice-FD performs better in all our experiments, in part because layer linearity reduces the benefit of cancelling second-order residuals.
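The residual gap between the two schemes is easy to see on a one-dimensional toy loss (our own illustration; `np.cos` simply stands in for a smooth loss):

```python
import numpy as np

# Per perturbation, the central scheme's error is O(h^2) vs O(h) for the forward
# scheme (cf. Eqs. (1)-(2)); the guideline's empirical question is whether twice as
# many forward-scheme samples outweigh this per-sample advantage.
f = np.cos                    # stand-in smooth loss; true derivative at x0 is -sin(x0)
x0, h = 1.0, 1e-3
g_true = -np.sin(x0)
g_fwd = (f(x0 + h) - f(x0)) / h
g_cen = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
err_fwd = abs(g_fwd - g_true)
err_cen = abs(g_cen - g_true)
# err_cen is orders of magnitude smaller than err_fwd at this step size
```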

Using Hardswish in BAFFLE. ReLU is effective when the intermediate features ($h(\cdot)$ denotes the feature mapping) keep the same sign before and after perturbation, i.e., $h(\mathbf{W}+\boldsymbol{\delta})\cdot h(\mathbf{W}) > 0$. Since ReLU is not differentiable at zero, a value jump occurs when the sign of a feature changes after perturbation, i.e., $h(\mathbf{W}+\boldsymbol{\delta})\cdot h(\mathbf{W}) < 0$. We use Hardswish [37] to overcome this problem, as it is continuously differentiable at zero and easy to implement on edge devices.
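For reference, Hardswish has a simple closed form, sketched below in NumPy (our own implementation of the standard definition from [37]):

```python
import numpy as np

def hardswish(x):
    # Hardswish(x) = x * min(max(x + 3, 0), 6) / 6: continuous everywhere and,
    # unlike ReLU, differentiable at zero, so a small weight perturbation that flips
    # a pre-activation's sign does not introduce a kink in the loss difference there.
    return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0
```

It behaves like the identity for large positive inputs and vanishes for large negative inputs, which also keeps it cheap on fixed-point edge hardware.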

Using exponential moving average (EMA) to reduce oscillations. As shown in Theorem 3.1, a zero-mean white-noise term $\widehat{\boldsymbol{\delta}}$ separates the true gradient from our estimate. To smooth out the oscillations caused by this white noise, we apply the EMA strategy from BYOL [28] to the global parameters, with a smoothing coefficient of 0.995.
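The EMA update itself is one line; the toy loop below (our illustration, not the paper's code) shows how it damps zero-mean oscillations around a fixed point:

```python
import numpy as np

def ema_update(ema_w, new_w, beta=0.995):
    """One EMA step on the (scalar or array) global parameters; beta = 0.995 as in the paper."""
    return beta * ema_w + (1.0 - beta) * new_w

rng = np.random.default_rng(0)
target, ema = 1.0, 0.0
for _ in range(5000):
    noisy = target + rng.normal(0.0, 0.5)  # noisy parameter estimate per round
    ema = ema_update(ema, noisy)
# ema settles near target; the white-noise variance is shrunk by roughly (1 - beta) / (1 + beta)
```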

Using GroupNorm as opposed to BatchNorm. On edge devices, the dataset size is typically small, which leads to inaccurate batch statistics estimation and degrades performance when using BatchNorm. Thus we employ GroupNorm [96] to solve this issue.

Table 3: The Top-1 / Top-5 accuracy (%) of BAFFLE on OfficeHome with feature non-iid participations ($C = 40$) and epoch-level settings with 40 communication rounds. We use the pretrained MobileNet, freeze the backbone, and finetune the FC layers.

| Settings | Art | Clipart | Product | Real World | Avg. |
| --- | --- | --- | --- | --- | --- |
| $K$ = 20 | 44.75 / 69.46 | 52.48 / 73.88 | 66.63 / 89.77 | 63.78 / 87.83 | 56.91 / 80.24 |
| $K$ = 50 | 47.87 / 71.32 | 53.43 / 76.83 | 71.28 / 91.74 | 67.02 / 89.95 | 59.90 / 82.46 |
| $K$ = 100 | 50.32 / 74.42 | 57.43 / 80.73 | 74.19 / 93.02 | 69.53 / 90.43 | 62.87 / 84.65 |
| $K$ = 200 | 51.42 / 76.64 | 60.98 / 86.41 | 76.05 / 94.42 | 71.51 / 93.14 | 64.98 / 87.65 |
| $K$ = 500 | 53.33 / 77.85 | 62.58 / 86.64 | 78.84 / 95.23 | 73.17 / 93.85 | 66.98 / 88.40 |
| BP Baselines | 55.71 / 80.43 | 65.13 / 88.65 | 82.44 / 96.05 | 77.19 / 95.04 | 70.12 / 90.04 |

Figure 3: The ablation study of BAFFLE guidelines, with $K = 100$ on MNIST and $K = 500$ on CIFAR-10. As seen, twice-FD, Hardswish, and EMA all improve performance without extra computation; EMA reduces oscillations by lessening the Gaussian noise.

4.2 Performance on IID Clients

Following the settings in Section 4.1, we evaluate the performance of BAFFLE in the iid scenarios. We reproduce all experiments on BP-based FL systems with the same settings and use them as baselines. We refer to the baseline results as exact gradients and report the training process of BAFFLE in Figure 2. The value of $K$ (e.g., 200 for LeNet and 500 for WRN-10-2) is much smaller than the dimension of the parameter space (e.g., $2.7\times 10^4$ for LeNet and $3\times 10^5$ for WRN-10-2). Since the convergence rate to the exact gradient is $\mathcal{O}(\sqrt{n/K})$, the marginal benefit of increasing $K$ decreases; for instance, increasing $K$ from 2000 to 5000 on CIFAR-10 with WRN-10-2 improves accuracy by barely 2%. Given this convergence rate for Gaussian perturbations, the sampling efficiency might be further improved by choosing an alternative perturbation distribution.

Ablation studies. As depicted in Figure 3, we conduct ablation studies to evaluate the aforementioned guidelines. In general, twice-FD, Hardswish, and EMA all improve accuracy. Comparing the two difference schemes at the same computation cost, twice-FD outperforms the central scheme, confirming that linearity reduces the gain from second-order residuals. As for activation functions, Hardswish is superior to ReLU and SELU because it is differentiable at zero and vanishes in the negative part. Moreover, EMA further improves training by reducing the effect of white noise.

Communication efficiency. Compared to the batch-level communication settings (FedSGD) of a BP-based FL system, BAFFLE requires each client to upload only a $K$-dimensional vector per communication round, while downloading the updated global parameters as usual. Since $K$ is significantly smaller than the parameter count (e.g., 500 versus 0.3 million), BAFFLE cuts the data transfer roughly in half. To reduce communication costs further, prevalent FL systems have each client optimize the model on its local training data and upload the model update after a specified number of local epochs. BAFFLE can also perform epoch-level communication by using $\mathcal{O}(n)$ additional memory to store the perturbation during each forward pass and estimating the local gradient with Eq. (6); each client then optimizes its local model with SGD and uploads its update after several epochs. As shown in Table 1, we evaluate the performance of BAFFLE under one-epoch communication settings. As epoch-level communication is more prevalent in real-world FL, all following experiments are conducted in this setting. In brief, BAFFLE uploads the same number of Bytes as BP-based FL in epoch-level communication, while requiring far fewer communication rounds than FedZO [22], e.g., 20 versus 1,000 on MNIST.
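The batch-level upload arithmetic is worth making concrete. A back-of-the-envelope sketch with float32 values (the parameter count approximates WRN-10-2's $3\times 10^5$; $K = 500$ as in the experiments):

```python
# Per-round upload under batch-level (FedSGD-style) communication.
n_params, K = 300_000, 500
bp_upload = 4 * n_params      # conventional FL uploads the full gradient/update: 1.2 MB
baffle_upload = 4 * K         # BAFFLE uploads K scalar loss differences: 2 KB
# downloads (the full model) are the same for both systems, hence roughly halved total traffic
```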

4.3 Performance on Non-IID Clients

Following Section 4.1, we evaluate the performance of BAFFLE in both label non-iid and feature non-iid scenarios. For label non-iid scenarios, we use the CIFAR-10/100 datasets and employ a Dirichlet distribution to ensure that each client has a unique label distribution; we evaluate BAFFLE with 100 clients and various $K$ values. As seen in Table 2, the model suffers a significant drop in accuracy (e.g., 14% on CIFAR-10 and 16% on CIFAR-100) due to the label non-iid effect. For feature non-iid scenarios, we construct clients from the OfficeHome dataset and use MobileNet as the deep model. As seen in Table 3, we use the transfer learning strategy to train MobileNet, i.e., we load the parameters pretrained on ImageNet, freeze the backbone parameters, and retrain the classification layers. The accuracy decrease is approximately 3%∼5%.

4.4 Computation Efficiency, Memory and Robustness

BAFFLE uses $K$ forward passes in place of each backward pass. Since a backward pass is about as expensive as two normal forward passes [35] or five single-precision accelerated forward passes [67], BAFFLE incurs approximately $K/5$ times the computation expense of BP-based FL. Although this means $K/5 - 1$ times extra computation cost, we show the cost can be reduced with proper training strategies; e.g., the transfer learning in Table 3 reduces $K$ to 20 for MobileNet on the $224\times 224$-sized OfficeHome dataset.

Table 4: The GPU memory cost (MB) of vanilla BP and BAFFLE, respectively. 'min ∼ max' denotes the minimum and maximum dynamic memory for BAFFLE. We also report the ratio (%) of BAFFLE's max memory cost to vanilla BP's.

| Backbone | BP (CIFAR-10/100) | BAFFLE (CIFAR-10/100) | Ratio | BP (OfficeHome/ImageNet) | BAFFLE (OfficeHome/ImageNet) | Ratio |
| --- | --- | --- | --- | --- | --- | --- |
| LeNet | 1680 | 67∼174 | 10.35 | 2527 | 86∼201 | 7.95 |
| WRN-10-2 | 1878 | 75∼196 | 10.43 | 3425 | 94∼251 | 7.32 |
| MobileNet | 2041 | 102∼217 | 10.63 | 5271 | 121∼289 | 5.48 |

Moreover, BAFFLE can greatly reduce memory cost on edge devices, in both static and dynamic memory. Running BP on deep networks requires an auto-differentiation framework, which takes extra static memory (e.g., 200 MB for Caffe [41] and 1 GB for PyTorch [72]) and imposes a considerable burden on edge devices such as IoT sensors. Because it must store intermediate states, BP also requires large amounts of dynamic memory (≥ 5 GB for MobileNet [24]). Since BAFFLE only requires inference, we can slice the computation graph and execute the forward passes layer by layer [44]. As shown in Table 4, BAFFLE reduces the memory cost to 5%∼10% of BP's by executing layer-by-layer inference. By applying kernel-wise computation, the memory cost can be further reduced to approximately 1% (e.g., 64 MB for MobileNet [87]), which suits scenarios with extremely limited storage resources, such as TEEs.

Recent works exploit TEEs to protect models from white-box attacks by preventing model exposure [44]. However, due to the security guarantee, the usable memory of a TEE is usually small [87] (e.g., 90MB on Intel SGX for Skylake CPUs [61]), typically far less than what a backpropagation-based FL system requires. In contrast, BAFFLE can execute inside a TEE thanks to its small memory footprint (more details are in Appendix B). Membership inference attacks and model inversion attacks need to repeatedly perform model inference and obtain confidence values or classification scores [80, 103]. Since BAFFLE only returns stochastic loss differences $\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta};\mathbb{D})$ associated with the random perturbation $\boldsymbol{\delta}$, off-the-shelf inference attacks may not apply to BAFFLE directly (though adaptively designed attack strategies could potentially evade it). Motivated by differential privacy [1], we further design heuristic experiments to study the information leakage from $\Delta\mathcal{L}$ (details in Appendix D). As shown in Figure 5, the distributions of $\Delta\mathcal{L}$ for real data and random noise are hard to distinguish, indicating that it is difficult for attackers to obtain useful information from BAFFLE's outputs.

5 Related Work

Along the line of FL research, many efforts have been devoted to, e.g., dealing with non-IID distributions [104, 76, 21, 92, 53], multi-task learning [81, 60], and preserving the privacy of clients [11, 12, 63, 86, 31, 58, 26, 56]. Below we review the work on efficiency and vulnerability in FL, following the survey of Kairouz et al. [42], as it is most relevant to this paper.

Efficiency in FL. It is widely understood that communication and computational efficiency are primary bottlenecks for deploying FL in practice [94, 74, 18, 6, 91]. Specifically, communication between the server and clients can be expensive and unreliable. The seminal work of Konečnỳ et al. [47] introduces sparsification and quantization to reduce the communication cost, and several theoretical works investigate the optimal trade-off between communication cost and model accuracy [102, 15, 30, 2, 7]. Since practical clients usually have slower upload than download bandwidth, much research focuses on gradient compression [84, 4, 36, 8]. On the other hand, various methods have been proposed to reduce the computational burden of local clients [16, 29, 33], since these clients are usually edge devices with limited resources. Training paradigms exploiting tensor factorization in FL can also achieve promising performance [45, 59].

Vulnerability in FL. The decentralized character of FL helps protect clients' data privacy, but meanwhile, the white-box accessibility of model status leaves room for malicious clients to perform poisoning/backdoor attacks [9, 5, 90, 97, 71], model/gradient inversion attacks [103, 25, 39], and membership inference attacks [80, 68, 57]. To alleviate these vulnerabilities, several defense strategies have been proposed, including selecting reliable clients [43], data augmentation [13], update clipping [83], robust training [52], model perturbation [99], detection methods [78, 20], and methods based on differential privacy [95].

6 Conclusion and Discussion

Backpropagation is the gold standard for training deep networks and is also adopted by traditional FL systems. However, backpropagation is ill-suited to edge devices due to their limited resources and possible lack of reliability. Using zeroth-order optimization techniques, we explore the feasibility of BAFFLE in this paper. We note that there are scenarios in which clients are fully trusted and have sufficient computing and storage resources; in these situations, traditional FL with backpropagation is preferable.

While our preliminary studies on BAFFLE have produced encouraging results, several challenging problems remain: (i) Compared to models trained with exact gradients, models trained with BAFFLE achieve lower accuracy. One reason is that we select small values of $K$ (e.g., 500) relative to the number of model parameters (e.g., $3.0 \times 10^5$); another is that gradient descent is designed for exact gradients, whereas our noisy gradient estimates may require more advanced learning algorithms. (ii) The empirical variance of zeroth-order gradient estimators affects training convergence in BAFFLE, so it is crucial to study variance reduction approaches, such as control variates and non-Gaussian sampling distributions. (iii) Stein's identity applies to loss functions smoothed with Gaussian noise imposed on the model parameters. Intuitively, this smoothness is related to differential privacy in FL, but establishing their relationship requires theoretical derivation.

Acknowledgments

This paper is supported by the National Science Foundation of China (62132017), Zhejiang Provincial Natural Science Foundation of China (LD24F020011).

References [1] ↑ Abadi, M., Chu, A., Goodfellow, I.J., McMahan, H.B., Mironov, I., Talwar, K., Zhang, L.: Deep learning with differential privacy. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, October 24-28, 2016 (2016) [2] ↑ Acharya, J., Canonne, C.L., Tyagi, H.: Inference under information constraints i: Lower bounds from chi-square contraction. IEEE Transactions on Information Theory (2020) [3] ↑ Adamczak, R., Litvak, A.E., Pajor, A., Tomczak-Jaegermann, N.: Sharp bounds on the rate of convergence of the empirical covariance matrix. Comptes Rendus Mathematique 349(3-4), 195–200 (2011) [4] ↑ Alistarh, D., Grubic, D., Li, J., Tomioka, R., Vojnovic, M.: Qsgd: Communication-efficient sgd via gradient quantization and encoding. Advances in Neural Information Processing Systems (NeurIPS) (2017) [5] ↑ Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., Shmatikov, V.: How to backdoor federated learning. In: International Conference on Artificial Intelligence and Statistics (AISTATS) (2020) [6] ↑ Balakrishnan, R., Li, T., Zhou, T., Himayat, N., Smith, V., Bilmes, J.: Diverse client selection for federated learning via submodular maximization. In: International Conference on Learning Representations (ICLR) (2022) [7] ↑ Barnes, L.P., Han, Y., Özgür, A.: Lower bounds for learning distributions under communication constraints via fisher information. Journal of Machine Learning Research (JMLR) (2020) [8] ↑ Basu, D., Data, D., Karakus, C., Diggavi, S.: Qsparse-local-sgd: Distributed sgd with quantization, sparsification and local computations. Advances in Neural Information Processing Systems (NeurIPS) (2019) [9] ↑ Bhagoji, A.N., Chakraborty, S., Mittal, P., Calo, S.: Analyzing federated learning through an adversarial lens. 
In: International Conference on Machine Learning (ICML) (2019) [10] ↑ Bonawitz, K.A., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H.B., Patel, S., Ramage, D., Segal, A., Seth, K.: Practical secure aggregation for privacy-preserving machine learning. In: ACM Conference on Computer and Communications Security (CCS) (2017) [11] ↑ Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H.B., Patel, S., Ramage, D., Segal, A., Seth, K.: Practical secure aggregation for federated learning on user-held data. arXiv preprint arXiv:1611.04482 (2016) [12] ↑ Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H.B., Patel, S., Ramage, D., Segal, A., Seth, K.: Practical secure aggregation for privacy-preserving machine learning. In: ACM SIGSAC Conference on Computer and Communications Security (2017) [13] ↑ Borgnia, E., Geiping, J., Cherepanova, V., Fowl, L., Gupta, A., Ghiasi, A., Huang, F., Goldblum, M., Goldstein, T.: Dp-instahide: Provably defusing poisoning and backdoor attacks with differentially private data augmentations. arXiv preprint arXiv:2103.02079 (2021) [14] ↑ Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., Zhang, Q.: JAX: composable transformations of Python+NumPy programs (2018), http://github.com/google/jax [15] ↑ Braverman, M., Garg, A., Ma, T., Nguyen, H.L., Woodruff, D.P.: Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In: ACM Symposium on Theory of Computing (2016) [16] ↑ Caldas, S., Konečny, J., McMahan, H.B., Talwalkar, A.: Expanding the reach of federated learning by reducing client resource requirements. arXiv preprint arXiv:1812.07210 (2018) [17] ↑ Caldas, S., Wu, P., Li, T., Konečný, J., McMahan, H.B., Smith, V., Talwalkar, A.: LEAF: A benchmark for federated settings. 
CoRR (2018) [18] ↑ Chen, M., Shlezinger, N., Poor, H.V., Eldar, Y.C., Cui, S.: Communication-efficient federated learning. Proceedings of the National Academy of Sciences (2021) [19] ↑ Chen, Y., Luo, F., Li, T., Xiang, T., Liu, Z., Li, J.: A training-integrity privacy-preserving federated learning scheme with trusted execution environment. Information Sciences (2020) [20] ↑ Dong, Y., Yang, X., Deng, Z., Pang, T., Xiao, Z., Su, H., Zhu, J.: Black-box detection of backdoor attacks with limited information and data. In: IEEE International Conference on Computer Vision (ICCV) (2021) [21] ↑ Eichner, H., Koren, T., McMahan, B., Srebro, N., Talwar, K.: Semi-cyclic stochastic gradient descent. In: International Conference on Machine Learning (ICML) (2019) [22] ↑ Fang, W., Yu, Z., Jiang, Y., Shi, Y., Jones, C.N., Zhou, Y.: Communication-efficient stochastic zeroth-order optimization for federated learning. IEEE Trans. Signal Process. (2022) [23] ↑ Flaxman, A.D., Kalai, A.T., McMahan, H.B.: Online convex optimization in the bandit setting: gradient descent without a gradient. arXiv preprint cs/0408007 (2004) [24] ↑ Gao, Y., Liu, Y., Zhang, H., Li, Z., Zhu, Y., Lin, H., Yang, M.: Estimating gpu memory consumption of deep learning models. In: ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (2020) [25] ↑ Geiping, J., Bauermeister, H., Dröge, H., Moeller, M.: Inverting gradients-how easy is it to break privacy in federated learning? In: Advances in Neural Information Processing Systems (NeurIPS) (2020) [26] ↑ Ghazi, B., Kumar, R., Manurangsi, P., Pagh, R.: Private counting from anonymous messages: Near-optimal accuracy with vanishing communication overhead. In: International Conference on Machine Learning (ICML) (2020) [27] ↑ Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. 
MIT Press (2016) [28] ↑ Grill, J., Strub, F., Altché, F., Tallec, C., Richemond, P.H., Buchatskaya, E., Doersch, C., Pires, B.Á., Guo, Z., Azar, M.G., Piot, B., Kavukcuoglu, K., Munos, R., Valko, M.: Bootstrap your own latent - A new approach to self-supervised learning. In: NeurIPS 2020 (2020) [29] ↑ Hamer, J., Mohri, M., Suresh, A.T.: Fedboost: A communication-efficient algorithm for federated learning. In: International Conference on Machine Learning (ICML) (2020) [30] ↑ Han, Y., Özgür, A., Weissman, T.: Geometric lower bounds for distributed parameter estimation under communication constraints. In: Conference On Learning Theory (COLT) (2018) [31] ↑ Hao, M., Li, H., Luo, X., Xu, G., Yang, H., Liu, S.: Efficient and privacy-enhanced federated learning for industrial artificial intelligence. IEEE Transactions on Industrial Informatics (2019) [32] ↑ Hard, A., Rao, K., Mathews, R., Ramaswamy, S., Beaufays, F., Augenstein, S., Eichner, H., Kiddon, C., Ramage, D.: Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604 (2018) [33] ↑ He, C., Annavaram, M., Avestimehr, S.: Group knowledge transfer: Federated learning of large cnns at the edge. In: Advances in Neural Information Processing Systems (NeurIPS) (2020) [34] ↑ He, D., Shi, W., Li, S., Gao, X., Zhang, J., Bian, J., Wang, L., Liu, T.Y.: Learning physics-informed neural networks without stacked back-propagation. arXiv preprint arXiv:2202.09340 (2022) [35] ↑ Hinton, G., Srivastava, N.: Csc321: Introduction to neural networks and machine learning. Lecture 10 (2010) [36] ↑ Horváth, S., Ho, C.Y., Horvath, L., Sahu, A.N., Canini, M., Richtárik, P.: Natural compression for distributed deep learning. arXiv preprint arXiv:1905.10988 (2019) [37] ↑ Howard, A., Pang, R., Adam, H., Le, Q.V., Sandler, M., Chen, B., Wang, W., Chen, L., Tan, M., Chu, G., Vasudevan, V., Zhu, Y.: Searching for mobilenetv3. 
In: IEEE International Conference on Computer Vision (ICCV) (2019) [38] ↑ Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR (2017) [39] ↑ Huang, Y., Gupta, S., Song, Z., Li, K., Arora, S.: Evaluating gradient inversion attacks and defenses in federated learning. In: Advances in Neural Information Processing Systems (NeurIPS) (2021) [40] ↑ Huh, M., Agrawal, P., Efros, A.A.: What makes imagenet good for transfer learning? CoRR (2016) [41] ↑ Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R.B., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. In: Hua, K.A., Rui, Y., Steinmetz, R., Hanjalic, A., Natsev, A., Zhu, W. (eds.) Proceedings of the ACM International Conference on Multimedia (2014) [42] ↑ Kairouz, P., McMahan, H.B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A.N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al.: Advances and open problems in federated learning. Foundations and Trends® in Machine Learning 14(1–2), 1–210 (2021) [43] ↑ Kang, J., Xiong, Z., Niyato, D., Zou, Y., Zhang, Y., Guizani, M.: Reliable federated learning for mobile networks. IEEE Wireless Communications (2020) [44] ↑ Kim, K., Kim, C.H., Rhee, J.J., Yu, X., Chen, H., Tian, D., Lee, B.: Vessels: Efficient and scalable deep learning prediction on trusted processors. In: ACM Symposium on Cloud Computing (2020) [45] ↑ Kim, Y., Sun, J., Yu, H., Jiang, X.: Federated tensor factorization for computational phenotyping. In: ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD) (2017) [46] ↑ Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
In: 3rd International Conference on Learning Representations, ICLR 2015 (2015) [47] ↑ Konečnỳ, J., McMahan, H.B., Yu, F.X., Richtárik, P., Suresh, A.T., Bacon, D.: Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492 (2016) [48] ↑ Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images (2009) [49] ↑ LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of IEEE (1998) [50] ↑ Li, J., Luo, X., Qiao, M.: On generalization error bounds of noisy gradient methods for non-convex learning. In: International Conference on Learning Representations (ICLR) (2020) [51] ↑ Li, L., Fan, Y., Tse, M., Lin, K.Y.: A review of applications in federated learning. Computers & Industrial Engineering (2020) [52] ↑ Li, T., Hu, S., Beirami, A., Smith, V.: Ditto: Fair and robust federated learning through personalization. In: International Conference on Machine Learning (ICML) (2021) [53] ↑ Li, X., Huang, K., Yang, W., Wang, S., Zhang, Z.: On the convergence of fedavg on non-iid data. In: International Conference on Learning Representations (ICLR) (2020) [54] ↑ Li, Z., Chen, L.: Communication-efficient decentralized zeroth-order method on heterogeneous data. In: WCSP 2021 (2021) [55] ↑ Liu, S., Chen, P.Y., Kailkhura, B., Zhang, G., Hero III, A.O., Varshney, P.K.: A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications. IEEE Signal Processing Magazine (2020) [56] ↑ Liu, Y., Suresh, A.T., Yu, F.X.X., Kumar, S., Riley, M.: Learning discrete distributions: user vs item-level privacy. In: Advances in Neural Information Processing Systems (NeurIPS) (2020) [57] ↑ Luo, X., Wu, Y., Xiao, X., Ooi, B.C.: Feature inference attack on model predictions in vertical federated learning. 
In: IEEE International Conference on Data Engineering (ICDE) (2021) [58] ↑ Lyu, L., Yu, H., Ma, X., Sun, L., Zhao, J., Yang, Q., Yu, P.S.: Privacy and robustness in federated learning: Attacks and defenses. IEEE Transactions on Neural Networks and Learning Systems pp. 1–21 (2022) [59] ↑ Ma, J., Zhang, Q., Lou, J., Ho, J.C., Xiong, L., Jiang, X.: Privacy-preserving tensor factorization for collaborative health data analysis. In: ACM International Conference on Information and Knowledge Management (CIKM) (2019) [60] ↑ Marfoq, O., Neglia, G., Bellet, A., Kameni, L., Vidal, R.: Federated multi-task learning under a mixture of distributions. In: Advances in Neural Information Processing Systems (NeurIPS) (2021) [61] ↑ McKeen, F., Alexandrovich, I., Anati, I., Caspi, D., Johnson, S., Leslie-Hurd, R., Rozas, C.: Intel® software guard extensions (intel® sgx) support for dynamic memory management inside an enclave. In: Proceedings of the Hardware and Architectural Support for Security and Privacy (2016) [62] ↑ McMahan, B., Moore, E., Ramage, D., Hampson, S., y Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: International Conference on Artificial Intelligence and Statistics (AISTATS) (2017) [63] ↑ McMahan, B., Ramage, D., Talwar, K., Zhang, L.: Learning differentially private recurrent language models. In: International Conference on Learning Representations (ICLR) (2018) [64] ↑ Mo, F., Haddadi, H., Katevas, K., Marin, E., Perino, D., Kourtellis, N.: Ppfl: privacy-preserving federated learning with trusted execution environments. In: Annual International Conference on Mobile Systems, Applications, and Services (2021) [65] ↑ Mondal, A., More, Y., Rooparaghunath, R.H., Gupta, D.: Flatee: Federated learning across trusted execution environments. arXiv preprint arXiv:2111.06867 (2021) [66] ↑ Moulay, E., Léchappé, V., Plestan, F.: Properties of the sign gradient descent algorithms. Inf. Sci. 
492, 29–39 (2019) [67] ↑ Nakandala, S., Nagrecha, K., Kumar, A., Papakonstantinou, Y.: Incremental and approximate computations for accelerating deep CNN inference. ACM Trans. Database Syst. 45 (2020) [68] ↑ Nasr, M., Shokri, R., Houmansadr, A.: Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In: IEEE symposium on security and privacy (S&P). IEEE (2019) [69] ↑ Nesterov, Y., Spokoiny, V.: Random gradient-free minimization of convex functions. Foundations of Computational Mathematics (2017) [70] ↑ Pang, T., Xu, K., Li, C., Song, Y., Ermon, S., Zhu, J.: Efficient learning of generative models via finite-difference score matching. In: Advances in Neural Information Processing Systems (NeurIPS) (2020) [71] ↑ Pang, T., Yang, X., Dong, Y., Su, H., Zhu, J.: Accumulative poisoning attacks on real-time data. In: Advances in Neural Information Processing Systems (NeurIPS) (2021) [72] ↑ Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Köpf, A., Yang, E.Z., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S.: Pytorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems 32 (2019) [73] ↑ Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.: Pytorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems (NeurIPS) (2019) [74] ↑ Rothchild, D., Panda, A., Ullah, E., Ivkin, N., Stoica, I., Braverman, V., Gonzalez, J., Arora, R.: Fetchsgd: Communication-efficient federated learning with sketching. In: International Conference on Machine Learning (ICML) (2020) [75] ↑ Sabt, M., Achemlal, M., Bouabdallah, A.: Trusted execution environment: what it is, and what it is not. 
In: IEEE Trustcom/BigDataSE/ISPA (2015) [76] ↑ Sattler, F., Wiedemann, S., Müller, K.R., Samek, W.: Robust and communication-efficient federated learning from non-iid data. IEEE Transactions on Neural Networks and Learning Systems (TNNLS) (2019) [77] ↑ Saxe, A.M., McClelland, J.L., Ganguli, S.: Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 (2013) [78] ↑ Seetharaman, S., Malaviya, S., KV, R., Shukla, M., Lodha, S.: Influence based defense against data poisoning attacks in online learning. arXiv preprint arXiv:2104.13230 (2021) [79] ↑ Sharma, H., Park, J., Suda, N., Lai, L., Chau, B., Chandra, V., Esmaeilzadeh, H.: Bit fusion: Bit-level dynamically composable architecture for accelerating deep neural network. In: IEEE Annual International Symposium on Computer Architecture (ISCA) (2018) [80] ↑ Shokri, R., Stronati, M., Song, C., Shmatikov, V.: Membership inference attacks against machine learning models. In: IEEE symposium on security and privacy (S&P). IEEE (2017) [81] ↑ Smith, V., Chiang, C.K., Sanjabi, M., Talwalkar, A.S.: Federated multi-task learning. In: Advances in Neural Information Processing Systems (NeurIPS) (2017) [82] ↑ Stein, C.M.: Estimation of the mean of a multivariate normal distribution. The annals of Statistics (1981) [83] ↑ Sun, Z., Kairouz, P., Suresh, A.T., McMahan, H.B.: Can you really backdoor federated learning? arXiv preprint arXiv:1911.07963 (2019) [84] ↑ Suresh, A.T., Felix, X.Y., Kumar, S., McMahan, H.B.: Distributed mean estimation with limited communication. In: International Conference on Machine Learning (ICML) (2017) [85] ↑ Tramer, F., Boneh, D.: Slalom: Fast, verifiable and private execution of neural networks in trusted hardware. In: International Conference on Learning Representations (ICLR) (2019) [86] ↑ Truex, S., Baracaldo, N., Anwar, A., Steinke, T., Ludwig, H., Zhang, R., Zhou, Y.: A hybrid approach to privacy-preserving federated learning. 
In: ACM workshop on Artificial Intelligence and Security (2019) [87] ↑ Truong, J.B., Gallagher, W., Guo, T., Walls, R.J.: Memory-efficient deep learning inference in trusted execution environments. In: IEEE International Conference on Cloud Engineering (IC2E) (2021) [88] ↑ Umuroglu, Y., Rasnayake, L., Själander, M.: Bismo: A scalable bit-serial matrix multiplication overlay for reconfigurable computing. In: International Conference on Field Programmable Logic and Applications (FPL) (2018) [89] ↑ Venkateswara, H., Eusebio, J., Chakraborty, S., Panchanathan, S.: Deep hashing network for unsupervised domain adaptation. In: CVPR (2017) [90] ↑ Wang, H., Sreenivasan, K., Rajput, S., Vishwakarma, H., Agarwal, S., Sohn, J.y., Lee, K., Papailiopoulos, D.: Attack of the tails: Yes, you really can backdoor federated learning. In: Advances in Neural Information Processing Systems (NeurIPS) (2020) [91] ↑ Wang, H.P., Stich, S., He, Y., Fritz, M.: Progfed: effective, communication, and computation efficient federated learning by progressive training. In: International Conference on Machine Learning (ICML) (2022) [92] ↑ Wang, J., Liu, Q., Liang, H., Joshi, G., Poor, H.V.: Tackling the objective inconsistency problem in heterogeneous federated optimization. In: Advances in Neural Information Processing Systems (NeurIPS) (2020) [93] ↑ Wang, K., Liu, Z., Lin, Y., Lin, J., Han, S.: Haq: Hardware-aware automated quantization with mixed precision. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019) [94] ↑ Wang, L., Wang, W., Li, B.: Cmfl: Mitigating communication overhead for federated learning. In: IEEE International Conference on Distributed Computing Systems (ICDCS) (2019) [95] ↑ Wei, K., Li, J., Ding, M., Ma, C., Yang, H.H., Farokhi, F., Jin, S., Quek, T.Q., Poor, H.V.: Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security (2020) [96] ↑ Wu, Y., He, K.: Group normalization. 
Int. J. Comput. Vis. 128(3), 742–755 (2020) [97] ↑ Xie, C., Huang, K., Chen, P.Y., Li, B.: Dba: Distributed backdoor attacks against federated learning. In: International Conference on Learning Representations (ICLR) (2020) [98] ↑ Yang, T., Andrew, G., Eichner, H., Sun, H., Li, W., Kong, N., Ramage, D., Beaufays, F.: Applied federated learning: Improving google keyboard query suggestions. arXiv preprint arXiv:1812.02903 (2018) [99] ↑ Yang, X., Feng, Y., Fang, W., Shao, J., Tang, X., Xia, S.T., Lu, R.: An accuracy-lossless perturbation method for defending privacy attacks in federated learning. In: ACM Web Conference (WWW) (2022) [100] ↑ Zagoruyko, S., Komodakis, N.: Wide residual networks. In: Proceedings of the British Machine Vision Conference 2016 (2016) [101] ↑ Zeng, D., Liang, S., Hu, X., Wang, H., Xu, Z.: Fedlab: A flexible federated learning framework. arXiv preprint arXiv:2107.11621 (2021) [102] ↑ Zhang, Y., Duchi, J., Jordan, M.I., Wainwright, M.J.: Information-theoretic lower bounds for distributed statistical estimation with communication constraints. Advances in Neural Information Processing Systems (NeurIPS) (2013) [103] ↑ Zhang, Y., Jia, R., Pei, H., Wang, W., Li, B., Song, D.: The secret revealer: Generative model-inversion attacks against deep neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020) [104] ↑ Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., Chandra, V.: Federated learning with non-iid data. arXiv preprint arXiv:1806.00582 (2018) [105] ↑ Zhu, Z., Wu, J., Yu, B., Wu, L., Ma, J.: The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects. In: International Conference on Machine Learning (ICML) (2019)

Appendix 0.A Proofs

0.A.1 Proof of Stein's identity

We recap the proof of Stein's identity following He et al. [34]:

$$
\begin{aligned}
\nabla_{\mathbf{W}} \mathcal{L}_\sigma(\mathbf{W};\mathbb{D})
&= \nabla_{\mathbf{W}}\, \mathbb{E}_{\boldsymbol{\delta}\sim\mathcal{N}(0,\sigma^2\mathbf{I})}\, \mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D}) \\
&= (2\pi\sigma^2)^{-\frac{n}{2}} \cdot \nabla_{\mathbf{W}} \int \mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D}) \cdot \exp\!\left(-\frac{\|\boldsymbol{\delta}\|_2^2}{2\sigma^2}\right) d\boldsymbol{\delta} \\
&= (2\pi\sigma^2)^{-\frac{n}{2}} \cdot \int \mathcal{L}(\widetilde{\mathbf{W}};\mathbb{D}) \cdot \nabla_{\mathbf{W}} \exp\!\left(-\frac{\|\widetilde{\mathbf{W}}-\mathbf{W}\|_2^2}{2\sigma^2}\right) d\widetilde{\mathbf{W}} \\
&= (2\pi\sigma^2)^{-\frac{n}{2}} \cdot \int \mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D}) \cdot \frac{\boldsymbol{\delta}}{\sigma^2} \cdot \exp\!\left(-\frac{\|\boldsymbol{\delta}\|_2^2}{2\sigma^2}\right) d\boldsymbol{\delta} \\
&= \mathbb{E}_{\boldsymbol{\delta}\sim\mathcal{N}(0,\sigma^2\mathbf{I})}\!\left[\frac{\boldsymbol{\delta}}{\sigma^2}\, \mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D})\right].
\end{aligned}
\tag{9}
$$

By symmetry, we change $\boldsymbol{\delta}$ to $-\boldsymbol{\delta}$ and obtain

$$
\nabla_{\mathbf{W}} \mathcal{L}_\sigma(\mathbf{W};\mathbb{D})
= -\,\mathbb{E}_{\boldsymbol{\delta}\sim\mathcal{N}(0,\sigma^2\mathbf{I})}\!\left[\frac{\boldsymbol{\delta}}{\sigma^2}\, \mathcal{L}(\mathbf{W}-\boldsymbol{\delta};\mathbb{D})\right],
\tag{10}
$$

and further we prove that

$$
\begin{aligned}
\nabla_{\mathbf{W}} \mathcal{L}_\sigma(\mathbf{W};\mathbb{D})
&= \frac{1}{2}\,\mathbb{E}_{\boldsymbol{\delta}\sim\mathcal{N}(0,\sigma^2\mathbf{I})}\!\left[\frac{\boldsymbol{\delta}}{\sigma^2}\, \mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D})\right]
- \frac{1}{2}\,\mathbb{E}_{\boldsymbol{\delta}\sim\mathcal{N}(0,\sigma^2\mathbf{I})}\!\left[\frac{\boldsymbol{\delta}}{\sigma^2}\, \mathcal{L}(\mathbf{W}-\boldsymbol{\delta};\mathbb{D})\right] \\
&= \mathbb{E}_{\boldsymbol{\delta}\sim\mathcal{N}(0,\sigma^2\mathbf{I})}\!\left[\frac{\boldsymbol{\delta}}{2\sigma^2}\, \Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta};\mathbb{D})\right],
\end{aligned}
$$

where $\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta};\mathbb{D}) = \mathcal{L}(\mathbf{W}+\boldsymbol{\delta};\mathbb{D}) - \mathcal{L}(\mathbf{W}-\boldsymbol{\delta};\mathbb{D})$.
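As a quick Monte Carlo sanity check of this identity (our own sketch, not part of the paper): for a quadratic loss, Gaussian smoothing leaves the gradient unchanged, so averaging $\frac{\boldsymbol{\delta}}{2\sigma^2}\Delta\mathcal{L}$ over many perturbations should recover the exact gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, K = 10, 0.1, 100_000
A = rng.standard_normal((n, n))
A = A @ A.T / n                                  # SPD matrix defining a quadratic loss
W = rng.standard_normal(n)
# loss(V) = 0.5 * V^T A V; its gradient at W is A @ W, and Gaussian
# smoothing of a quadratic does not change the gradient.
loss = lambda V: 0.5 * np.einsum('...i,ij,...j->...', V, A, V)

D = sigma * rng.standard_normal((K, n))          # K perturbations delta ~ N(0, sigma^2 I)
dL = loss(W + D) - loss(W - D)                   # central differences Delta L
grad_est = (D * (dL / (2 * sigma**2))[:, None]).mean(axis=0)
print(np.max(np.abs(grad_est - A @ W)))          # small Monte Carlo error
```

The residual error shrinks like $O(1/\sqrt{K})$, which previews the role of $K$ in Theorem 1 below.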

0.A.2 Proof of Theorem 1

We rewrite $\widehat{\nabla_{\mathbf{W}}}\mathcal{L}(\mathbf{W};\mathbb{D})$ as follows:

$$
\begin{aligned}
\widehat{\nabla_{\mathbf{W}}}\mathcal{L}(\mathbf{W};\mathbb{D})
&= \frac{1}{K}\sum_{k=1}^{K}\left[\frac{\boldsymbol{\delta}_k}{2\sigma^2}\, \Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta}_k;\mathbb{D})\right] \\
&= \frac{1}{K}\sum_{k=1}^{K}\left[\frac{\boldsymbol{\delta}_k}{2\sigma^2}\left(2\boldsymbol{\delta}_k^\top\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D}) + o(\|\boldsymbol{\delta}_k\|_2^2)\right)\right] \quad \text{(using Eq. (2))} \\
&= \frac{1}{K\sigma^2}\sum_{k=1}^{K}\left[\boldsymbol{\delta}_k\boldsymbol{\delta}_k^\top\right]\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D}) + \frac{1}{K}\sum_{k=1}^{K}\frac{\boldsymbol{\delta}_k}{2\sigma^2}\, o(\|\boldsymbol{\delta}_k\|_2^2) \\
&= \widehat{\boldsymbol{\Sigma}}\,\nabla_{\mathbf{W}}\mathcal{L}(\mathbf{W};\mathbb{D}) + \frac{1}{K}\sum_{k=1}^{K}\frac{\boldsymbol{\delta}_k}{2\sigma^2}\, o(\|\boldsymbol{\delta}_k\|_2^2).
\end{aligned}
\tag{11}
$$

Then we prove $\frac{1}{K}\sum_{k=1}^{K}\frac{\boldsymbol{\delta}_k}{2\sigma^2}\, o(\|\boldsymbol{\delta}_k\|_2^2) = o(\widehat{\boldsymbol{\delta}})$. Suppose $\boldsymbol{\delta}_k = (\delta_{k,1},\cdots,\delta_{k,n})$; then $\frac{\|\boldsymbol{\delta}_k\|_2^2}{\sigma^2} = \sum_{i=1}^{n}\left(\frac{\delta_{k,i}}{\sigma}\right)^2$. Since $\frac{\delta_{k,i}}{\sigma}\sim\mathcal{N}(0,1)$ for all $i$, we have $\frac{\|\boldsymbol{\delta}_k\|_2^2}{\sigma^2}\sim\chi^2(n)$ and $\mathbb{E}\!\left(\frac{\|\boldsymbol{\delta}_k\|_2^2}{\sigma^2}\right) = n$. So with high probability, $\frac{o(\|\boldsymbol{\delta}_k\|_2^2)}{\sigma^2} = o(n)$. Substituting this into Eq. (11), we have with high probability,

$$
\frac{1}{K}\sum_{k=1}^{K}\frac{\boldsymbol{\delta}_k}{2\sigma^2}\, o(\|\boldsymbol{\delta}_k\|_2^2)
= \widehat{\boldsymbol{\delta}}\cdot o(n)
= o(\widehat{\boldsymbol{\delta}}),
$$

where we regard $n$ as a constant for a given model architecture. Finally, we prove $\mathbb{E}[\widehat{\boldsymbol{\delta}}] = \mathbf{0}$ and $\mathbb{E}[\widehat{\boldsymbol{\Sigma}}] = \mathbf{I}$. It is trivial that $\mathbb{E}[\widehat{\boldsymbol{\delta}}] = \mathbf{0}$ since $\widehat{\boldsymbol{\delta}}\sim\mathcal{N}\!\left(0,\frac{\sigma^2}{K}\mathbf{I}\right)$. For $\mathbb{E}[\widehat{\boldsymbol{\Sigma}}] = \mathbf{I}$, we examine each entry:

$$
\widehat{\boldsymbol{\Sigma}}_{[ij]}
= \frac{1}{K\sigma^2}\sum_{k=1}^{K}\delta_{k[i]}\,\delta_{k[j]}
= \frac{1}{K}\sum_{k=1}^{K}\frac{\delta_{k[i]}}{\sigma}\,\frac{\delta_{k[j]}}{\sigma},
\tag{12}
$$

where the subscripts $[ij]$ and $[i]$ denote the usual indexing of matrices and vectors. Specifically, for diagonal entries (i.e., $i = j$), $K\cdot\widehat{\boldsymbol{\Sigma}}_{[ii]} = \sum_{k=1}^{K}\left(\frac{\delta_{k[i]}}{\sigma}\right)^2$ is distributed as $\chi^2(K)$, which means $\mathbb{E}[\widehat{\boldsymbol{\Sigma}}_{[ii]}] = 1 = \mathbf{I}_{[ii]}$ and $\mathrm{Var}[\widehat{\boldsymbol{\Sigma}}_{[ii]}] = \frac{2}{K}$; for off-diagonal entries (i.e., $i \neq j$), we have $\mathbb{E}[\widehat{\boldsymbol{\Sigma}}_{[ij]}] = \frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\!\left[\frac{\delta_{k[i]}}{\sigma}\,\frac{\delta_{k[j]}}{\sigma}\right] = \frac{1}{K}\sum_{k=1}^{K}\frac{\mathbb{E}[\delta_{k[i]}]}{\sigma}\,\frac{\mathbb{E}[\delta_{k[j]}]}{\sigma} = 0 = \mathbf{I}_{[ij]}$, due to the independence between different dimensions in $\boldsymbol{\delta}_k$. ∎
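A small numerical check (ours, not from the paper) that the two moments used above behave as claimed, i.e., $\mathbb{E}[\widehat{\boldsymbol{\delta}}]=\mathbf{0}$ and $\mathbb{E}[\widehat{\boldsymbol{\Sigma}}]=\mathbf{I}$ with fluctuations of order $1/\sqrt{K}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, sigma = 5, 100_000, 0.01
D = sigma * rng.standard_normal((K, n))        # K perturbations delta_k ~ N(0, sigma^2 I)

Sigma_hat = (D.T @ D) / (K * sigma**2)         # (1/(K sigma^2)) sum_k delta_k delta_k^T
delta_hat = D.mean(axis=0)                     # (1/K) sum_k delta_k

print(np.max(np.abs(Sigma_hat - np.eye(n))))   # ~0: diagonal fluctuates as sqrt(2/K)
print(np.max(np.abs(delta_hat)))               # ~0: entries fluctuate as sigma/sqrt(K)
```

This also illustrates why small $K$ hurts accuracy: $\widehat{\boldsymbol{\Sigma}}$ only approaches $\mathbf{I}$ at rate $O(1/\sqrt{K})$, so the gradient estimate is preconditioned by a noisy covariance when $K$ is small relative to $n$.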

Figure 4: A sketch map of running BAFFLE in one trusted execution environment. The pipeline contains three steps: (1) load the data and model into the secure storage; (2) load the code of BAFFLE into the root of trust; (3) run the BAFFLE program in a separation kernel.

Appendix 0.B Trusted execution environment

A trusted execution environment (TEE) [75] is regarded as the ultimate solution for defending against all white-box attacks by preventing any model exposure. A TEE protects both data and model security with three components: physically secure storage to ensure the confidentiality, integrity, and tamper-resistance of stored data; a root of trust to load trusted code; and a separation kernel to execute code in an isolated environment, as illustrated in Figure 4. Using a TEE, the FL system is able to train deep models without revealing model specifics. However, due to the security guarantee, the usable memory of a TEE is typically small [87] (e.g., 90MB on Intel SGX for Skylake CPUs [61]), considerably less than what deep models require for backpropagation (e.g., $\geq$ 5GB for VGG-16 [24]).

Figure 5: The robustness of BAFFLE to inference attacks. For real data, we randomly sample input-label pairs from the validation dataset; for random noise, we generate input-label pairs from a standard normal distribution. We sample 500 perturbations $\boldsymbol{\delta}$ from $\mathcal{N}(0,\sigma^2\mathbf{I})$, collect the values of $\Delta\mathcal{L}(\mathbf{W},\boldsymbol{\delta};\mathbb{D})$ for real data and random noise separately, and compare their distributions.

Appendix 0.C Convergence analyses of deep linear networks in BAFFLE

In Section 3, we analyzed the convergence of BAFFLE using a general technique applicable to any continuously differentiable model with loss function $\mathcal{L}(\mathbf{W};\mathbb{D})$. Since deep networks, the most prevalent models in FL, exhibit strong linearity, it is simpler to investigate the convergence of deep linear networks [77].

Consider a two-layer deep linear network in a classification task with $L$ categories. We denote the model parameters as $\{\mathbf{W}_1, \mathbf{W}_2\}$, where the first layer is $\mathbf{W}_1 \in \mathbb{R}^{n\times m}$ and the second layer $\mathbf{W}_2 \in \mathbb{R}^{L\times n}$ consists of $L$ row vectors $\{\mathbf{w}_2^l\}_{l=1}^{L}$ related to the $L$ categories, with $\mathbf{w}_2^l \in \mathbb{R}^{1\times n}$. For input data $\mathbf{X} \in \mathbb{R}^{m\times 1}$ with label $y$, we train the deep linear network by maximizing the classification score on the $y$-th class. Since there is no non-linear activation in deep linear networks, the forward inference can be represented as $h = \mathbf{w}_2^y \mathbf{W}_1 \mathbf{X}$, and the loss is $-h$. It is easy to show that $\frac{\partial h}{\partial \mathbf{w}_2^y} = (\mathbf{W}_1\mathbf{X})^\top$ and $\frac{\partial h}{\partial \mathbf{W}_1} = (\mathbf{X}\mathbf{w}_2^y)^\top$. We sample $\boldsymbol{\delta}_1 \in \mathbb{R}^{n\times m}$ and $\boldsymbol{\delta}_2 \in \mathbb{R}^{1\times n}$ from the noise generator $\mathcal{N}(0,\sigma^2\mathbf{I})$. Letting $h(\boldsymbol{\delta}_1,\boldsymbol{\delta}_2) := (\mathbf{w}_2^y+\boldsymbol{\delta}_2)(\mathbf{W}_1+\boldsymbol{\delta}_1)\mathbf{X}$, we discover that the BAFFLE estimation in Eq. (6) follows the same pattern for both the forward scheme (2) and the central scheme (3):

$$
\begin{aligned}
\Delta_{\text{for}}\, h(\boldsymbol{\delta}_1,\boldsymbol{\delta}_2) &:= h(\boldsymbol{\delta}_1,\boldsymbol{\delta}_2) - h(\mathbf{0},\mathbf{0}); \\
\Delta_{\text{ctr}}\, h(\boldsymbol{\delta}_1,\boldsymbol{\delta}_2) &:= h(\boldsymbol{\delta}_1,\boldsymbol{\delta}_2) - h(-\boldsymbol{\delta}_1,-\boldsymbol{\delta}_2); \\
\frac{\Delta_{\text{for}}\, h(\boldsymbol{\delta}_1,\boldsymbol{\delta}_2)}{\sigma^2}
&= \frac{\Delta_{\text{ctr}}\, h(\boldsymbol{\delta}_1,\boldsymbol{\delta}_2)}{2\sigma^2}
= \frac{1}{\sigma^2}\left(\mathbf{w}_2^y\boldsymbol{\delta}_1\mathbf{X} + \boldsymbol{\delta}_2\mathbf{W}_1\mathbf{X}\right).
\end{aligned}
\tag{13}
$$

This equivalent form in deep linear networks shows that the residual advantage of the central scheme is diminished by linearity; hence the two finite difference schemes above perform the same on deep linear networks. We refer to this characteristic as FD scheme independence. We also find a property of $\sigma$ independence: the choice of $\sigma$ does not affect the results of finite difference, because $\frac{\boldsymbol{\delta}_1}{\sigma}$ and $\frac{\boldsymbol{\delta}_2}{\sigma}$ follow the standard normal distribution.
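The central-scheme part of Eq. (13) can be checked numerically; below is a toy sketch (the dimensions and variable names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, sigma = 6, 4, 0.05
W1 = rng.standard_normal((n, m))        # first layer
w2 = rng.standard_normal((1, n))        # row of W2 for the true class y
X = rng.standard_normal((m, 1))         # input

# h(d1, d2) = (w2 + d2)(W1 + d1) X, as in the derivation above
h = lambda d1, d2: float((w2 + d2) @ (W1 + d1) @ X)

d1 = sigma * rng.standard_normal((n, m))
d2 = sigma * rng.standard_normal((1, n))

# Central difference: the bilinear cross terms d2 d1 X cancel exactly,
# leaving only the linearized form from Eq. (13).
ctr = (h(d1, d2) - h(-d1, -d2)) / (2 * sigma**2)
lin = float(w2 @ d1 @ X + d2 @ W1 @ X) / sigma**2
print(abs(ctr - lin))  # ~0 up to floating-point error
```

For the forward scheme the same linear terms dominate and the remaining cross term is second-order in the perturbations, which is the "linearity reduces the residual" effect discussed above.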

Based on the findings from Eq. (13), we propose the following guideline, which improves accuracy under the same computation cost: use the twice forward difference (twice-FD) scheme rather than the central scheme. Comparing the forward scheme (2) with the central scheme (3), the central scheme produces smaller residuals than the forward scheme, but only by executing twice as many forward inferences, i.e., evaluating both $\mathbf{W} + \boldsymbol{\delta}$ and $\mathbf{W} - \boldsymbol{\delta}$. Under the same number of forward inferences (e.g., $2K$), a practical question is which scheme performs better. We find that the forward scheme performs better in all our experiments, in part because linearity reduces the benefit of removing second-order residuals, as demonstrated by Eq. (13).
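The intuition behind the guideline can be illustrated with a toy zeroth-order gradient estimator. The sketch below is a hypothetical NumPy setup, not the paper's code: the linear loss, dimensions, and sample counts are our own choices. It gives both schemes the same budget of $2K$ forward evaluations, so twice-FD averages twice as many perturbations as the central scheme, and compares their mean squared errors against the true gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, sigma, trials = 50, 64, 0.1, 200
a = rng.standard_normal(d)       # true gradient of the toy loss f(w) = a·w
f = lambda w: a @ w
w0 = rng.standard_normal(d)      # current parameters

def mse(scheme):
    errs = []
    for _ in range(trials):
        if scheme == "twice_fd":     # 2K forward differences (~2K evals)
            D = sigma * rng.standard_normal((2 * K, d))
            g = np.mean([((f(w0 + dl) - f(w0)) / sigma**2) * dl
                         for dl in D], axis=0)
        else:                        # K central differences (2K evals)
            D = sigma * rng.standard_normal((K, d))
            g = np.mean([((f(w0 + dl) - f(w0 - dl)) / (2 * sigma**2)) * dl
                         for dl in D], axis=0)
        errs.append(np.sum((g - a) ** 2))
    return np.mean(errs)

mse_fd, mse_ctr = mse("twice_fd"), mse("central")
print(mse_fd / mse_ctr)
```

For a linear loss the two per-sample estimators coincide, so twice-FD's doubled sample count roughly halves the estimation error at equal inference cost; for strongly non-linear losses the central scheme's smaller residual could change this trade-off.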

Appendix 0.D Robustness to inference attacks

To explore the information leakage from the outputs $\Delta\mathcal{L}$, we design heuristic experiments. Common attacks such as membership inference and model inversion cannot directly target BAFFLE, since they must repeatedly perform model inference and obtain confidence values or classification scores. To analyze the possibility of information leakage, we adopt the concept of differential privacy [1] and compare BAFFLE's outputs on private data with those on random noise. If private data and random noise cannot be discriminated from BAFFLE's outputs alone, we can assert that the outputs carry no private information. In detail, we use the validation dataset as the private data and generate random input pairs $(\tilde{\mathbf{X}}, \tilde{y})$ from Gaussian and Laplacian noise. We then apply BAFFLE to both private data and random noise and compare the distributions of their respective outputs $\Delta\mathcal{L}$. As shown in Figure 5, it is difficult to distinguish BAFFLE's outputs on private data from those on random noise, indicating that attackers can hardly acquire anything more meaningful than random noise from BAFFLE's outputs.
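A minimal sketch of this style of comparison is below. It is a toy stand-in, not the paper's experiment: the quadratic loss, the uniform "private" inputs, and the sample sizes are our assumptions. It computes $\Delta\mathcal{L}$ for perturbed weights on both input types and measures a hand-rolled two-sample Kolmogorov-Smirnov statistic between the two output distributions:

```python
import numpy as np

rng = np.random.default_rng(2)
d, K = 32, 2000
W = rng.standard_normal(d)                  # toy "model" parameters
loss = lambda w, x: 0.5 * (w @ x) ** 2      # hypothetical per-sample loss

def delta_losses(inputs, sigma=0.05):
    # ΔL values a BAFFLE client would return, one per perturbation δ
    out = [loss(W + sigma * rng.standard_normal(d), x) - loss(W, x)
           for x in inputs]
    return np.sort(out)

# unit-variance uniform inputs stand in for "private" data;
# standard Gaussian inputs stand in for random-noise probes
private = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(K, d))
noise = rng.standard_normal((K, d))
dl_priv, dl_noise = delta_losses(private), delta_losses(noise)

# two-sample Kolmogorov-Smirnov statistic, computed by hand as the
# maximum gap between the two empirical CDFs
grid = np.concatenate([dl_priv, dl_noise])
cdf = lambda s: np.searchsorted(s, grid, side="right") / len(s)
ks = np.max(np.abs(cdf(dl_priv) - cdf(dl_noise)))
print(ks)
```

A small KS statistic means the two $\Delta\mathcal{L}$ distributions are hard to tell apart, mirroring the qualitative comparison in Figure 5; the paper's actual experiment uses real validation data and trained models rather than this toy setup.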
