{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:40:10.809313Z"
},
"title": "Intrinsic Gradient Compression for Scalable and Efficient Federated Learning",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Melas-Kyriazi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Franklyn",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Federated learning is a rapidly growing area of research, holding the promise of privacypreserving distributed training on edge devices. The largest barrier to wider adoption of federated learning is the communication cost of model updates, which is accentuated by the fact that many edge devices are bandwidthconstrained. At the same time, within the machine learning theory community, a separate line of research has emerged around optimizing networks within a subspace of the full space of all parameters. The dimension of the smallest subspace for which these methods still yield strong results is called the intrinsic dimension. In this work, we prove a general correspondence between the notions of intrinsic dimension and gradient compressibility, and we show that a family of low-bandwidth federated learning algorithms, which we call intrinsic gradient compression algorithms, naturally emerges from this correspondence. Finally, we conduct large-scale NLP experiments using transformer models with over 100M parameters (GPT-2 and BERT), and show that our method outperforms the state-of-the-art in gradient compression.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Federated learning is a rapidly growing area of research, holding the promise of privacypreserving distributed training on edge devices. The largest barrier to wider adoption of federated learning is the communication cost of model updates, which is accentuated by the fact that many edge devices are bandwidthconstrained. At the same time, within the machine learning theory community, a separate line of research has emerged around optimizing networks within a subspace of the full space of all parameters. The dimension of the smallest subspace for which these methods still yield strong results is called the intrinsic dimension. In this work, we prove a general correspondence between the notions of intrinsic dimension and gradient compressibility, and we show that a family of low-bandwidth federated learning algorithms, which we call intrinsic gradient compression algorithms, naturally emerges from this correspondence. Finally, we conduct large-scale NLP experiments using transformer models with over 100M parameters (GPT-2 and BERT), and show that our method outperforms the state-of-the-art in gradient compression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Federated learning is a nascent area of study which seeks to perform machine learning in a privacypreserving way. However, federated learning with deep neural networks suffers from a problem with communication bandwidth: it is very costly to send gradient/model updates over a network, especially when communicating with mobile phones and edge devices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To reduce bandwidth for federated learning, it is natural to utilize various forms of compression. Previous works have tried to achieve compression in two ways: (1) by compressing the information communicated in standard gradient descent algorithms (e.g. quantizing gradients (Wen et al., 2017 )) * Equal contribution and (2) by training with non-standard methods that naturally use less bandwidth (e.g. prototypical networks (Tan et al., 2021) ).",
"cite_spans": [
{
"start": 276,
"end": 293,
"text": "(Wen et al., 2017",
"ref_id": "BIBREF30"
},
{
"start": 426,
"end": 444,
"text": "(Tan et al., 2021)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At the same time, in the machine learning theory community, researchers have been working to understand what at first seems like an entirely different question: why do hugely overparametrized models generalize so well? One promising approach to this answering this question has utilized the concept of intrinsic dimension, defined for a given optimization problem as the smallest dimension d for which we can solve the problem when the weights are restricted to a a d-dimensional manifold. To be precise, it is the smallest d for which the standard loss minimization problem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min \u03b8 \u2032 \u2208R d \u2113(f g(\u03b8 \u2032 ) )",
"eq_num": "(1)"
}
],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "has a satisfactory solution, where the image of g is a d-dimensional manifold. If the intrinsic dimension of a problem is low, then even if a model is vastly overparameterized, only a small number of parameters need to be tuned in order to obtain a good solution, which is often enough to imply certain generalization guarantees. We begin this paper by observing that the two problems above are naturally related. If one can find a solution to the problem by only tuning d parameters, as in Equation 1, then a corresponding low bandwidth algorithm can be found by simply running stochastic gradient descent in the reduced parameter space (in this case, R d ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, simply optimizing a subset of a model's parameters is often insufficient for training models (especially when training from scratch, rather than finetuning). Thus, we are inspired to seek a more general characterization of algorithms that use a low amount of bandwidth. In order to do this, we rewrite the optimization problem in Equation (1) in the original parameter space. When g(\u03b8 \u2032 ) = A\u03b8 \u2032 for some matrix A (so the lowdimensional manifold is a low-dimensional sub-space), stochastic gradient descent can be rewritten as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 t+1 = \u03b8 t \u2212 \u03b7AA \u22a4 \u2207 \u03b8 \u2113(f \u03b8 )| \u03b8=\u03b8t .",
"eq_num": "(2)"
}
],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We call this method static intrinsic gradient compression, because our gradients are projected into a static (\"intrinsic\") subspace. Now, Equation 2admits a natural generalization, which allows us to explore more of the parameter space while still preserving a low level of upload bandwidth usage:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 t+1 = \u03b8 t \u2212 \u03b7A t A \u22a4 t \u2207 \u03b8 \u2113(f \u03b8 )| \u03b8=\u03b8t",
"eq_num": "(3)"
}
],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "where A t may vary with time. We call the set of all such algorithms intrinsic gradient compression algorithms, and consider three particular instantiations: static, time-varying, and k-varying, each of which perform in different use cases. Our approach is model-agnostic and highly scalable. In experiments across multiple federated learning benchmarks (language modeling, text classification, and image classification), we vastly outperform prior gradient compression methods, and show strong performance even at very high compression rates (e.g. up to 1000\u00d7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We find a general class of optimization algorithms based on the notion of intrinsic dimension that use low amounts of upload bandwidth, which we denote intrinsic gradient compression algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We specify three such algorithms: static compression, time-varying compression and Kvarying compression, with different levels of upload and download bandwidth for use in various federated settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In a set of experiments, we show that these methods significantly outperform prior approaches to federated learning with gradient compression, obtaining large reductions in bandwidth at the same level of performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2, we describe the preliminaries needed to contextualize our work, namely ideas from intrinsic dimension, federated learning, and gradient compression. In Section 3, we show how the algorithm used by intrinsic dimension naturally generalizes to algorithms which use little upload bandwidth. In Section 4 we consider special instantiations of these algorithms in federated learning settings which attain low upload and download bandwidth, and in Section 5 show that they achieve state of the art results. Finally, Section 6 concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The concept of intrinsic dimension was introduced in the work of (Li et al., 2018) , as a way of evaluating the true difficulty of an optimization problem. While this can usually be done by counting the number of parameters, some optimization problems are easier than others in that solutions may be far more plentiful.. One can write",
"cite_spans": [
{
"start": 65,
"end": 82,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Dimension",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2113(f \u03b8 ) = \u2113(f g(\u03b8 \u2032 ) )",
"eq_num": "(4)"
}
],
"section": "Intrinsic Dimension",
"sec_num": "2.1"
},
{
"text": "where g : R d \u2192 R D and thus we've transformed the problem into an optimization problem over \u03b8 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Dimension",
"sec_num": "2.1"
},
{
"text": "If we can still find good solutions to the original problem when \u03b8_2 is restricted to \u0398_2, then the problem's intrinsic dimension may be low, and thus the problem may be easier than previously expected. Throughout this paper we will always take g(\u03b8') = A\u03b8' + \u03b8_0 for a D \u00d7 d matrix A, and take \u0398_2 = R^d and \u0398_1 = R^D, where D > d and \u03b8_0 is the original value of the parameters. The intrinsic dimension g(\u2113, L) with respect to a task \u2113 and loss threshold L is the smallest integer d for which optimizing Equation (4) on task \u2113 can yield a solution with loss at most L. The intrinsic dimension is not exactly knowable, because we cannot find the best-performing model exactly. However, if training with some optimization algorithm gives us a solution to Equation (4) with loss \u2264 L using d dimensions, we can say with certainty that g(\u2113, L) \u2264 d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Dimension",
"sec_num": "2.1"
},
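{
"text": "To make this subspace restriction concrete, the following minimal sketch (our illustrative example, not code from the paper; the quadratic objective, the dimensions, and the dense random A are assumptions chosen for brevity) optimizes a D-parameter loss while only ever updating d intrinsic weights \u03b8', with \u03b8 = \u03b8_0 + A\u03b8', in the spirit of (Li et al., 2018):\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\nD, d = 1000, 20\n\n# Illustrative strongly convex objective over D parameters.\nM = rng.standard_normal((D, D)) / np.sqrt(D)\nH = M @ M.T + 0.1 * np.eye(D)  # positive-definite Hessian\nb = rng.standard_normal(D)\nloss = lambda theta: 0.5 * theta @ H @ theta - b @ theta\ngrad = lambda theta: H @ theta - b\n\ntheta0 = np.zeros(D)  # initial (e.g. pretrained) weights\nA = rng.standard_normal((D, d)) / np.sqrt(D)  # columns roughly unit norm; a dense A is fine at toy scale\n\n# Optimize only the d intrinsic weights; by the chain rule the gradient w.r.t. theta' is A^T grad(theta).\ntheta_p = np.zeros(d)\nlr = 0.05\nfor _ in range(500):\n    theta = theta0 + A @ theta_p\n    theta_p -= lr * (A.T @ grad(theta))\nprint('loss restricted to the d-dimensional subspace:', loss(theta0 + A @ theta_p))\n\nIf the restricted loss is already acceptable for some small d, that d upper-bounds the intrinsic dimension in the sense defined above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Dimension",
"sec_num": "2.1"
},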
{
"text": "Federated learning is a paradigm built around protecting the privacy of user data. The standard model involves a server and many clients, where the raw data must remain on the client's device but the server learns a model. Generally, this is implemented by only the gradients of the model on the data being sent to the central server, which then runs a standard algorithm. A common example of this is the FedAvg algorithm (McMahan et al., 2017) , where models are trained to near-completion on a each client's data, and the data is then averaged. In what follows, we define an epoch to be a single pass over every client.",
"cite_spans": [
{
"start": 422,
"end": 444,
"text": "(McMahan et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Federated Learning",
"sec_num": "2.2"
},
{
"text": "Sending full gradients in standard uncompressed form uses far more bandwidth than we are afforded in certain settings. For example, in a 1 billion parameter model (hardly particularly large by current standards) one gradient update would take 4 gigabytes of bandwidth uncompressed. Thus, there has been substantial amounts of work in compressing the gradient, like (Albasyoni et al., 2020) , which finds an optimal gradient compression algorithm, albeit one which is computationally infeasible.",
"cite_spans": [
{
"start": 365,
"end": 389,
"text": "(Albasyoni et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient Compression",
"sec_num": "2.3"
},
{
"text": "Related Work: Model Pruning There has been great interest in compressing models by using fewer weights, starting with the work of (Hinton et al., 2015; Han et al., 2015) . One related work is Diff Pruning (Guo et al., 2021) , which constrains the number of weights that can be changed from a pretrained model. In essence, diff pruning attempts to solve an L 0 minimization problem on the weights of the model, and approaches this by means of a relaxation. A number of other works have explored the idea of finetuning by only modifying a subset of a model's parameters. (Jiang et al., 2019) and (Bibikar et al., 2021) utilize sparsity to reduce communication costs during training. (Ravfogel et al., 2021) finetunes only the layer biases of large models. Similarly, (Houlsby et al., 2019) finetunes lowparameter adapters between each layer. Compared to (Ravfogel et al., 2021) our method is far more flexible, allowing any number of parameters to be changed. Compared to (Houlsby et al., 2019) our methods are architecture-independent, and can be applied to any model. Related Work: Federated Learning Federated learning is a machine learning paradigm in which a model is trained by a collection of clients, each with their own private local data. From the introduction of federated learning (McMahan et al., 2017) , it was clear that communication costs represented a significant challenge: sending gradients or weights over a network is costly due to the large size of modern machine learning models. (McMahan et al., 2017) introduced the FedAvg algorithm, which aims to reduce communication costs by sending and averaging weights, rather than gradients. Specifically, clients train their model locally for a given number of epochs, send it to the server, and received an averaged copy of the model weights. However, sending the full set of model weights often remains very costly (especially when clients only have a small amount of local data, such that many rounds of communication are necessary); as a result, FedAvg performs poorly in heavilybandwidth-constrained settings.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Hinton et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 152,
"end": 169,
"text": "Han et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 205,
"end": 223,
"text": "(Guo et al., 2021)",
"ref_id": "BIBREF7"
},
{
"start": 569,
"end": 589,
"text": "(Jiang et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 594,
"end": 616,
"text": "(Bibikar et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 681,
"end": 704,
"text": "(Ravfogel et al., 2021)",
"ref_id": "BIBREF24"
},
{
"start": 765,
"end": 787,
"text": "(Houlsby et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 852,
"end": 875,
"text": "(Ravfogel et al., 2021)",
"ref_id": "BIBREF24"
},
{
"start": 970,
"end": 992,
"text": "(Houlsby et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1291,
"end": 1313,
"text": "(McMahan et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 1502,
"end": 1524,
"text": "(McMahan et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work: Model Pruning and Model Compression",
"sec_num": "2.4"
},
{
"text": "Recently, FetchSGD (Rothchild et al., 2020) aimed to address this issue differently by utilizing the concept of sketching. Rather than transmitting full gradients from the client to the server, they send a sketch of the gradient. This approach performs well, but only yields moderate compression rates. We compare to FetchSGD in Section 5.",
"cite_spans": [
{
"start": 19,
"end": 43,
"text": "(Rothchild et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work: Model Pruning and Model Compression",
"sec_num": "2.4"
},
{
"text": "In this section, we characterize a family of lowbandwidth optimization algorithms based on the notion of intrinsic dimension. We start from the optimization problem induced by intrinsic dimension (Equation 4). If we directly run gradient descent on Equation 4with respect to the intrinsic weights \u03b8 \u2032 , we obtain an equation of the following form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "\u03b8 \u2032 t+1 = \u03b8 \u2032 t \u2212 \u03b7\u2207 \u03b8 \u2032 \u2113(f g(\u03b8 \u2032 ) ) = \u03b8 \u2032 t \u2212 \u03b7\u2207 \u03b8 \u2032 (\u2113(f A\u03b8 )) = \u03b8 \u2032 t \u2212 \u03b7A \u22a4 \u2207 \u03b8 (\u2113(f \u03b8 )) \u22a4 | \u03b8=A\u03b8 \u2032 t +\u03b8 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "Then, left-multiplying both sides by A we obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 t+1 = \u03b8 t \u2212 \u03b7 A A \u22a4 \u2207 \u03b8 (\u2113(f \u03b8 ))| \u03b8=\u03b8t compressed gradient approximate gradient",
"eq_num": "(5)"
}
],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "Note that here, we can interpret A \u22a4 \u2207 \u03b8 (\u2113(f (\u03b8)))| \u03b8=\u03b8t as a compressed gradient with dimension d, and AA \u22a4 \u2207 \u03b8 (\u2113(f (\u03b8)))| \u03b8=\u03b8t as the approximate gradient. This inspires us to consider the more general family of optimization algorithms given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 t+1 = \u03b8 t \u2212 \u03b7A t A \u22a4 t (v t ),",
"eq_num": "(6)"
}
],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "where v t is a D dimensional vector computed from data available at timestep t that plays a similar role to a gradient, but may not be an exact gradient, and the A t are all D \u00d7 d matrices known ahead of time (say, generated with random seeds). One intuitive way of interpreting this algorithm is that \u03b8 t+1 \u2212 \u03b8 t is constrained to lie in a low-dimensional subspace, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "\u03a30 = 0 for t = 1, 2 \u2022 \u2022 \u2022 T do Randomly select W clients c1, . . . cW . loop {In parallel on clients {ci} W i=1 } Download \u03a3t\u22121, calculate current \u03b8t\u22121 = \u03b80 + A(\u03a3t\u22121). Compute stochastic gradient g t i on batch Bi of size \u2113: g t i = 1 \u2113 \u2113 j=1 \u2207 \u03b8 L(\u03b8t\u22121, zj). Sketch g t i to S t i = A \u22a4 g t i and upload it to the aggrega- tor. end loop Aggregate sketches S t = 1 W W i=1 S t i Unsketch: \u2206t = AS t Update: \u03b8t = \u03b8t\u22121 \u2212 \u03b7\u2206t, \u03a3t = \u03a3t\u22121 \u2212 \u03b7S t . end for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "namely that given by the span of A t . This family of algorithms can be made to use only d upload bandwidth, as only the vector A \u22a4 t (v t ) must be uploaded. Furthermore, note that Equation 6has no references to the intrinsic weights \u03b8 \u2032 , meaning that it represents a general optimization algorithm in the original space. Formally, Theorem 3.1. All algorithms of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "\u03b8 t+1 = \u03b8 t \u2212 \u03b7A t A \u22a4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "t (v t ) can be simulated with d upload bandwidth in a standard federated learning setting, where v t is a function that can be calculated by the client at time t combined with all data from the server.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "We call all such algorithms intrinsic gradient compression algorithms. Note that this theorem only bounds the upload bandwidth capacity needed to run gradient descent, and does not bound the download bandwidth. In the particular instantiations we consider, we will demonstrate that one can also bound the download bandwidth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Low-Bandwidth Algorithms",
"sec_num": "3"
},
{
"text": "While Theorem 3.1 shows that any algorithm of the form Equation (6) can be implemented with low levels of upload bandwidth, not every algorithm of the form Equation (6) can be implemented with low levels of download bandwidth as well. Theorem 3.1 gives rise to a family of algorithms we denote intrinsic gradient compression algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Gradient Compression Algorithms",
"sec_num": "4"
},
{
"text": "In this section, we describe three particular intrinsic gradient compression algorithms which use low amounts of both upload and download bandwidth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Gradient Compression Algorithms",
"sec_num": "4"
},
{
"text": "These federated learning algorithms can be decomposed into three main phases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Gradient Compression Algorithms",
"sec_num": "4"
},
{
"text": "\u2022 Reconciliation: The client reconciles its model with the server's copy of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Gradient Compression Algorithms",
"sec_num": "4"
},
{
"text": "\u2022 Compression: The local model calculates, compresses, and sends its local gradient to the server.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Gradient Compression Algorithms",
"sec_num": "4"
},
{
"text": "\u2022 Decompression: The server model updates its own copy of the model using the estimated gradient from the local model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Gradient Compression Algorithms",
"sec_num": "4"
},
{
"text": "In general, reconciliation will be by far the most complex part of each algorithm, and the other steps are essentially shared across algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Gradient Compression Algorithms",
"sec_num": "4"
},
{
"text": "We show how to implement SGD for each variant, and note that this choice of optimization algorithm is quite necessary -other optimization algorithms like SGD with momentum cause the parameters to not move in the low-dimensional subspace, which makes the compression impossible. While one can implement a variant which resets the momentum every epoch, momentum is rarely a useful optimization in federated learning due to the non-i.i.d. nature of the batches) so we do not consider this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Gradient Compression Algorithms",
"sec_num": "4"
},
{
"text": "In this subsection, we seek to implement the static intrinsic gradient compression algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Static Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "\u03b8 t = \u03b8 t\u22121 \u2212 \u03b7AA \u22a4 \u2207 \u03b8 L(\u03b8 t\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Static Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "in a federated learning setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Static Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "In the reconciliation phase, since we know that the parameters \u03b8 c (which denotes the current parameters of the server) will always be equal to \u03b8 0 + A\u03a3 for some \u03a3 \u2208 R d , the server can just send \u03a3 to the client, which will take d download bandwidth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Static Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "For compression, the client compresses the gradient by multiplying by A \u22a4 , and for decompression the server multiplies this by A. The full algorithm is given in Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Static Intrinsic Gradient Compression",
"sec_num": null
},
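{
"text": "As a concrete illustration of this reconciliation/compression/decompression loop, here is a minimal single-process simulation (a sketch of our own, not the authors' implementation; the dense A, the toy per-client quadratic losses, and all hyperparameters are assumptions). Clients only ever download and upload d-dimensional vectors:\n\nimport numpy as np\n\nrng = np.random.default_rng(1)\nD, d, n_clients, T, lr = 200, 10, 4, 100, 0.1\n\nA = rng.standard_normal((D, d)) / np.sqrt(D)  # projection known to server and all clients\ntheta0 = rng.standard_normal(D)               # shared initialization\ntargets = [rng.standard_normal(D) for _ in range(n_clients)]  # stand-in for local data\n\ndef client_step(sigma, target):\n    # Reconciliation: rebuild the current model from the d-dim coefficients sigma.\n    theta = theta0 + A @ sigma\n    g = theta - target            # gradient of the toy loss 0.5*||theta - target||^2\n    return A.T @ g                # Compression: upload only d numbers\n\nsigma = np.zeros(d)               # server state: coefficients of all updates in span(A)\nfor t in range(T):\n    sketches = [client_step(sigma, tgt) for tgt in targets]\n    s = np.mean(sketches, axis=0) # aggregate compressed gradients\n    sigma -= lr * s               # Decompression is implicit: theta_t = theta0 + A @ sigma\n\ntheta = theta0 + A @ sigma\nprint('final average loss:', np.mean([0.5 * np.sum((theta - tgt) ** 2) for tgt in targets]))\n\nThe update sigma -= lr * s mirrors the \u03a3_t = \u03a3_{t\u22121} \u2212 \u03b7 S^t step of Algorithm 1, and the full D-dimensional model \u03b8_t = \u03b8_0 + A\u03a3_t is never sent over the wire.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Static Intrinsic Gradient Compression",
"sec_num": null
},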
{
"text": "In this subsection, we implement the time-varying intrinsic gradient compression algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "\u03b8 t = \u03b8 t\u22121 \u2212 \u03b7A e A \u22a4 e \u2207 \u03b8 L(\u03b8 t\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "in a federated learning setting, where e is the epoch. In this case, we show that our algorithm can be implemented with at most 2d bandwidth used per Intrinsic Gradient Compression Method Upload Download Dimensions Explored Note that we break upload and download bandwidth into separate columns, because download speeds can often be considerably faster than upload speeds and we may thus be willing to tolerate higher values of download bandwidth. A realistic example of the values of the variables above is e.g. d = 10 3 , D = 10 8 , E = 20, K = 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "Static dE dE d Time-Varying dE 2dE dE K-Varying dE 2dEK dEK No Compression DE DE D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "client per timestep, so over E epochs there is 2dE bandwidth used total on downloading. Since this bandwidth is twice that of static subspace compression, but we search E times more directions in the space, this algorithm is particularly useful when we have many epochs. Letting \u03b8 c e be the client parameters at epoch e, note that we have the value of \u03b8 c e\u22121 when performing reconciliation. Now we can write",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "\u03b8 c e \u2212 \u03b8 c e\u22121 = (\u03b8 c e \u2212 \u03b8 final e\u22121 ) + (\u03b8 final e\u22121 \u2212 \u03b8 c e\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "and we can see that (\u03b8 c e \u2212\u03b8 final e\u22121 ) lies in the column space of A e and (\u03b8 final e\u22121 \u2212 \u03b8 c e\u22121 ) lies in the column space of A e\u22121 , which is enough to find the full algorithm, given in Algorithm 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Varying Intrinsic Gradient Compression",
"sec_num": null
},
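{
"text": "A schematic of the reconciliation step implied by this decomposition (our own sketch, not Algorithm 2 itself; the variable names and exact bookkeeping are assumptions): the server tracks update coefficients per epoch, and a client that last synchronized during epoch e\u22121 downloads two d-dimensional coefficient vectors, one in the A_{e\u22121} subspace and one in the A_e subspace, i.e. 2d numbers:\n\nimport numpy as np\n\nrng = np.random.default_rng(2)\nD, d = 100, 5\nA_prev = rng.standard_normal((D, d)) / np.sqrt(D)  # A_{e-1}, regenerable from a shared seed\nA_curr = rng.standard_normal((D, d)) / np.sqrt(D)  # A_e\ntheta0 = np.zeros(D)\n\n# Server-side coefficients (illustrative values).\nsigma_prev_final = rng.standard_normal(d)  # total update in span(A_{e-1}) at the end of epoch e-1\nsigma_curr = rng.standard_normal(d)        # updates accumulated in span(A_e) so far in epoch e\n\n# Client-side state: it last synced during epoch e-1, when the A_{e-1} coefficients were:\nsigma_prev_at_sync = rng.standard_normal(d)\ntheta_client = theta0 + A_prev @ sigma_prev_at_sync\n\n# Reconciliation: download sigma_prev_final and sigma_curr (2d numbers);\n# the client already holds sigma_prev_at_sync from its last synchronization.\ntheta_client = theta_client + A_prev @ (sigma_prev_final - sigma_prev_at_sync) + A_curr @ sigma_curr\n\ntheta_server = theta0 + A_prev @ sigma_prev_final + A_curr @ sigma_curr\nprint('client matches server:', np.allclose(theta_client, theta_server))\n\nThe two correction terms correspond exactly to the two differences in the decomposition above, lying in the column spaces of A_{e\u22121} and A_e respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time-Varying Intrinsic Gradient Compression",
"sec_num": null
},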
{
"text": "In this subsection, we describe how to implement the K-varying intrinsic gradient compression algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "\u03b8 t = \u03b8 t\u22121 \u2212 \u03b7A (i) e A (i)\u22a4 e \u2207 \u03b8 L(\u03b8 t\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "{A (i) e } K i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "is the set of K compression matrices used at epoch e, and i is a randomly chosen integer between 1 and K inclusive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "This method is motivated from the fact that in many cases, the upload speed is much slower than the download speed, so we may only want to project the gradient into part of the subspace currently being explored, as opposed to the complete subspace. This allows each client to explore d directions at a time, but for dK directions to be explored across the entire epoch. As such, the algorithm identical time-varying compression, and is given in Algorithm 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "Choice of Compression Matrix Finally, we we discuss the choice of compression matrix for A. We note that our methods are agnostic to the specific choice of A, and depend only on the existence of efficient subroutines for calculating the matrix-vector products Ax and A \u22a4 y. Nonetheless, the choice of A has significant implications for the resulting accuracy of the algorithms. In order to maintain the most proximity to the original stochastic gradient descent algorithm, we will choose normalized A so that E[AA \u22a4 ] = I D .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "The naive choice is to let A be a D \u00d7 d random dense matrix, but such a choice is impossible due to memory constraints. For example, if we aim to train even a small version of BERT (100M parameters) with an intrinsic dimension of 1000, we would need to store a matrix with 10 11 entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "The approach taken by (Aghajanyan et al., 2021; Li et al., 2018) for large-scale experiments, which we follow, utilizes the Fastfood transform (Le et al., 2013), in which A can be expressed as the",
"cite_spans": [
{
"start": 22,
"end": 47,
"text": "(Aghajanyan et al., 2021;",
"ref_id": "BIBREF0"
},
{
"start": 48,
"end": 64,
"text": "Li et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "D \u00d7 d matrix A i = Unpad D B i H\u03a0 i G i HPad 2 \u2113 where 2 \u2113",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "is the smallest power of two larger than D, H is a standard Hadamard matrix, B i is a random diagonal matrix with independent Rademacher entries (random signs), \u03a0 is a random permutation matrix, G is a random diagonal matrix with independent standard normal entries, Pad 2 \u2113 to be a linear operator which simply pads a d-dimensional vector v with zeroes until it has size 2 \u2113 , and Unpad D is a linear operator which takes the first D elements from a 2 \u2113 -dimensional vector. Since we can quickly compute a matrix-vector product by H with a fast Walsh-Hadamard transform, we can perform a matrix multiplication by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "A i A \u22a4 i in O(\u21132 \u2113 ) = O(D log D) time and O(D) space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
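{
"text": "A rough sketch of such an implicit projection is given below (our illustration, not the authors' released code; the pure-Python Walsh-Hadamard transform, the class interface, and the normalization constant are assumptions, with the exact constants following Le et al. (2013)). The key point is that A is never stored: only diagonal, permutation, and Hadamard factors are.\n\nimport numpy as np\n\ndef fwht(x):\n    # Unnormalized fast Walsh-Hadamard transform; len(x) must be a power of two.\n    x = x.copy()\n    h, n = 1, len(x)\n    while h < n:\n        for i in range(0, n, 2 * h):\n            a = x[i:i + h].copy()\n            b = x[i + h:i + 2 * h].copy()\n            x[i:i + h] = a + b\n            x[i + h:i + 2 * h] = a - b\n        h *= 2\n    return x\n\nclass FastfoodProjection:\n    # Implicit D x d matrix A; matvec/rmatvec cost O(D log D) time and O(D) memory.\n    def __init__(self, D, d, seed=0):\n        rng = np.random.default_rng(seed)\n        self.D, self.d = D, d\n        self.L = 1 << (D - 1).bit_length()             # 2^l, smallest power of two >= D\n        self.B = rng.choice([-1.0, 1.0], size=self.L)  # Rademacher diagonal B\n        self.G = rng.standard_normal(self.L)           # Gaussian diagonal G\n        self.perm = rng.permutation(self.L)            # random permutation Pi\n        self.inv_perm = np.argsort(self.perm)\n        self.scale = 1.0 / np.sqrt(self.L * self.d)    # heuristic normalization (assumption)\n\n    def matvec(self, theta_small):   # A @ theta': d -> D\n        v = np.zeros(self.L)\n        v[:self.d] = theta_small     # Pad\n        v = fwht(v)                  # H\n        v = self.G * v               # G\n        v = v[self.perm]             # Pi\n        v = fwht(v)                  # H\n        v = self.B * v               # B\n        return self.scale * v[:self.D]   # Unpad\n\n    def rmatvec(self, grad_full):    # A^T @ g: D -> d (the compressed gradient)\n        v = np.zeros(self.L)\n        v[:self.D] = grad_full\n        v = self.B * v\n        v = fwht(v)\n        v = v[self.inv_perm]         # Pi^T (inverse permutation)\n        v = self.G * v\n        v = fwht(v)\n        return self.scale * v[:self.d]\n\nproj = FastfoodProjection(D=2**16, d=256)\ng = np.random.default_rng(3).standard_normal(2**16)\ncompressed = proj.rmatvec(g)           # d numbers: what a client would upload\napprox_grad = proj.matvec(compressed)  # D numbers: the approximate gradient A A^T g\n\nSince the factors are generated from a seed, the server and every client can reconstruct the same A, and only the d-dimensional compressed vector ever needs to be uploaded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},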
{
"text": "Performance Comparison We show the theoretical tradeoffs between each of these algorithms in Table 2 : Language modeling perplexity (lower is better) and compression rates (higher is better) for a GPT-2 model (124M parameters) on the PersonaChat dataset. We compare to prior work, including the state-of-the-art in gradient compression (FetchSGD), and we show upload, download, and total compression rates. For our intrinsic gradient compression results, we give static and K-subspace compression for a range of dimensions between 16386 and 4194304. For K-subspace compression we use K = 8. Overall, we match or exceed the performance of prior work with significantly improved compression rates.",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "K-Varying Intrinsic Gradient Compression",
"sec_num": null
},
{
"text": "We evaluate our method across a range of benchmarks to showcase the potential of our three algorithms. These include two natural language processing tasks (language modeling and text classification), as well as a computer vision task (image classification). As with previous works (Rothchild et al., 2020; McMahan et al., 2017) , we simulate the federated learning in order to scale to large numbers of clients (upwards of 10, 000). We simulate on 8 commercial-grade GPUs for the language modeling experiments and 1 GPU for the other experiments. We perform experiments in both non-IID (language modeling, image classification) and IID (text classification) settings, because both scenarios are common in real-world federated learning.",
"cite_spans": [
{
"start": 281,
"end": 305,
"text": "(Rothchild et al., 2020;",
"ref_id": "BIBREF26"
},
{
"start": 306,
"end": 327,
"text": "McMahan et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Image Classification (ResNet-9 on CIFAR-10) First, we consider image classification on the CIFAR-10 dataset, a collection of 50,000 images with resolution 32 \u00d7 32px. We use the same experimental setup as (Rothchild et al., 2020) : we split the data between 10,000 clients in a non-IID fashion, such that each client only has data from a single class. At each step, we sample 100 clients at random, such that each gradient step corresponds to 500 images. We perform 24 rounds of communi-cation between all clients (i.e. 24 training epochs).",
"cite_spans": [
{
"start": 204,
"end": 228,
"text": "(Rothchild et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We use a ResNet-9 architecture with 6,570,880 trainable parameters for our fair comparison to previous work. Note that the model does not have batch normalization, as batch normalization would not make sense in a setting where each client has so few examples. Due to the substantial number of epochs performed here, we experiment with both static and time-varying gradient compression (kvarying compression is better suited to settings involving fewer rounds of communication). We perform experiments across intrinsic dimensions of 4000, 8000, 16000, 32000, 64000, 128000, and 256000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Our results are shown in Figure 1 . Whereas FedAvg and Top-K struggle at even modest compression rates (e.g. 3\u00d7), the intrinsic gradient compression methods deliver strong performance at much larger compression rates. The intrinsic methods outperform the current state-of-the-art gradient compression method, FetchSGD (Rothchild et al., 2020) , by a large margin, and easily scale to high compression rates (e.g. 100\u00d7). Finally, we see that time-varying intrinsic compression generally outperforms static compression for the same communication cost.",
"cite_spans": [
{
"start": 318,
"end": 342,
"text": "(Rothchild et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Text Classification (BERT on SST-2) Next, we consider text classification on the Stanford Senti- ment Treebank-v2 (SST-2) dataset (Socher et al., 2013) , a common sentiment analysis dataset. For this experiment, we consider an IID data split into 50 and 500 clients, respectively. We employ the popular BERT (Devlin et al., 2019) transformer architecture with 109M parameters. The purpose of this experiment is to push the limits of gradient compression; we project the 109M-dimension BERT gradients into as few as 200 dimensions.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Socher et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 308,
"end": 329,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We perform 30 rounds (i.e. 30 epochs) of training for all compressed runs, while we perform 6 for the uncompressed baseline (as it converges more quickly). Federated learning experiments has previously been criticized for being challenging to reproduce; as a result, we perform each run five times over different random seeds. We report the mean, min, max, and standard deviation of the runs in Appendix D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Due to the substantial number of epochs performed here, it is natural to apply static and timevarying intrinsic gradient compression. We use intrinsic dimensions of 200, 400, 800, . . . , 25600.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Our results are given in Figure 2 . First, along similar lines to (Aghajanyan et al., 2021) , we find that it is possible to achieve remarkably high compression ratios for text classification: we achieve close to full performance even when compressing the 109M-dimension parameter vector into an intrinsic space of dimension 16,384. Furthermore, we find that time-varying intrinsic gradient compression consistently outperforms static intrinsic gradient compression at the same compression rate.",
"cite_spans": [
{
"start": 66,
"end": 91,
"text": "(Aghajanyan et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Lastly, we consider language modeling on the Per-sonaChat (Zhang et al., 2018) dataset of dialogues between Amazon Mechanical Turk workers as-signed to act out specific personalities. 1 The dataset has a non-IID split into 17,568 clients in which each client is assigned all data corresponding to given personality; as a result, it is widely used in federated learning simulations. We perform language modeling using the GPT-2 transformer architecture (124M parameters). For fair comparison to previous work, we conduct only two rounds of training across the clients (i.e. two epochs).",
"cite_spans": [
{
"start": 58,
"end": 78,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF33"
},
{
"start": 184,
"end": 185,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling (GPT-2 on PersonaChat)",
"sec_num": null
},
{
"text": "Due to the low number of training rounds, it is natural to apply static and K-varying gradient compression. 2 Specifically, we apply both of these algorithms to train GPT-2 using intrinsic dimensions of 16384, 65536, 262144, 1048576, and 4194304. Our results are shown in Figure 2 . Overall, intrinsic dimension-based gradient compression vastly outperforms a wide range of prior approaches to reducing communication in federated learning. On the low-compression end of the spectrum, we obtain nearly full performance with superior compression rates to FedAvg (McMahan et al., 2017) and the recent FetchSGD (Rothchild et al., 2020) . On the high-compression end of the spectrum, we scale better than previous approaches. For example, we obtain a perplexity of around 20 even with an extremely high compression rate of 1898.",
"cite_spans": [
{
"start": 203,
"end": 246,
"text": "16384, 65536, 262144, 1048576, and 4194304.",
"ref_id": null
},
{
"start": 553,
"end": 582,
"text": "FedAvg (McMahan et al., 2017)",
"ref_id": null
},
{
"start": 607,
"end": 631,
"text": "(Rothchild et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 272,
"end": 280,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Language Modeling (GPT-2 on PersonaChat)",
"sec_num": null
},
{
"text": "Finally, we see that K-varying intrinsic compression performs similarly to (or slightly worse) than static compression at the same level of overall compression. However, if it is more important to conserve upload bandwidth than download bandwidth, then K-varying intrinsic gradient compression significantly outperforms static intrinsic gradient compression (see Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modeling (GPT-2 on PersonaChat)",
"sec_num": null
},
{
"text": "One of the primary motivations of federated learning is the desire for individual clients to be able to retain data privacy while still participating in model training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient Compression Results",
"sec_num": "5.1"
},
{
"text": "However, a number of works have shown that if the client does not have a large amount of data 1 In more detail, the PersonaChat dataset (Zhang et al., 2018) was collected by first giving imaginary personas (defined by a set of 5 sentences) to Amazon Mechanical Turk workers and asking them to take on those personas. Then, the system paired workers and asked them to discuss. Since the personas were imaginary and no personally identifiable information was exchanged (in particular, the workers were explicitly told to not use personally identifiable information) the dataset does not contain personally identifiable information.",
"cite_spans": [
{
"start": 136,
"end": 156,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient Compression Results",
"sec_num": "5.1"
},
{
"text": "2 Time-varying compression does not make sense here, as its benefit is derived from the setting where there are many rounds of communication between the clients. and the client sends back their full local gradient, it is possible to approximately reconstruct their local data from the model. This is a significant problem, because their data would then effectively be visible to the central server and any attackers that intercept their communications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient Compression Results",
"sec_num": "5.1"
},
{
"text": "Here, we show that compressing gradients with our approach can mitigate this problem. Specifically, we check if our compressed gradients can be reconstructed with the procedure proposed by (Zhu et al., 2019) . As in (Zhu et al., 2019) , we use a ResNet-152 model a randomly selected image from ImageNet and run for 24,000 iterations (by which time the method has converged). We reconstruct the image both from the full gradient (the center image) and from a the intrinsically-compressed image (the right image) with intrinsic dimension 65,536.",
"cite_spans": [
{
"start": 189,
"end": 207,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 216,
"end": 234,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gradient Compression Results",
"sec_num": "5.1"
},
{
"text": "As seen in Figure 3 , given the full gradient it is possible to obtain a fairly good reconstruction of the image. By contrast, with our method, the reconstruction is visually much less similar from original image. Of course, our method does not solve the problem entirely; an outline of the dog in the image is still visible because the compressed gradient still contains some information about the local data. To solve the issue entirely, it would be necessary to use a method such as differential privacy.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 19,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gradient Compression Results",
"sec_num": "5.1"
},
{
"text": "Federated learning holds the promise of large-scale model training while simultaneously letting users retain control over their data. In this paper, we preset a set of novel algorithms for scalable and efficient federated learning. These algorithms are particularly helpful for NLP training, where models often have hundreds of millions of parameters. Our experiments finetuning BERT and GPT-2 that our proposed method significantly improves upon the state-of-the-art in gradient compression for federated learning. In future work, we hope to deploy our system in a real-world federated learning setting with a large number of physical devices, rather than solely in simulation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Intrinsic Dimensionality As mentioned in the main paper, the concept of measuring the intrinsic dimensional of loss landscapes was introduced by (Li et al., 2018) . (Li et al., 2018) consider optimizing a D-parameter model in a random ddimensional subspace of the full parameter space. They define the intrinsic dimension of the optimization problem as the minimum dimension d for which a solution to the problem can be found, where a \"solution\" refers attaining a certain percentage of the maximum possible validation accuracy (i.e. the validation accuracy obtained by optimizing in all D dimensions). They use a fixed cut-off of 90% accuracy for their experiments. (Aghajanyan et al., 2021) followed up on this work by considering the setting of finetuning models in natural language processing. They show that the intrinsic dimension of some of these tasks (e.g. text classification on MRPC) is surprisingly low.",
"cite_spans": [
{
"start": 145,
"end": 162,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 165,
"end": 182,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 667,
"end": 692,
"text": "(Aghajanyan et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "A number of works have tried to measure the intrinsic dimension of datasets, rather than objective landscapes. (Levina and Bickel, 2005) introduced a maximum likelihood approach to estimating intrinsic dimensionality based on nearest-neighbors, while (Ceruti et al., 2014) employed angle and norm-based similarity. More recently, () further extended this line of work to use minimal neighborhood information.",
"cite_spans": [
{
"start": 111,
"end": 136,
"text": "(Levina and Bickel, 2005)",
"ref_id": "BIBREF15"
},
{
"start": 251,
"end": 272,
"text": "(Ceruti et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Finally, some works have tried to measure the intrinsic dimensionality of image representations and datasets. (Gong et al., 2019) finds that the representations produced by popular image and face representation learning models (ResNet-50 and SphereFace) have quite low intrinsic dimensionalities (16 and 19, respectively). Along similar lines, (Pope et al., 2021) showed that popular image datasets (MNIST, CIFAR 10, ImageNet) also have low intrinsic dimensionality.",
"cite_spans": [
{
"start": 110,
"end": 129,
"text": "(Gong et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 344,
"end": 363,
"text": "(Pope et al., 2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Federated Learning Federated learning is generally concerned with the distributed training of machine learning models across many devices, each of which holds private data. Many aspects of this federated setup are separate subfields of research, including how to ensure the privacy of client-held data (Xie et al., 2020; Bhagoji et al., 2019) , how to deal with heterogeneous data and networks (Li et al., 2020a,b; , how to reconcile weights/gradients from multiple clients (Li et al., 2020a; Wang et al., 2020; Li et al., 2020c) , how to manage clients in a fault-tolerant manner, deployment on mobile/iot devices (He et al., 2020) , and fairness (Mohri et al., 2019) .",
"cite_spans": [
{
"start": 302,
"end": 320,
"text": "(Xie et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 321,
"end": 342,
"text": "Bhagoji et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 394,
"end": 414,
"text": "(Li et al., 2020a,b;",
"ref_id": null
},
{
"start": 474,
"end": 492,
"text": "(Li et al., 2020a;",
"ref_id": null
},
{
"start": 493,
"end": 511,
"text": "Wang et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 512,
"end": 529,
"text": "Li et al., 2020c)",
"ref_id": "BIBREF19"
},
{
"start": 615,
"end": 632,
"text": "(He et al., 2020)",
"ref_id": null
},
{
"start": 648,
"end": 668,
"text": "(Mohri et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Numerous works focus on making federated training more efficient, with the ultimate goal of reducing communication cost and training time. The classic FedAvg (McMahan et al., 2017) algorithm tries to do this by communicating weights rather than gradients. FedProx (Li et al., 2020a) generalizes and re-parametrizes FedAvg. FedMA (Wang et al., 2020) continues to improve this approach by matching and averaging hidden layers of networks with similar activations at each communication round. FedAwS considers federated averaging in the case where each client has data from only a single class. (Malinovsky et al., 2020) analyzes a generalization of these weightaveraging approaches from a theoretical viewpoint.",
"cite_spans": [
{
"start": 264,
"end": 282,
"text": "(Li et al., 2020a)",
"ref_id": null
},
{
"start": 329,
"end": 348,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 592,
"end": 617,
"text": "(Malinovsky et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Relative to the weight averaging approach, the approach of compressing and sending gradients is relatively understudied. (Albasyoni et al., 2020) describes an approach that is theoretically optimal but not practical for large non-linear models. (Han et al., 2020) proposes adaptive gradient sparsification for federated learning, in which a subset of the full gradient is communicated at each round. FetchSGD (Rothchild et al., 2020) compresses gradients by sketching; it is the current state-of-the-art in gradient compression for federated learning. We describe it in further depth in the main paper.",
"cite_spans": [
{
"start": 121,
"end": 145,
"text": "(Albasyoni et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 245,
"end": 263,
"text": "(Han et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 409,
"end": 433,
"text": "(Rothchild et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Fragment of Algorithm 2 (Time-Varying Intrinsic Gradient Compression), client loop and server update: update \u03a3_last = \u03a3^{current}_e. Compute the stochastic gradient g^t_i on a batch B_i of size \u2113: g^t_i = (1/\u2113) \u2211_{j=1}^{\u2113} \u2207_\u03b8 L(\u03b8^{c_i}_e, z_j). Sketch g^t_i: S^{(e)t}_i = A_e^\u22a4 g^t_i and upload it to the aggregator. Aggregate sketches S^{(e)t} = (1/W) \u2211_{i=1}^{W} S^{(e)t}_i. Unsketch: \u0394^{(e)t} = A_e S^{(e)t}. Update: \u03b8^{current} = \u03b8^{current} \u2212 \u03b7 \u0394^{(e)t}, \u03a3^{current}_e = \u03a3^{current}_e \u2212 \u03b7 S^{(e)t}. At the end of each epoch, let \u03a3^{final}_e = \u03a3^{current}_e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Finally, (Reddi et al., 2021) and (Li et al., 2020c ) accelerate training by bringing adaptive optimizers built for centralized learning into the federated setting.",
"cite_spans": [
{
"start": 9,
"end": 29,
"text": "(Reddi et al., 2021)",
"ref_id": "BIBREF25"
},
{
"start": 34,
"end": 51,
"text": "(Li et al., 2020c",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In the main paper, we included a number of figures demonstrating our performance in comparison to prior work. Here, we include tables with our precise results for clarity and in order to facilitate future comparison with our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Further Experimental Analysis",
"sec_num": null
},
{
"text": "Section 4 shows full results on PersonaChat, complete with upload and download compression. Overall compression is calculated as average compression over both upload and download.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.1 Further PersonaChat Analysis",
"sec_num": null
},
{
"text": "We compare with FedAvg (McMahan et al., 2017), Top-K, and FetchSGD (Rothchild et al., 2020) . FedAvg is the baseline federated learning approach involving sending and averaging weights. Top-K refers to sending the top gradients, sorted by magnitude. FetchSGD compresses the weights with sketching.",
"cite_spans": [
{
"start": 67,
"end": 91,
"text": "(Rothchild et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D.1 Further PersonaChat Analysis",
"sec_num": null
},
{
"text": "Our method significantly outperforms competing approaches across the board. We obtain an accuracy close to that of uncompressed optimization using INSERTx overall compression; FedAvg and Top-K both fail to achieve such strong results, while FetchSGD does so at a significantly lower compression rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.1 Further PersonaChat Analysis",
"sec_num": null
},
{
"text": "Next we compare static and K-varying intrinsic gradient compression. When comparing overall compression rates, static compression is slightly better than K-varying compression. However, Kvarying compression is optimized for low upload bandwidth; it obtains much better upload compression rates than static compression at the same accuracy. For example, K-varying compression with k = 8 and d = 65536 yields perplexity 17.6 at upload compression 1900\u00d7, whereas static compression with d = 262144 yields perplexity 17.4 at upload compression 475\u00d7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.1 Further PersonaChat Analysis",
"sec_num": null
},
{
"text": "In Table 3 , we show full results for the SST-2 dataset with static and time-varying gradient compression for a range of intrinsic dimensions. We include in this experiment an demonstration of the robustness of our method to variation in random seeds; we run each experiment five times using separate random seeds (i.e. different intrinsic subspaces and model initializations). We report standard errors in Table 3 ; variability is quite low.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": null
},
{
"start": 407,
"end": 414,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "D.2 Further SST-2 Analysis",
"sec_num": null
},
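{
"text": "A minimal sketch of the per-dimension aggregation over the five seeds, assuming the reported standard error is the standard error of the mean (the accuracy values below are placeholders, not our actual results):\n\nimport numpy as np\n\naccuracies = np.array([0.912, 0.908, 0.915, 0.910, 0.913])   # placeholder values for five seeds\nmean = accuracies.mean()\nsem = accuracies.std(ddof=1) / np.sqrt(len(accuracies))      # sample std / sqrt(n)\nprint(mean, sem)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.2 Further SST-2 Analysis",
"sec_num": null
},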
{
"text": "We also see that time-varying intrinsic gradient compression outperforms static intrinsic compression, especially for low intrinsic dimensions. For example, time-varying compression at d = 200 outperforms static compression with d = 400, and time-varying compression with d = 400 outperforms static compression with d = 800. Figure 3: Image reconstruction from gradients with and without our intrinsic gradient compression method. On the left, we show the original image. In the center, we show the result of reconstructing the image from a single gradient from a ResNet-152 model (60M parameters), produced using the method of (Zhu et al., 2019) . On the right, we show the result of the same image reconstruction method applied to an gradient compressed by our algorithm using intrinsic dimension 65,536. Table 3 : Accuracy and standard error of a BERT model trained on the Stanford Sentiment Treebank v2 (SST-2) for varying intrinsic dimensions. We calculate the standard error over five trials with different random seeds. We see that for fixed dimension, time-varying intrinsic gradient compression outperforms static intrinsic gradient compression.",
"cite_spans": [
{
"start": 628,
"end": 646,
"text": "(Zhu et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 807,
"end": 814,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "D.2 Further SST-2 Analysis",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Ankur Moitra, Yang Liu, and Demi Guo for helpful discussions. L. M. K. is supported by the Rhodes Trust.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
},
{
"text": "In Algorithm 2 and Algorithm 3 below, we provide the full time-varying and K-varying intrinsic gradient compression algorithms, which were omitted from the main text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Algorithms",
"sec_num": null
},
{
"text": "B.1 Proof of Theorem 3.1First, note that the server knows the value of A t . Then, for any local vector v t , the client can send A \u22a4 t (v t ) to the server, and the server can calculate A t A \u22a4 t , enabling it to continue executing the algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Proofs",
"sec_num": null
},
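{
"text": "A minimal numerical sketch of the communication pattern in this argument (illustrative only; the dimensions and scaling below are hypothetical choices): the client uploads the d-dimensional vector A_t^T v_t instead of the D-dimensional v_t, and the server multiplies by A_t, which yields an unbiased reconstruction of v_t since E[A A^T] = I_D for the scaling used here.\n\nimport numpy as np\n\nD, d = 10_000, 256              # full and intrinsic dimensions (hypothetical)\nrng = np.random.default_rng(0)\nA_t = rng.standard_normal((D, d)) / np.sqrt(d)   # entries scaled so E[A_t A_t^T] = I_D\nv_t = rng.standard_normal(D)    # local vector, e.g. a gradient\n\nupload = A_t.T @ v_t            # client sends d floats instead of D floats\nrecon = A_t @ upload            # server computes A_t A_t^T v_t\nprint(upload.size, v_t.size)    # 256 vs 10000: the bandwidth saving",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Proofs",
"sec_num": null
},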
{
"text": "In the main paper, we described the prior work in federated learning and machine learning theory that was directly relevant to our paper's method.Here, we describe a number of less-related works that could not be included in the main paper due to space constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Additional Related work",
"sec_num": null
},
{
"text": "input: distinct subspaces K, learning rate \u03b7, timesteps T , local batch size \u2113, clients per round W for e = 1, 2, . . . E do Create matrices Ae , . . . Afor k = 1, . . . K, and calculate:).\u03a3 last(k) = \u03a3 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 3 K-Varying Intrinsic Gradient Compression",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Intrinsic dimensionality explains the effectiveness of language model fine-tuning",
"authors": [
{
"first": "Armen",
"middle": [],
"last": "Aghajanyan",
"suffix": ""
},
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021",
"volume": "1",
"issue": "",
"pages": "7319--7328",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.568"
]
},
"num": null,
"urls": [],
"raw_text": "Armen Aghajanyan, Sonal Gupta, and Luke Zettle- moyer. 2021. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In Pro- ceedings of the 59th Annual Meeting of the Asso- ciation for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 7319- 7328. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Optimal gradient compression for distributed and federated learning",
"authors": [
{
"first": "Alyazeed",
"middle": [],
"last": "Albasyoni",
"suffix": ""
},
{
"first": "Mher",
"middle": [],
"last": "Safaryan",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Condat",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Richt\u00e1rik",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.03246"
]
},
"num": null,
"urls": [],
"raw_text": "Alyazeed Albasyoni, Mher Safaryan, Laurent Condat, and Peter Richt\u00e1rik. 2020. Optimal gradient com- pression for distributed and federated learning. arXiv preprint arXiv:2010.03246.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Analyzing federated learning through an adversarial lens",
"authors": [
{
"first": "Arjun Nitin",
"middle": [],
"last": "Bhagoji",
"suffix": ""
},
{
"first": "Supriyo",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Prateek",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "Seraphin",
"middle": [],
"last": "Calo",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "634--643",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mit- tal, and Seraphin Calo. 2019. Analyzing federated learning through an adversarial lens. In International Conference on Machine Learning, pages 634-643. PMLR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Federated dynamic sparse training: Computing less, communicating less, yet learning better",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Bibikar",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Vikalo",
"suffix": ""
},
{
"first": "Zhangyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaohan",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2112.09824"
]
},
"num": null,
"urls": [],
"raw_text": "Sameer Bibikar, Haris Vikalo, Zhangyang Wang, and Xiaohan Chen. 2021. Federated dynamic sparse train- ing: Computing less, communicating less, yet learn- ing better. arXiv preprint arXiv:2112.09824.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Danco: An intrinsic dimensionality estimator exploiting angle and norm concentration",
"authors": [
{
"first": "Claudio",
"middle": [],
"last": "Ceruti",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Bassis",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Rozza",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Lombardi",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Casiraghi",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Campadelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Pattern Recognition",
"volume": "47",
"issue": "8",
"pages": "2569--2581",
"other_ids": {
"DOI": [
"10.1016/j.patcog.2014.02.013"
]
},
"num": null,
"urls": [],
"raw_text": "Claudio Ceruti, Simone Bassis, Alessandro Rozza, Gabriele Lombardi, Elena Casiraghi, and Paola Cam- padelli. 2014. Danco: An intrinsic dimensionality estimator exploiting angle and norm concentration. Pattern Recognition, 47(8):2569-2581.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On the intrinsic dimensionality of image representations",
"authors": [
{
"first": "Sixue",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Vishnu",
"middle": [
"Naresh"
],
"last": "Boddeti",
"suffix": ""
},
{
"first": "Anil",
"middle": [
"K"
],
"last": "Jain",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3987--3996",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sixue Gong, Vishnu Naresh Boddeti, and Anil K Jain. 2019. On the intrinsic dimensionality of image rep- resentations. In Proceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition, pages 3987-3996.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Parameter-efficient transfer learning with diff pruning",
"authors": [
{
"first": "Demi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2021,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Demi Guo, Alexander M Rush, and Yoon Kim. 2021. Parameter-efficient transfer learning with diff prun- ing. In ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adaptive gradient sparsification for efficient federated learning: An online learning approach",
"authors": [
{
"first": "P",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "K",
"middle": [
"K"
],
"last": "Leung",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)",
"volume": "",
"issue": "",
"pages": "300--310",
"other_ids": {
"DOI": [
"10.1109/ICDCS47774.2020.00026"
]
},
"num": null,
"urls": [],
"raw_text": "P. Han, S. Wang, and K. K. Leung. 2020. Adaptive gradient sparsification for efficient federated learning: An online learning approach. In 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), pages 300-310, Los Alamitos, CA, USA. IEEE Computer Society.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding",
"authors": [
{
"first": "Song",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Huizi",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "William",
"middle": [
"J"
],
"last": "Dally",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.00149"
]
},
"num": null,
"urls": [],
"raw_text": "Song Han, Huizi Mao, and William J Dally. 2015. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman cod- ing. arXiv preprint arXiv:1510.00149.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Qiang Yang, Murali Annavaram, and Salman Avestimehr. 2020. Fedml: A research library and benchmark for federated machine learning",
"authors": [
{
"first": "Chaoyang",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Songze",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jinhyun",
"middle": [],
"last": "So",
"suffix": ""
},
{
"first": "Mi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hongyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaoyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Praneeth",
"middle": [],
"last": "Vepakomma",
"suffix": ""
},
{
"first": "Abhishek",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Peilin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Raskar",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Murali",
"middle": [],
"last": "Annavaram",
"suffix": ""
},
{
"first": "Salman",
"middle": [],
"last": "Avestimehr",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.13518"
]
},
"num": null,
"urls": [],
"raw_text": "Chaoyang He, Songze Li, Jinhyun So, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Li Shen, Peilin Zhao, Yan Kang, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram, and Salman Avestimehr. 2020. Fedml: A research library and benchmark for federated machine learning. arXiv preprint arXiv:2007.13518.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.02531"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Parameter-efficient transfer learning for NLP",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Giurgiu",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Jastrzebski",
"suffix": ""
},
{
"first": "Bruna",
"middle": [],
"last": "Morrone",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "De Laroussilhe",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Attariyan",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Model pruning enables efficient federated learning on edge devices",
"authors": [
{
"first": "Yuang",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Shiqiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Valls",
"suffix": ""
},
{
"first": "Bong",
"middle": [
"Jun"
],
"last": "Ko",
"suffix": ""
},
{
"first": "Wei-Han",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kin",
"middle": [
"K"
],
"last": "Leung",
"suffix": ""
},
{
"first": "Leandros",
"middle": [],
"last": "Tassiulas",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.12326"
]
},
"num": null,
"urls": [],
"raw_text": "Yuang Jiang, Shiqiang Wang, Victor Valls, Bong Jun Ko, Wei-Han Lee, Kin K Leung, and Leandros Tas- siulas. 2019. Model pruning enables efficient fed- erated learning on edge devices. arXiv preprint arXiv:1909.12326.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Fastfood -computing hilbert space expansions in loglinear time",
"authors": [
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Tam\u00e1s",
"middle": [],
"last": "Sarl\u00f3s",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 30th International Conference on Machine Learning, ICML 2013",
"volume": "28",
"issue": "",
"pages": "244--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc V. Le, Tam\u00e1s Sarl\u00f3s, and Alexander J. Smola. 2013. Fastfood -computing hilbert space expansions in loglinear time. In Proceedings of the 30th Interna- tional Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, volume 28 of JMLR Workshop and Conference Proceedings, pages 244-252. JMLR.org.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Maximum likelihood estimation of intrinsic dimension",
"authors": [
{
"first": "Elizaveta",
"middle": [],
"last": "Levina",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bickel",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in Neural Information Processing Systems",
"volume": "17",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizaveta Levina and Peter Bickel. 2005. Maximum likelihood estimation of intrinsic dimension. In Ad- vances in Neural Information Processing Systems, volume 17. MIT Press.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Measuring the intrinsic dimension of objective landscapes",
"authors": [
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heerad",
"middle": [],
"last": "Farkhoor",
"suffix": ""
},
{
"first": "Rosanne",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. 2018. Measuring the intrinsic dimension of objective landscapes. In International Conference on Learning Representations.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. 2020a. Federated optimization in heterogeneous networks",
"authors": [
{
"first": "Tian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Anit",
"middle": [],
"last": "Kumar Sahu",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Maziar",
"middle": [],
"last": "Sanjabi",
"suffix": ""
},
{
"first": "Ameet",
"middle": [],
"last": "Talwalkar",
"suffix": ""
},
{
"first": "Virginia",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "ML Sys",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar San- jabi, Ameet Talwalkar, and Virginia Smith. 2020a. Federated optimization in heterogeneous networks. In ML Sys.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "On the convergence of fedavg on non-iid data",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kaixuan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wenhao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shusen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhihua",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. 2020b. On the convergence of fedavg on non-iid data. In International Conference on Learning Representations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Acceleration for compressed gradient descent in distributed and federated optimization",
"authors": [
{
"first": "Zhize",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Kovalev",
"suffix": ""
},
{
"first": "Xun",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Richtarik",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 37th International Conference on Machine Learning",
"volume": "119",
"issue": "",
"pages": "5895--5904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhize Li, Dmitry Kovalev, Xun Qian, and Peter Richtarik. 2020c. Acceleration for compressed gradi- ent descent in distributed and federated optimization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5895-5904. PMLR.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "From local sgd to local fixed-point methods for federated learning",
"authors": [
{
"first": "Grigory",
"middle": [],
"last": "Malinovsky",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Kovalev",
"suffix": ""
},
{
"first": "Elnur",
"middle": [],
"last": "Gasanov",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Condat",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Richt\u00e1rik",
"suffix": ""
}
],
"year": 2020,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grigory Malinovsky, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, and Peter Richt\u00e1rik. 2020. From local sgd to local fixed-point methods for federated learning. In ICML.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Communication-efficient learning of deep networks from decentralized data",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "Mcmahan",
"suffix": ""
},
{
"first": "Eider",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Hampson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Aguera Y Arcas",
"suffix": ""
}
],
"year": 2017,
"venue": "Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "1273--1282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273-1282. PMLR.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Agnostic federated learning",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "Sivek",
"suffix": ""
},
{
"first": "Ananda Theertha",
"middle": [],
"last": "Suresh",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "4615--4625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri, Gary Sivek, and Ananda Theertha Suresh. 2019. Agnostic federated learning. In In- ternational Conference on Machine Learning, pages 4615-4625. PMLR.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The intrinsic dimension of images and its impact on learning",
"authors": [
{
"first": "Phil",
"middle": [],
"last": "Pope",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Abdelkader",
"suffix": ""
},
{
"first": "Micah",
"middle": [],
"last": "Goldblum",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Goldstein",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phil Pope, Chen Zhu, Ahmed Abdelkader, Micah Gold- blum, and Tom Goldstein. 2021. The intrinsic di- mension of images and its impact on learning. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked languagemodels",
"authors": [
{
"first": "Elad",
"middle": [],
"last": "Ravfogel",
"suffix": ""
},
{
"first": "Shauli",
"middle": [],
"last": "Ben-Zaken",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elad Ravfogel, Shauli Ben-Zaken, and Yoav Gold- berg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language- models. arXiv preprint.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Adaptive federated optimization",
"authors": [
{
"first": "Sashank",
"middle": [
"J"
],
"last": "Reddi",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Charles",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Garrett",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Rush",
"suffix": ""
},
{
"first": "Jakub",
"middle": [],
"last": "Kone\u010dn\u00fd",
"suffix": ""
},
{
"first": "Sanjiv",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Hugh",
"middle": [
"Brendan"
],
"last": "McMahan",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Kone\u010dn\u00fd, Sanjiv Kumar, and Hugh Brendan McMahan. 2021. Adap- tive federated optimization. In International Confer- ence on Learning Representations.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Fetchsgd: Communication-efficient federated learning with sketching",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Rothchild",
"suffix": ""
},
{
"first": "Ashwinee",
"middle": [],
"last": "Panda",
"suffix": ""
},
{
"first": "Enayat",
"middle": [],
"last": "Ullah",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Ivkin",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Stoica",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Braverman",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "Raman",
"middle": [],
"last": "Arora",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 37th International Conference on Machine Learning",
"volume": "2020",
"issue": "",
"pages": "8253--8265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph Gonzalez, and Raman Arora. 2020. Fetchsgd: Communication-efficient federated learning with sketching. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 8253-8265. PMLR.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empiri- cal methods in natural language processing, pages 1631-1642.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Fedproto: Federated prototype learning over heterogeneous devices",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tianyi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2105.00243"
]
},
"num": null,
"urls": [],
"raw_text": "Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, and Jing Jiang. 2021. Fedproto: Federated prototype learning over heterogeneous devices. arXiv preprint arXiv:2105.00243.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Dimitris Papailiopoulos, and Yasaman Khazaeni. 2020. Federated learning with matched averaging",
"authors": [
{
"first": "Hongyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Yurochkin",
"suffix": ""
},
{
"first": "Yuekai",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Papailiopoulos",
"suffix": ""
},
{
"first": "Yasaman",
"middle": [],
"last": "Khazaeni",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyi Wang, Mikhail Yurochkin, Yuekai Sun, Dim- itris Papailiopoulos, and Yasaman Khazaeni. 2020. Federated learning with matched averaging. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Terngrad: Ternary gradients to reduce communication in distributed deep learning",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Cong",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Chunpeng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yandan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiran",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1509--1519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yan- dan Wang, Yiran Chen, and Hai Li. 2017. Tern- grad: Ternary gradients to reduce communication in distributed deep learning. In Advances in Neural Information Processing Systems 30: Annual Confer- ence on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 1509-1519.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dba: Distributed backdoor attacks against federated learning",
"authors": [
{
"first": "Chulin",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Keli",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Pin-Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li. 2020. Dba: Distributed backdoor attacks against federated learning. In International Conference on Learning Representations.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Federated learning with only positive labels",
"authors": [
{
"first": "Felix",
"middle": [
"X"
],
"last": "Yu",
"suffix": ""
},
{
"first": "Ankit",
"middle": [],
"last": "Singh Rawat",
"suffix": ""
},
{
"first": "Aditya",
"middle": [
"Krishna"
],
"last": "Menon",
"suffix": ""
},
{
"first": "Sanjiv",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix X. Yu, Ankit Singh Rawat, Aditya Krishna Menon, and Sanjiv Kumar. 2020. Federated learning with only positive labels.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Personalizing dialogue agents: I have a dog, do you have pets too?",
"authors": [
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational guistics",
"volume": "",
"issue": "",
"pages": "2204--2213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational guistics (Vol- ume 1: Long Papers), pages 2204-2213.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Deep leakage from gradients",
"authors": [
{
"first": "Ligeng",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhijian",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Song",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "14747--14756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ligeng Zhu, Zhijian Liu, and Song Han. 2019. Deep leakage from gradients. In Advances in Neural In- formation Processing Systems 32: Annual Confer- ence on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 14747-14756.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Training curves on CIFAR-10 with static and time varying dimension at the same intrinsic dimensionality.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Results on computer vision benchmarks. Both static and time-varying intrinsic gradient dimension significantly outperform perform work, with time-varying intrinsic compression performing best. On the right, we see that time-varying and static compression perform similarly at the beginning of training, but timevarying outperforms static eventually but are tied at the beginning, and that time-varying outperforms static with equal space. For the FedAvg and uncompressed methods with compression rates above 1, compression was performed by training for fewer epochs.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Results on NLP benchmarks. Note that while K-varying appears to perform poorly on PersonaChat, the upload performance is much stronger. See Appendix D for these full results.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Reconstruction from full gradient. (c) Reconstruction from gradient with intrinsic compression.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"content": "<table/>",
"text": "Static Intrinsic Gradient Compression input: learning rate \u03b7, timesteps T , local batch size \u2113, clients per round W Create matrix A \u2208 R D\u00d7d with E[AA \u22a4 ] = ID.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table/>",
"text": "Bandwidth and Performance Comparisons. The bandwidth refers to that of that used for each client.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table><tr><td>Uncompressed</td><td>13.9</td><td>1</td><td>1</td><td>1</td></tr><tr><td>FedAvg (2 local iters) FedAvg (5 local iters)</td><td>16.3 20.1</td><td>2 5</td><td>2 5</td><td>2 5</td></tr><tr><td>Local Top-K (k = 50, 000) Local Top-K (k = 500, 000)</td><td>19.3 17.1</td><td>30.3 3.6</td><td>2490 248</td><td>60 7.1</td></tr><tr><td>FetchSGD (k = 25, 000) FetchSGD (k = 50, 000)</td><td>14.8 15.8</td><td>3.8 2.4</td><td>100 10</td><td>7.3 3.9</td></tr><tr><td>Ours (static) Ours (K-subspace) Ours (static) Ours (K-subspace) Ours (static) Ours (K-subspace) Ours (static) Ours (K-subspace) Ours (static)</td><td>16384 27.7 16384 19.6 65536 20.6 65536 17.8 262144 17.6 262144 16.6 1048576 15.8 1048576 15.4 4194304 14.8</td><td>7595 7595 1900 1900 475 475 119 119 29.7</td><td>7595 949 1900 237 475 59.3 119 14.8 29.7</td><td>7595 1688 1900 422 475 105 119 26.3 29.7</td></tr></table>",
"text": "NameIntrinsic Dim. PPL Up. Comp. Down. Comp. Total Comp.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table><tr><td>e {In parallel on clients {ci} W = 0, \u03a3 final e for t = 1, 2 loop i=1 } Download \u03a3 current e , \u03a3 final e\u22121 , calculate current \u03b8 c i = 0 e = \u03b8 c i e\u22121 + Ae\u22121(\u03a3 final e\u22121 \u2212 \u03a3 last ) + Ae(\u03a3 current e</td></tr></table>",
"text": "Algorithm 2 Time-Varying Intrinsic Gradient Compression input: learning rate \u03b7, timesteps T , local batch size \u2113, clients per round W for e = 1, 2, \u2022 \u2022 \u2022 E do Create matrix Aei.i.d. \u223c A where A \u2208 R D\u00d7d with E[AA \u22a4 ] = ID. Current, Final Vector: \u03a3 current \u2022 \u2022 \u2022 T doRandomly select W clients c1, . . . cW .",
"type_str": "table",
"html": null,
"num": null
}
}
}
}