diff --git a/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_content_list.json b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..38a3af970c7f9c882a6cfc496650d1526680b581
--- /dev/null
+++ b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:57901cea50aac961ff94301329296a5618c3942d0b5608509d0cb78c83527c2c
+size 483889
diff --git a/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_model.json b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..030caca179daf98037b04880fccc2d77b467b0a7
--- /dev/null
+++ b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:822b4c46a65a9099da650797bc5891c1a1b68ae13e69328b8bac4e7d2e52caec
+size 582544
diff --git a/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_origin.pdf b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f2908478250f89f2e6be1330fa840f27b98dea2f
--- /dev/null
+++ b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/d92b5707-9fed-4537-a036-dfee4265260d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b489276f6f095743d36e0142c2f4e24e0242d0860e1f639c2cc17709e2d70d9
+size 1425928
diff --git a/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/full.md b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c96eb74f62b3993b6937dbe49f65cdb7f3e9db00
--- /dev/null
+++ b/2directiontheoreticallyfasterdistributedtrainingwithbidirectionalcommunicationcompression/full.md
@@ -0,0 +1,2958 @@
# 2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression

Alexander Tyurin

KAUST

Saudi Arabia

alexandertiurin@gmail.com

Peter Richtárik

KAUST

Saudi Arabia

richtarik@gmail.com

# Abstract

We consider distributed convex optimization problems in the regime when the communication between the server and the workers is expensive in both uplink and downlink directions. We develop a new and provably accelerated method, which we call 2Direction, based on fast bidirectional compressed communication and a new bespoke error-feedback mechanism which may be of independent interest. Indeed, we find that the EF and EF21-P mechanisms (Seide et al., 2014; Gruntkowska et al., 2023), which have considerable success in the design of efficient non-accelerated methods, are not appropriate for accelerated methods.
In particular, we prove that 2Direction improves the previous state-of-the-art communication complexity $\widetilde{\Theta}\left(K\times \left(L / \alpha \mu +L_{\mathrm{max}}\omega /n\mu +\omega\right)\right)$ (Gruntkowska et al., 2023) to $\widetilde{\Theta} (K\times (\sqrt{L(\omega + 1) / \alpha\mu} +\sqrt{L_{\mathrm{max}}\omega^2 / n\mu} +1 / \alpha +\omega))$ in the $\mu$-strongly-convex setting, where $L$ and $L_{\mathrm{max}}$ are smoothness constants, $n$ is the number of workers, $\omega$ and $\alpha$ are the compression errors of the Rand$K$ and Top$K$ sparsifiers (as examples), and $K$ is the number of coordinates/bits that the server and workers send to each other. Moreover, our method is the first that improves upon the communication complexity of the vanilla accelerated gradient descent (AGD) method (Nesterov, 2018). We obtain similar improvements in the general convex regime as well.

# 1 Introduction

We consider convex optimization problems in the centralized distributed setting. These types of problems appear in federated learning (Konečný et al., 2016; McMahan et al., 2017) and distributed optimization (Ramesh et al., 2021). In this setting, one of the main problems is the communication bottleneck: the connection link between the server and the workers can be very slow. We focus our attention on methods that aim to address this issue by applying lossy compression to the communicated messages (Alistarh et al., 2017; Mishchenko et al., 2019; Gruntkowska et al., 2023).

# 1.1 The problem

Formally, we consider the optimization problem

$$
\min_{x \in \mathbb{R}^{d}} \left\{ f(x) := \frac{1}{n} \sum_{i=1}^{n} f_{i}(x) \right\}, \tag{1}
$$

where $n$ is the number of workers and $f_{i}:\mathbb{R}^{d}\to \mathbb{R}$ are smooth convex functions for all $i\in [n]\coloneqq \{1,\ldots ,n\}$.
We consider the centralized distributed optimization setting in which each $i^{\mathrm{th}}$ worker holds the function $f_{i}$, and all workers are directly connected to a server (Kairouz et al., 2021). In general, we want to find a (possibly random) point $\widehat{x}$ such that $\mathbb{E}[f(\widehat{x})] - f(x^{*})\leq \varepsilon$, where $x^{*}$ is an optimal point. In the strongly convex setup, we also want to guarantee that $\mathbb{E}[\| \widetilde{x} - x^{*}\|^{2}] \leq \varepsilon$ for some point $\widetilde{x}$.

Virtually all other theoretical works in this genre assume that, compared to the worker-to-server (w2s) communication cost, the server-to-workers (s2w) broadcast is so fast that it can be ignored. We lift this limitation and instead associate a relative cost $r \in [0,1]$ with the two directions of communication. If $r = 0$, then s2w communication is free; if $r = 1$, then w2s communication is free; and if $r = 1/2$, then the s2w and w2s costs are equal. All our theoretical results hold for any $r \in [0,1]$. We formalize and elaborate upon this setup in Section 2.

# 1.2 Assumptions

Throughout the paper we rely on several standard assumptions on the functions $f_{i}$ and $f$.

Assumption 1.1. Functions $f_{i}$ are $L_{i}$-smooth, i.e., $\| \nabla f_{i}(x) - \nabla f_{i}(y)\| \leq L_{i}\| x - y\|$ for all $x,y \in \mathbb{R}^d$, for all $i \in [n]$. We let $L_{\max} \coloneqq \max_{i\in [n]}L_{i}$. Further, let $\widehat{L} > 0$ be a constant such that $\frac{1}{n}\sum_{i = 1}^{n}\| \nabla f_i(x) - \nabla f_i(y)\| ^2\leq \widehat{L}^2\| x - y\| ^2$ for all $x,y \in \mathbb{R}^d$.

Note that if the functions $f_{i}$ are $L_{i}$-smooth for all $i \in [n]$, then $\widehat{L} \leq L_{\max}$.

Assumption 1.2. Function $f$ is $L$-smooth, i.e., $\| \nabla f(x) - \nabla f(y) \| \leq L \| x - y \|$ for all $x, y \in \mathbb{R}^d$.

Assumption 1.3.
Functions $f_i$ are convex for all $i \in [n]$, and $f$ is $\mu$-strongly convex with $\mu \geq 0$, attaining a minimum at some point $x^* \in \mathbb{R}^d$.

It is known that the above smoothness constants are related in the following way.

Lemma 1.4 (Gruntkowska et al. (2023)). If Assumptions 1.2, 1.1 and 1.3 hold, then $\widehat{L} \leq L_{\max} \leq nL$ and $L \leq \widehat{L} \leq \sqrt{L_{\max} L}$.

# 2 Motivation: From Unidirectional to Bidirectional Compression

In this work, we distinguish between worker-to-server (w2s = uplink) and server-to-worker (s2w = downlink) communication costs, and define the w2s and s2w communication complexities of methods in the following natural way.

Definition 2.1. For a centralized distributed method $\mathcal{M}$ aiming to solve problem (1), the communication complexity $\mathfrak{m}_{\mathcal{M}}^{\mathrm{w2s}}$ is the expected number of coordinates/floats$^1$ that each worker sends to the server to solve problem (1). The quantity $\mathfrak{m}_{\mathcal{M}}^{\mathrm{s2w}}$ is the expected number of floats/coordinates the server broadcasts to the workers to solve problem (1). If $\mathfrak{m}_{\mathcal{M}}^{\mathrm{w2s}} = \mathfrak{m}_{\mathcal{M}}^{\mathrm{s2w}}$, then we use the simplified notation $\mathfrak{m}_{\mathcal{M}} := \mathfrak{m}_{\mathcal{M}}^{\mathrm{s2w}} = \mathfrak{m}_{\mathcal{M}}^{\mathrm{w2s}}$.

Let us illustrate the above concepts on the simplest baseline: vanilla gradient descent (GD). It is well known (Nesterov, 2018) that for $L$-smooth, $\mu$-strongly convex problems, GD returns an $\varepsilon$-solution after $\mathcal{O}\left(\frac{L}{\mu}\log \frac{1}{\varepsilon}\right)$ iterations. In each iteration, the workers and the server communicate all $\Theta(d)$ coordinates to each other (since no compression is applied). Therefore, the communication complexity of GD is $\mathfrak{m}_{\mathrm{GD}} = \Theta\left(\frac{dL}{\mu}\log \frac{1}{\varepsilon}\right)$.
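This iteration count can be sanity-checked numerically. The sketch below (our own illustration, not part of the paper's experiments; all variable names are hypothetical) runs uncompressed GD on a diagonal quadratic, where $L$ and $\mu$ are the extreme eigenvalues, and compares the observed number of iterations, each exchanging all $d$ coordinates, against the $\mathcal{O}\left(\frac{L}{\mu}\log\frac{1}{\varepsilon}\right)$ prediction:

```python
import numpy as np

# Hypothetical illustration: GD on f(x) = (1/2) x^T A x, whose smoothness and
# strong-convexity constants are the largest and smallest eigenvalues of A.
rng = np.random.default_rng(0)
d = 20
eigs = np.linspace(1.0, 100.0, d)       # mu = 1, L = 100
A = np.diag(eigs)
L, mu = eigs.max(), eigs.min()

x0 = rng.standard_normal(d)
f = lambda x: 0.5 * x @ A @ x           # minimized at x* = 0 with f(x*) = 0

eps = 1e-8
x, iters = x0.copy(), 0
while f(x) > eps:
    x = x - (1.0 / L) * (A @ x)         # one round: all d coordinates exchanged
    iters += 1

# Theory predicts at most on the order of (L/mu) * log(f(x0)/eps) iterations.
bound = (L / mu) * np.log(f(x0) / eps)
assert iters <= bound
```

Multiplying the observed iteration count by the $\Theta(d)$ coordinates exchanged per round recovers the $\mathfrak{m}_{\mathrm{GD}}$ count discussed above.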
The same reasoning applies to the accelerated gradient method (AGD) (Nesterov, 2018), whose communication complexity is $\mathfrak{m}_{\mathrm{AGD}} = \Theta\left(d\sqrt{L / \mu} \log \frac{1}{\varepsilon}\right)$.

# 2.1 Compression mappings

In the literature, researchers often use the following two families of compressors:

Definition 2.2. A (possibly) stochastic mapping $\mathcal{C}:\mathbb{R}^d\to \mathbb{R}^d$ is a biased compressor if there exists $\alpha \in (0,1]$ such that

$$
\mathbb{E}\left[\| \mathcal{C}(x) - x \|^{2}\right] \leq (1 - \alpha)\| x \|^{2}, \quad \forall x \in \mathbb{R}^{d}. \tag{2}
$$

Definition 2.3. A stochastic mapping $\mathcal{C}:\mathbb{R}^d\to \mathbb{R}^d$ is an unbiased compressor if there exists $\omega \geq 0$ such that

$$
\mathbb{E}[\mathcal{C}(x)] = x, \quad \mathbb{E}\left[\| \mathcal{C}(x) - x \|^{2}\right] \leq \omega \| x \|^{2}, \quad \forall x \in \mathbb{R}^{d}. \tag{3}
$$

Table 1: Communication Rounds in the Strongly Convex Case. The number of communication rounds and round costs to get an $\varepsilon$-solution $(\mathbb{E}\left[\|\widehat{x} - x^{*}\|^{2}\right]\leq \varepsilon)$ up to logarithmic factors. The table shows the most relevant bidirectionally compressed methods, ordered by the total communication complexity # Communication Rounds $\times$ Round Cost (see (4) for details).

i. The parameter $r$ weights the importance/speed of the uplink and downlink connections. When $r = 1/2$, the uplink and downlink speeds are equal.
ii. The parameters $K_{\omega}$ and $K_{\alpha}$ are the expected densities (Definition 2.5) of the compressors $\mathcal{C}^D \in \mathbb{U}(\omega)$ and $\mathcal{C}^P \in \mathbb{B}(\alpha)^{\mathrm{(a)}}$ used by the workers and the server, respectively.
Less formally, $K_{\omega}$ and $K_{\alpha}$ are the number of coordinates/bits that the workers and the server send to each other in each communication round. + +
| Method | # Communication Rounds | Round Cost(c) |
|---|---|---|
| DORE, Artemis, MURANA(a) (Liu et al., 2020; Philippenko and Dieuleveut, 2020; Condat and Richtárik, 2022) | $\widetilde{\Omega}\left(\frac{\omega}{\alpha n}\frac{L_{\max}}{\mu}\right)^{(f)}$ | $(1-r)K_{\omega}+rK_{\alpha}$ |
| MCM(a) (Philippenko and Dieuleveut, 2021) | $\widetilde{\Omega}\left(\left(\frac{1}{\alpha^{3/2}}+\frac{\omega^{1/2}}{\alpha\sqrt{n}}+\frac{\omega}{n}\right)\frac{L_{\max}}{\mu}\right)^{(f)}$ | $(1-r)K_{\omega}+rK_{\alpha}$ |
| GD (Nesterov, 2018) | $\frac{L}{\mu}$ | $d$ |
| EF21-P + DIANA (Gruntkowska et al., 2023) | $\frac{L}{\alpha\mu}+\frac{L_{\max}\omega}{n\mu}+\omega$ | $(1-r)K_{\omega}+rK_{\alpha}$ |
| AGD (Nesterov, 2018) | $\sqrt{\frac{L}{\mu}}$ | $d$ |
| 2Direction (Remark 5.3)(b), (Theorem 5.2) | $\sqrt{\frac{L(\omega+1)}{\alpha\mu}}+\sqrt{\frac{L_{\max}\omega^2}{n\mu}}+\frac{1}{\alpha}+\omega$ | $(1-r)K_{\omega}+rK_{\alpha}$ |
| 2Direction (Remark 5.5)(b), (Theorem 5.4) (requires $L_{\max}/L$)(d) | $\sqrt{\frac{L\max\{1,r(\omega+1)\}}{\alpha\mu}}+\sqrt{\frac{L^{2/3}L_{\max}^{1/3}(\omega+1)}{\alpha n^{1/3}\mu}}+\sqrt{\frac{L^{1/2}L_{\max}^{1/2}(\omega+1)^{3/2}}{\sqrt{\alpha n}\,\mu}}+\sqrt{\frac{L_{\max}\omega^2}{n\mu}}+\frac{1}{\alpha}+\omega$ | $(1-r)K_{\omega}+rK_{\alpha}$ |
| Notation | Meaning |
|---|---|
| $g = \mathcal{O}(f)$ | There exists $C > 0$ such that $g(z) \leq C \times f(z)$ for all $z \in \mathcal{Z}$ |
| $g = \Omega(f)$ | There exists $C > 0$ such that $g(z) \geq C \times f(z)$ for all $z \in \mathcal{Z}$ |
| $g = \Theta(f)$ | $g = \mathcal{O}(f)$ and $g = \Omega(f)$ |
| $g = \widetilde{\mathcal{O}}(f)$ | There exists $C > 0$ such that $g(z) \leq C \times f(z) \times \log(\mathrm{poly}(z))$ for all $z \in \mathcal{Z}$ |
| $g = \widetilde{\Omega}(f)$ | There exists $C > 0$ such that $g(z) \geq C \times f(z) \times \log(\mathrm{poly}(z))$ for all $z \in \mathcal{Z}$ |
| $g = \widetilde{\Theta}(f)$ | $g = \widetilde{\mathcal{O}}(f)$ and $g = \widetilde{\Omega}(f)$ |
| $\{a, \ldots, b\}$ | Set $\{i \in \mathbb{Z} \mid a \leq i \leq b\}$ |
| $[n]$ | $\{1, \ldots, n\}$ |
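As a concrete illustration of Definitions 2.2 and 2.3, the sketch below (our own, hypothetical implementation; function names are not from the paper) implements the Top$K$ sparsifier, a biased compressor with $\alpha = K/d$, and the Rand$K$ sparsifier, an unbiased compressor with $\omega = d/K - 1$, and checks properties (2) and (3) empirically:

```python
import numpy as np

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """TopK sparsifier: keep the k largest-magnitude entries. Biased, alpha = k/d."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """RandK sparsifier: keep k uniformly random entries scaled by d/k. Unbiased, omega = d/k - 1."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * x[idx]
    return out

rng = np.random.default_rng(0)
d, k = 100, 10
x = rng.standard_normal(d)

# Property (2): ||C(x) - x||^2 <= (1 - alpha) ||x||^2 with alpha = k/d (TopK is deterministic).
alpha = k / d
assert np.linalg.norm(top_k(x, k) - x) ** 2 <= (1 - alpha) * np.linalg.norm(x) ** 2

# Property (3), checked by Monte Carlo: E[C(x)] = x and
# E||C(x) - x||^2 <= omega ||x||^2 with omega = d/k - 1.
omega = d / k - 1
samples = np.stack([rand_k(x, k, rng) for _ in range(20000)])
assert np.allclose(samples.mean(axis=0), x, atol=0.3)
mse = (np.linalg.norm(samples - x, axis=1) ** 2).mean()
assert mse <= 1.05 * omega * np.linalg.norm(x) ** 2
```

For Rand$K$ the variance bound in (3) holds with equality, which is why the Monte Carlo check above allows a small sampling margin.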