| # Adapter Pruning using Tropical Characterization | |
| Rishabh Bhardwaj Tushar Vaidya Soujanya Poria | |
| Singapore University of Technology and Design, Singapore | |
| Nanyang Technological University, Singapore | |
rishabh_bhardwaj@mymail.sutd.edu.sg, tushar.voidya@ntu.edu.sg, sporia@sutd.edu.sg
| # Abstract | |
| Adapters are widely popular parameter-efficient transfer learning approaches in natural language processing that insert trainable modules in between layers of a pre-trained language model. Apart from several heuristics, however, there has been a lack of studies analyzing the optimal number of adapter parameters needed for downstream applications. In this paper, we propose an adapter pruning approach by studying the tropical characteristics of trainable modules. We cast it as an optimization problem that aims to prune parameters from the adapter layers without changing the orientation of underlying tropical hypersurfaces. Our experiments on five NLP datasets show that tropical geometry tends to identify more relevant parameters to prune when compared with the magnitude-based baseline, while a combined approach works best across the tasks. | |
| # 1 Introduction | |
| With the increase in network sizes, we are observing an ever-increasing space and computational demand for models needed to solve a given task. To tackle this, model compression (Cheng et al., 2017) techniques are becoming continuously popular which retain the most important learning from the full model while reducing the size of the network either by pruning or distillation. | |
Transfer learning approaches, such as adapters (Houlsby et al., 2019), are a parameter-efficient alternative to full model fine-tuning which obviates the need to maintain a task-specific copy of the base language model (LM). Adapters insert simple modules in between layers of an LM to adapt the pre-trained representation for a given downstream NLP task. However, there is a lack of research on pruning adapter modules to further enhance their parameter efficiency. We hypothesize that adapter weights can be pruned significantly without compromising the performance observed in the unpruned state; this motivates the proposed approach.
In this work, we propose a novel approach to pruning adapter layers without any iterative fine-tuning of the model parameters on downstream tasks. Using tropical algebra, we study the (duals of) hypersurfaces generated by adapter modules in high-dimensional space. As a pruning objective, we aim to minimize the magnitude of adapter weights while constraining the change in hypersurface geometry to be small.
Related works include adapter pruning using the lottery ticket hypothesis (Wu et al., 2022; Frankle and Carbin, 2018), which performs iterative pruning—a few gradient steps, prune, and reset the parameters to initial weights. Rücklé et al. (2020) drop adapters from the lower transformer layers. While these works are interesting, we provide a more concrete angle to prune adapter layers—prune by preserving the hypersurface geometry. We extend an insightful analysis of the tropical geometry of neural networks (Zhang et al., 2018; Alfarra et al., 2022) to adapters.
| # 2 Background | |
Adapter Operations. We use the adapter setup proposed by Pfeiffer et al. (2020), which inserts small modules after the FFN add-and-layer-norm sub-layer.
| $$ | |
| \boldsymbol {h} \leftarrow \boldsymbol {h} + f (\boldsymbol {h} \boldsymbol {W} _ {d}) \boldsymbol {W} _ {u} \tag {1} | |
| $$ | |
It consists of a down-projection $W_{d} \in \mathbb{R}^{d \times r}$, an up-projection $W_{u} \in \mathbb{R}^{r \times d}$, and a ReLU activation function $f(\cdot)$, where typically $r < d$.
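To make the operation concrete, here is a minimal NumPy sketch of Eq. (1); the function name, shapes, and initialization are illustrative assumptions, not from the paper:

```python
import numpy as np

def adapter_forward(h, W_d, W_u):
    """Bottleneck adapter of Eq. (1): h <- h + ReLU(h W_d) W_u."""
    return h + np.maximum(h @ W_d, 0.0) @ W_u  # f(.) = ReLU

# Toy shapes: hidden size d = 8, bottleneck r = 2 (r < d).
rng = np.random.default_rng(0)
d, r = 8, 2
h = rng.normal(size=(1, d))
W_d = rng.normal(size=(d, r))   # down-projection
W_u = np.zeros((r, d))          # up-projection (zero-initialised)
out = adapter_forward(h, W_d, W_u)
```

With $W_u$ initialised to zero, the adapter starts as an identity map over the frozen representation, a common initialization choice for such residual modules.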
| Tropical Arithmetic. Tropical algebra is a variant of classical algebra where basic arithmetic operations are redefined. The tropical sum $\oplus$ of two numbers represents their maximum and the tropical product $\odot$ represents a classical addition<sup>1</sup>. Thus, | |
| $$ | |
| x \oplus y = \max \{x, y \} | |
| $$ | |
| $$ | |
| x \odot y = x + y | |
| $$ | |
For example, $2 \oplus 5 = 5$ and $2 \odot 5 = 7$. Axioms and the order of arithmetic operations in tropical algebra follow their classical counterparts; thus addition is commutative and multiplication is distributive over addition. We relegate detailed discussions of tropical algebra, polynomials, and hypersurfaces to Appendix A.
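The two tropical operations can be stated directly in code; a small illustrative sketch:

```python
NEG_INF = float("-inf")  # additive identity of the tropical semiring

def t_sum(x, y):
    """Tropical sum: x (+) y = max{x, y}."""
    return max(x, y)

def t_prod(x, y):
    """Tropical product: x (.) y = x + y (classical addition)."""
    return x + y

# Examples from the text: 2 (+) 5 = 5 and 2 (.) 5 = 7.
# -inf is the additive identity, 0 is the multiplicative identity.
```

Distributivity, for instance, holds as in classical algebra: `t_prod(2, t_sum(3, 4))` equals `t_sum(t_prod(2, 3), t_prod(2, 4))`.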
Notations used: Henceforth, we denote $W_{d}$, $W_{u}$, $h$ by $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{x}$, respectively; $\mathbf{B}^{+} := \max \{\mathbf{B}, \mathbf{0}\}$; $\mathbf{B}^{-} := \max \{-\mathbf{B}, \mathbf{0}\}$; $b^{i}$ denotes the $i$-th row of $\mathbf{B}$; $b^{i+} := \max \{b^{i}, \mathbf{0}\}$, $b^{i-} := \max \{-b^{i}, \mathbf{0}\}$; $\operatorname{Diag}[u]$ arranges $u$ in a diagonal matrix; $||\mathbf{G}||_{1,1} := \sum_{i=1}^{d} ||\mathbf{G}(i,:)||_{1}$; $||\cdot||_{F}$ denotes the Frobenius norm.
| # 3 Tropical Adapter Pruning | |
Given a frozen language model adapted to a specific task using adapter layers, we divide our approach into two steps: 1) finding the adapter weights $P_T$ that are crucial to preserving the tropical adapter hypersurface, by solving a simple optimization problem; 2) pruning the smallest-magnitude adapter weights that do not lie in $P_T$. Next, we describe step 1, which is the core of the pruning method.
A bottleneck adapter block can be expressed as $f(\mathbf{x}) = \mathbf{B} \max \{\mathbf{A}\mathbf{x}, \mathbf{0}\}$. Since $f(\mathbf{x})$ is not itself a tropical polynomial and thus does not define a tropical hypersurface, we rewrite it as the difference of two tropical polynomials, $f(\mathbf{x}) = H(\mathbf{x}) - Q(\mathbf{x})$, following the analysis of tropical rational functions by Alfarra et al. (2022). We therefore focus on a relatively lenient problem, i.e., identifying weights that preserve the tropical hypersurfaces defined by $H(\mathbf{x})$ and $Q(\mathbf{x})$. Let $\mathcal{H}(\mathbf{x})$ and $\mathcal{Q}(\mathbf{x})$ be the respective hypersurfaces; one can choose a sparse pair $\hat{\mathbf{A}}$, $\hat{\mathbf{B}}$ from the set of matrices obtained by solving the following optimization problem
| $$ | |
| \min _ {\hat {\mathbf {A}}, \hat {\mathbf {B}}} d (\mathcal {H} (\mathbf {x}), \hat {\mathcal {H}} (\mathbf {x})) + d (\mathcal {Q} (\mathbf {x}), \hat {\mathcal {Q}} (\mathbf {x})) | |
| $$ | |
where $d(\cdot, \cdot)$ denotes the distance between two geometric objects, and $\hat{\mathcal{H}}$ and $\hat{\mathcal{Q}}$ are the hypersurfaces obtained by substituting $\mathbf{A}$ and $\mathbf{B}$ with $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$ in $f(\mathbf{x})$. Instead of preserving the orientation of $\mathcal{H}(\mathbf{x})$ and $\mathcal{Q}(\mathbf{x})$ directly, we aim to preserve the orientation of their respective dual objects, denoted $\delta(H(\mathbf{x}))$ and $\delta(Q(\mathbf{x}))$. Thus,
| $$ | |
| \min _ {\hat {\mathbf {A}}, \hat {\mathbf {B}}} d \Big (\delta (\mathcal {H} (\mathbf {x})), \delta (\hat {\mathcal {H}} (\mathbf {x})) \Big) + d \Big (\delta (\mathcal {Q} (\mathbf {x})), \delta (\hat {\mathcal {Q}} (\mathbf {x})) \Big) | |
| $$ | |
Assuming, without loss of generality, that the down-projection is bias-free$^2$, $\delta(\cdot)$ can be expressed in terms of the generator matrices $\mathbf{G}$ of zonotopes obtained from $\mathbf{A}$ and $\mathbf{B}$. To find sparse $\hat{\mathbf{A}}, \hat{\mathbf{B}}$, we add sparsity-inducing regularization terms to the objective. Finding adapter weights that preserve the hypersurface geometry can thus be cast as the following optimization problem:
| $$ | |
| \begin{array}{l} \min _ {\hat {\mathbf {A}}, \hat {\mathbf {B}}} \frac {1}{2} \left| \left| \hat {\mathbf {G}} _ {1} - \mathbf {G} _ {1} \right| \right| _ {F} ^ {2} + \frac {1}{2} \left| \left| \hat {\mathbf {G}} _ {2} - \mathbf {G} _ {2} \right| \right| _ {F} ^ {2} \\ + \lambda_ {1} \left| \left| \hat {\mathbf {G}} _ {1} \right| \right| _ {1, 1} + \lambda_ {2} \left| \left| \hat {\mathbf {G}} _ {2} \right| \right| _ {1, 1} \tag {2} \\ \end{array} | |
| $$ | |
where $\mathbf{G}_1 = \operatorname{Diag}[\boldsymbol{b}^{i+}] \mathbf{A}$; $\mathbf{G}_2 = \operatorname{Diag}[\boldsymbol{b}^{i-}] \mathbf{A}$, and
$$
\hat{\mathbf{G}}_{1} = \operatorname{Diag}[\hat{\boldsymbol{b}}^{i+}] \hat{\mathbf{A}}; \quad \hat{\mathbf{G}}_{2} = \operatorname{Diag}[\hat{\boldsymbol{b}}^{i-}] \hat{\mathbf{A}}.
$$
| We provide a derivation of the above function in Appendix B. It is important to note that in the pruning phase, we do not iteratively fine-tune adapter or LM parameters on the downstream task. | |
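As an illustration, the generator matrices can be assembled row by row from $\mathbf{B}$ and $\mathbf{A}$. The sketch below assumes, for concreteness, that $\mathbf{A} \in \mathbb{R}^{r \times d}$ is applied as $\mathbf{A}\mathbf{x}$ and $\mathbf{B} \in \mathbb{R}^{d \times r}$, so each $\operatorname{Diag}[b^{i+}]\mathbf{A}$ is an $r \times d$ matrix; the function name and shapes are our own:

```python
import numpy as np

def generator_matrices(A, B):
    """Stack G1[i] = Diag[b^{i+}] A and G2[i] = Diag[b^{i-}] A
    for every row b^i of the up-projection B."""
    B_pos = np.maximum(B, 0.0)   # B+
    B_neg = np.maximum(-B, 0.0)  # B-
    # broadcasting: (d, r, 1) * (1, r, d) -> one r x d generator matrix
    # per row of B, stacked along the first axis
    G1 = B_pos[:, :, None] * A[None, :, :]
    G2 = B_neg[:, :, None] * A[None, :, :]
    return G1, G2

# Example shapes: bottleneck r = 3, hidden size d = 4 (illustrative).
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))   # down-projection, applied as A x
B = rng.normal(size=(4, 3))   # up-projection
G1, G2 = generator_matrices(A, B)
```

Scaling rows of $\mathbf{A}$ by the diagonal of $b^{i\pm}$ via broadcasting avoids materialising the $r \times r$ diagonal matrices.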
| Algorithm 1: Tropical Adapter Pruning | |
Initialize: $T, \eta, \lambda_1, \lambda_2$
| Return: $\hat{\mathbf{A}},\hat{\mathbf{B}}$ | |
| $$ | |
| \hat {\mathbf {B}} ^ {+} \leftarrow \mathbf {B} ^ {+}, \hat {\mathbf {B}} ^ {-} \leftarrow \mathbf {B} ^ {-}; | |
| $$ | |
| for $t$ in $1,\ldots ,T$ do | |
| for $i$ in $1,\ldots ,r$ do | |
| if $t$ is even then | |
$$
\begin{array}{l} \hat{\mathbf{G}}_{1}^{i} = \operatorname{Diag}[\hat{\boldsymbol{b}}^{i+}] \hat{\mathbf{A}}; \\ loss_{1} = ||\hat{\mathbf{G}}_{1}^{i} - \mathbf{G}_{1}^{i}||_{F}^{2}; \\ loss_{2} = ||\hat{\mathbf{G}}_{1}||_{1,1}; \\ \ell = 0.5 * loss_{1} + \lambda_{1} * loss_{2}; \end{array}
$$
| else | |
$$
\begin{array}{l} \hat{\mathbf{G}}_{2}^{i} = \operatorname{Diag}[\hat{\boldsymbol{b}}^{i-}] \hat{\mathbf{A}}; \\ loss_{1} = ||\hat{\mathbf{G}}_{2}^{i} - \mathbf{G}_{2}^{i}||_{F}^{2}; \\ loss_{2} = ||\hat{\mathbf{G}}_{2}||_{1,1}; \\ \ell = 0.5 * loss_{1} + \lambda_{2} * loss_{2}; \end{array}
$$
| end | |
| < check convergence of combined loss > | |
| $$ | |
| \begin{array}{l} \hat {\mathbf {A}} \leftarrow \hat {\mathbf {A}} - \eta * \frac {\partial}{\partial (\hat {\mathbf {A}})} \ell ; \\ \hat {\mathbf {B}} \gets \hat {\mathbf {B}} - \eta * \frac {\partial}{\partial (\hat {\mathbf {B}})} \ell . \\ \end{array} | |
| $$ | |
| end | |
| end | |
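A runnable NumPy sketch of Algorithm 1 follows. The per-row losses mirror the pseudocode, but the closed-form (sub)gradients, shapes, and default hyperparameter values are our own illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def tropical_prune(A, B, T=50, eta=1e-2, lam1=1e-3, lam2=1e-3):
    """Gradient-descent minimisation of the Eq. (2) objective.
    A: (r, d) down-projection (applied as A x); B: (d, r) up-projection."""
    A_hat, B_hat = A.copy(), B.copy()
    losses = []
    for t in range(1, T + 1):
        for i in range(B.shape[0]):          # loop over rows of B
            if t % 2 == 0:                   # even step: positive parts (G1)
                u = np.maximum(B_hat[i], 0.0)
                u_ref = np.maximum(B[i], 0.0)
                lam, sgn = lam1, 1.0
            else:                            # odd step: negative parts (G2)
                u = np.maximum(-B_hat[i], 0.0)
                u_ref = np.maximum(-B[i], 0.0)
                lam, sgn = lam2, -1.0
            G_hat = u[:, None] * A_hat       # Diag[u] A_hat
            D = G_hat - u_ref[:, None] * A   # shift from the original generator
            losses.append(0.5 * (D ** 2).sum() + lam * np.abs(G_hat).sum())
            # subgradients of the loss w.r.t. A_hat and row i of B_hat
            R = D + lam * np.sign(G_hat)
            grad_A = u[:, None] * R
            grad_b = sgn * (u > 0) * (R * A_hat).sum(axis=1)
            A_hat -= eta * grad_A
            B_hat[i] -= eta * grad_b
    return A_hat, B_hat, losses

# Illustrative run on a tiny random adapter (r = 2, d = 5).
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 5))
B = rng.normal(size=(5, 2))
A_hat, B_hat, losses = tropical_prune(A, B)
```

The $\ell_1$ penalty drives entries of $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$ toward zero while the Frobenius term anchors the generators, so the combined loss decreases over iterations.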
| <table><tr><td rowspan="2"></td><td colspan="10">Task-1 (MELD)</td></tr><tr><td>1.4%</td><td>2.9%</td><td>5.7%</td><td>8.6%</td><td>11.5%</td><td>14.5%</td><td>17.4%</td><td>20.3%</td><td>23.3%</td><td>FM</td></tr><tr><td>Standard</td><td>33.56</td><td>35.00</td><td>41.02*</td><td>39.65</td><td>43.72</td><td>49.38</td><td>50.18</td><td>55.05</td><td>57.06</td><td></td></tr><tr><td>Tropical</td><td>34.06*</td><td>37.37*</td><td>36.79</td><td>49.59*</td><td>52.46*</td><td>52.28*</td><td>56.34*</td><td>58.04*</td><td>58.09*</td><td>60.71</td></tr><tr><td>Combined</td><td>33.56</td><td>37.37</td><td>41.02</td><td>49.59</td><td>52.46</td><td>52.28</td><td>56.34</td><td>58.04</td><td>58.09</td><td></td></tr><tr><td rowspan="2"></td><td colspan="10">Task-2 (SNLI)</td></tr><tr><td>1.4%</td><td>2.9%</td><td>4.3%</td><td>11.5%</td><td>13.0%</td><td>14.5%</td><td>26.0%</td><td>32.8%</td><td>39.4%</td><td>FM</td></tr><tr><td>Standard</td><td>40.31</td><td>34.41</td><td>34.74</td><td>77.91*</td><td>79.96*</td><td>82.06*</td><td>85.77</td><td>86.01</td><td>85.77</td><td></td></tr><tr><td>Tropical</td><td>41.08*</td><td>46.33*</td><td>46.94*</td><td>75.82</td><td>77.93</td><td>78.17</td><td>85.96*</td><td>86.17*</td><td>85.96*</td><td>86.45</td></tr><tr><td>Combined</td><td>41.08</td><td>46.33</td><td>46.94</td><td>77.91</td><td>79.96</td><td>82.06</td><td>85.96</td><td>86.01</td><td>85.96</td><td></td></tr><tr><td rowspan="2"></td><td colspan="10">Task-3 
(RT)</td></tr><tr><td>1.4%</td><td>2.7%</td><td>4.2%</td><td>5.8%</td><td>7.3%</td><td>8.9%</td><td>11.1%</td><td>12.1%</td><td>15.0%</td><td>FM</td></tr><tr><td>Standard</td><td>77.49</td><td>74.67</td><td>69.23</td><td>82.65</td><td>83.86</td><td>86.49</td><td>83.02</td><td>85.18</td><td>87.80*</td><td></td></tr><tr><td>Tropical</td><td>79.17*</td><td>82.55*</td><td>83.68*</td><td>84.05*</td><td>86.12*</td><td>87.05*</td><td>87.24*</td><td>88.37*</td><td>87.71</td><td>87.99</td></tr><tr><td>Combined</td><td>79.17</td><td>82.55</td><td>83.68</td><td>84.05</td><td>86.12</td><td>86.49</td><td>87.24</td><td>88.37</td><td>87.71</td><td></td></tr><tr><td rowspan="2"></td><td colspan="10">Task-4 (IMDB)</td></tr><tr><td>1.4%</td><td>2.9%</td><td>5.7%</td><td>8.5%</td><td>11.5%</td><td>14.4%</td><td>17.4%</td><td>20.3%</td><td>26.3%</td><td>FM</td></tr><tr><td>Standard</td><td>74.55</td><td>71.70</td><td>69.82</td><td>59.46</td><td>77.04*</td><td>80.85</td><td>80.32</td><td>83.75</td><td>84.22</td><td></td></tr><tr><td>Tropical</td><td>75.37*</td><td>79.14*</td><td>82.22*</td><td>75.89*</td><td>75.11</td><td>83.79*</td><td>83.78*</td><td>85.41*</td><td>85.15*</td><td>87.61</td></tr><tr><td>Combined</td><td>74.55</td><td>79.14</td><td>82.22</td><td>75.89</td><td>77.04</td><td>83.79</td><td>83.78</td><td>85.41</td><td>85.15</td><td></td></tr><tr><td></td><td colspan="10">Task-5 
(TREC)</td></tr><tr><td>Method</td><td>1.4%</td><td>3.0%</td><td>5.0%</td><td>11.5%</td><td>14.5%</td><td>16.1%</td><td>25.9%</td><td>30.4%</td><td>44.5%</td><td>FM</td></tr><tr><td>Standard</td><td>24.0</td><td>42.2</td><td>56.0</td><td>63.0</td><td>64.6</td><td>70.2</td><td>96.6*</td><td>96.8*</td><td>97.4*</td><td></td></tr><tr><td>Tropical</td><td>30.8*</td><td>45.8*</td><td>64.6*</td><td>71.8*</td><td>75.8*</td><td>73.0*</td><td>96.4</td><td>96.4</td><td>97.2</td><td>97.2</td></tr><tr><td>Combined</td><td>30.8</td><td>45.8</td><td>64.6</td><td>71.8</td><td>75.8</td><td>73.0</td><td>96.6</td><td>96.8</td><td>97.4</td><td></td></tr></table> | |
Table 1: Percentage of retained parameters $(100 - \hat{p})\%$ vs. test accuracy/F1. FM refers to the full model, i.e., the unpruned adapter states. Superscript $*$ marks the better-performing setting of Standard and Tropical.
Given an adapter module, Algorithm 1 finds the minimizers $\hat{\mathbf{A}}$ and $\hat{\mathbf{B}}$ by performing gradient-descent updates over two loss terms expressed in terms of the generators $\mathbf{G}_1$ and $\mathbf{G}_2$. $T$ and $r$ denote the maximum number of gradient steps and the number of rows in $\mathbf{A}$ (equivalently, columns in $\mathbf{B}$); $\eta \in \mathbb{R}^+$ is the step size, and $\lambda_1, \lambda_2 \in \mathbb{R}^+$ weight the importance of sparsity relative to the shift in generators. We employ layer-wise pruning of the network without any iterative fine-tuning on downstream tasks. We find the $p\%$ parameters with the smallest magnitudes in $\{\mathbf{A}, \mathbf{B}\}$ and $\{\hat{\mathbf{A}}, \hat{\mathbf{B}}\}$ separately, denoted $P_S$ and $P_T$. We denote tropical adapter pruning by Tropical: it prunes only those parameters in $P_T$ that are also present in $P_S$, so the final percentage of pruned parameters decreases to $\hat{p}\%$. We compare this approach with the baseline that prunes the $\hat{p}\%$ smallest-magnitude parameters from the layer, denoted Standard. Combined chooses whichever of Tropical or Standard gives better results on the development set. We omit the comparison with the AdapterDrop
method, as even at $50\%$ pruning it shows a significant drop in performance. Standard inherently tests the validity of magnitude-based pruning via the lottery ticket hypothesis (Wu et al., 2022), but without iterative retraining of adapter parameters. The proposed method is agnostic to the downstream task, the model, and the learning algorithm used to train it. Thus, the framework is related to, but not directly comparable with, model $L_{0}$ sparsification (Louizos et al., 2017) and low-rank compression (Idelbayev and Carreira-Perpinan, 2020).
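The selection of parameters to drop can be sketched as follows; the helper name is hypothetical, `p` is the fraction considered in each candidate set, and the returned boolean mask marks surviving entries:

```python
import numpy as np

def tropical_prune_mask(W, W_hat, p):
    """Keep-mask for one weight matrix: prune indices that appear among the
    p% smallest magnitudes in both the trained W (set P_S) and the
    tropically optimised W_hat (set P_T)."""
    k = int(p * W.size)
    P_S = set(np.argsort(np.abs(W).ravel())[:k])
    P_T = set(np.argsort(np.abs(W_hat).ravel())[:k])
    keep = np.ones(W.size, dtype=bool)
    keep[list(P_S & P_T)] = False      # Tropical: prune the intersection
    return keep.reshape(W.shape)       # pruned fraction p_hat <= p

# Toy example: if W and W_hat agree, exactly p% is pruned.
W = np.arange(1.0, 13.0).reshape(3, 4)
keep = tropical_prune_mask(W, W, 0.25)
```

The Standard baseline would instead prune the $\hat{p}\%$ smallest-magnitude entries of $W$ directly, without consulting $\hat{W}$.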
| # 4 Experiments | |
We set up a RoBERTa-base model (Liu et al., 2019) with one adapter module inserted in each layer after the add-and-layer-norm sub-layer. We follow the adapter configuration of Pfeiffer et al. (2020). For the pruning analysis, we consider three tasks—Emotion Recognition in Conversations (ERC), Natural Language Inference (NLI), and Text Classification (TC). For ERC, we use MELD, where the task is to classify the emotion of an utterance given the past utterances.
|  | |
| Figure 1: Value of pruning function (loss) with iterations. | |
Keeping the current utterance first, we append the past seven utterances in reverse order (Bhardwaj et al., 2022b). For NLI, we use the SNLI dataset (Bowman et al., 2015); we append the premise and hypothesis separated by the special token $<s>$. For the TC task, we use three datasets: IMDB (Maas et al., 2011), Rotten Tomatoes RT (Pang and Lee, 2005), and TREC (Li and Roth, 2002). Separately, we pre-train adapters on the downstream tasks with batch size 32, a learning rate of 0.001, and 1000 steps, evaluating every 100 steps on a development set. The evaluation metric is macro F1 for ERC and accuracy for all other tasks. We set the pruning percentage $p \in \{98\%, 96\%, \dots, 2\%\}$. Table 1 shows the test performance of networks against the percentage of adapter parameters retained, i.e., $(100 - \hat{p})\%$, shown in black bold font. We observe that both Standard and Tropical can prune more than $60\%$ of the adapter parameters with only a small drop in performance relative to the full-module performance (FM). Moreover, Tropical outperforms Standard in eight out of nine pruned model states on MELD, six out of nine on SNLI, eight out of nine on RT and IMDB, and six out of nine on TREC. Across the 45 combinations of tasks and pruning fractions, in all but two settings the tropical geometry-based Combined approach outperforms the other two (denoted in red font).
Next, we study tropical pruning in different scenarios—class-blind, class-uniform, and node-wise (See et al., 2016). In class-blind (CB), all adapter parameters are considered together and the $p\%$ smallest-magnitude weights and biases are pruned. In class-uniform (CU), we prune the $p\%$ smallest-magnitude parameters of each adapter layer separately; we also refer to this as layer-wise pruning. In node-wise (CN) pruning, we prune $p\%$ of the node-wise
| <table><tr><td></td><td>1.4%</td><td>2.7%</td><td>5.8%</td><td>8.9%</td><td>12.0%</td><td>15.0%</td></tr><tr><td>S-CN</td><td>71.76</td><td>68.48</td><td>65.29</td><td>84.52</td><td>85.55</td><td>86.96</td></tr><tr><td>T-CN</td><td>76.27</td><td>79.55</td><td>78.42</td><td>80.11</td><td>86.49</td><td>87.24</td></tr><tr><td>S-CU</td><td>77.49</td><td>74.67</td><td>82.65</td><td>86.49</td><td>85.18</td><td>87.80</td></tr><tr><td>T-CU</td><td>79.17</td><td>82.55</td><td>84.05</td><td>87.05</td><td>88.37</td><td>87.71</td></tr><tr><td>S-CB</td><td>69.89</td><td>74.39</td><td>58.82</td><td>76.17</td><td>85.74</td><td>87.90</td></tr><tr><td>T-CB</td><td>73.73</td><td>50.00</td><td>66.79</td><td>84.05</td><td>86.87</td><td>86.68</td></tr></table> | |
| Table 2: Accuracy scores on RT task. Comparing node-wise (CN), layer-wise (CU), and pruning all modules together (CB). S and T denote Standard and Tropical, respectively. | |
| parameters (considering both weights and biases). | |
As shown in Table 2, among the Standard settings S-CN/S-CU/S-CB, layer-wise S-CU pruning works best in four out of six fractions of retained parameters. Among the Tropical settings T-CN/T-CU/T-CB, layer-wise T-CU performs best across all considered pruning fractions. Moreover, T-CU works best within each pruning fraction category.
Figure 1 shows that the objective function in Equation (2) quickly converges to a minimum. This observation corroborates the convexity claim of Alfarra et al. (2022). The plot in Figure 2 shows the change in zonotope structure before and after optimization on SNLI. The black polytope is obtained from the generators $\mathbf{A}, \mathbf{B}$, and the red polytope is obtained after optimization, i.e., from $\hat{\mathbf{A}}, \hat{\mathbf{B}}$. We observe that the optimization preserves the geometry of the zonotopes while making the rows of the down-projection matrices as sparse as possible, i.e., many points in the zonotope move close to zero while the necessary boundary points remain to preserve the geometry. These zonotopes are dual to the adapter hypersurfaces, so preserving one structure keeps the orientation of the other preserved. Hence, one can prune adapters yet maintain their characteristic properties.
| # 5 Conclusion | |
| We proposed a novel approach for adapter pruning by studying their tropical characteristics. We formulated it as an optimization problem that aims to identify row-sparse projection matrices while minimizing the distance between tropical hypersurfaces before and after pruning. We demonstrated the advantages of tropical characterization on five NLP datasets reformulated as classification. | |
|  | |
| Figure 2: Zonotope defined by adapters before (red) and after the pruning (blue) via Algorithm 1. | |
| # 6 Limitations | |
As our focus is on adapter-based architectures, the proposed approach cannot be directly adapted to other parameter-efficient approaches such as soft prompt tuning (Lester et al., 2021; Bhardwaj et al., 2022a), which do not have explicit dense connections and activations. Another limitation comes from the ReLU activation function: since it fits into max-plus (tropical) algebra, we could reformulate the problem in terms of tropical polynomials. However, for other non-linear activation functions such as tanh, one has to reformulate, and likely resort to approximations, as there is no straightforward way to cast them as tropical algebraic expressions.
| # Acknowledgement | |
| We thank the anonymous reviewers for their constructive feedback. This project is supported by the AcRF MoE Tier-2 grant (Project no. T2MOE2008, and Grantor reference no. MOE-T2EP20220-0017) titled: "CSK-NLP: Leveraging Common-sense Knowledge for NLP", and the SRG grant id: T1SRIS19149 titled "An Affective Multimodal Dialogue System". | |
| # References | |
| Motasem Alfarra, Adel Bibi, Hasan Hammoud, Mohamed Gaafar, and Bernard Ghanem. 2022. On the decision boundaries of neural networks: A tropical geometry perspective. IEEE Transactions on Pattern Analysis and Machine Intelligence. | |
| Rishabh Bhardwaj, Amrita Saha, and Steven CH Hoi. 2022a. Vector-quantized input-contextualized soft prompts for natural language understanding. arXiv preprint arXiv:2205.11024. | |
| Rishabh Bhardwaj, Tushar Vaidya, and Soujanya Poria. 2022b. KNOT: Knowledge distillation using optimal | |
| transport for solving NLP tasks. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4801-4820, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. | |
| Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. | |
| Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. 2017. A survey of model compression and acceleration for deep neural networks. arXiv preprint arXiv:1710.09282. | |
| Narsingh Deo. 2017. Graph theory with applications to engineering and computer science. Courier Dover Publications. | |
| Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635. | |
| Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR. | |
| Yerlan Idelbayev and Miguel A Carreira-Perpinan. 2020. Low-rank compression of neural nets: Learning the rank of each layer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8049-8059. | |
| Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045-3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. | |
| Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics. | |
| Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. | |
Christos Louizos, Max Welling, and Diederik P Kingma. 2017. Learning sparse neural networks through $L_0$ regularization. arXiv preprint arXiv:1712.01312.
| Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics. | |
| Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the ACL. | |
| Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2020. Adapterfusion: Non-destructive task composition for transfer learning. arXiv preprint arXiv:2005.00247. | |
Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2020. Adapterdrop: On the efficiency of adapters in transformers. arXiv preprint arXiv:2010.11918.
| Abigail See, Minh-Thang Luong, and Christopher D. Manning. 2016. Compression of neural machine translation models via pruning. arXiv preprint arXiv:1606.09274. | |
| Jiarun Wu, Qingliang Chen, Zeguan Xiao, Yuliang Gu, and Mengsi Sun. 2022. Pruning adatperfusion with lottery ticket hypothesis. In *Findings of the Association for Computational Linguistics: NAACL* 2022, pages 1632-1646, Seattle, United States. Association for Computational Linguistics. | |
| Liwen Zhang, Gregory Naitzat, and Lek-Heng Lim. 2018. Tropical geometry of deep neural networks. In International Conference on Machine Learning, pages 5824-5832. PMLR. | |
| # A Tropical Algebra and Geometry | |
| To motivate our approach we first provide background on tropical algebra and geometry. | |
| Tropical Arithmetic. Tropical algebra is a variant of classical algebra where basic arithmetic operations are redefined. The tropical sum $\oplus$ of two numbers represents their maximum and the tropical product $\odot$ represents a classical addition<sup>4</sup>. Thus, | |
| $$ | |
| x \oplus y = \max \{x, y \} | |
| $$ | |
| $$ | |
| x \odot y = x + y | |
| $$ | |
For instance, $2 \oplus 5 = 5$ and $2 \odot 5 = 7$. Axioms and the order of arithmetic operations in tropical algebra follow their classical counterparts; thus addition is commutative and multiplication is distributive over addition:
| $$ | |
x \oplus y = y \oplus x \quad (\text{commutative})
| $$ | |
| $$ | |
x \odot (y \oplus z) = x \odot y \oplus x \odot z \quad (\text{distributive})
| $$ | |
| From these properties, it can be inferred that $-\infty$ is the additive identity as $-\infty \oplus x = x$ and $0$ is multiplicative identity $0 \odot x = x$ . Elements under the tropical arithmetic in the space of real numbers (with $-\infty$ ) are said to form a semiring $\mathbb{T}$ denoted by a triplet $(\mathbb{R} \cup \{-\infty\}, \oplus, \odot)$ . | |
Tropical Power and Monomial. For any variable $x \in \mathbb{T}$, the tropical power can be defined as $x^{\odot a} = a \cdot x$, where $a \in \mathbb{N}$ (a natural number). For simplicity of notation, we will write $x^{a}$ in place of $x^{\odot a}$. A tropical monomial is expressed in the form
| $$ | |
c \mathbf{x}^{\alpha} := c \odot x_{1}^{a_{1}} \odot x_{2}^{a_{2}} \odot \dots \odot x_{d}^{a_{d}}
| $$ | |
| where $c\in \mathbb{R}\cup \{-\infty \}$ and $a_{i}\in \mathbb{N}$ . For convenience, we will write tropical monomial by $c\mathbf{x}^{\alpha}$ where $\mathbf{x} = (x_{1},\ldots ,x_{d})\in \mathbb{T}^{d}$ and $\alpha = (a_{1},\dots,a_{d})\in \mathbb{N}^{d}$ . | |
Tropical Polynomial. A $d$-variable tropical polynomial $p(\mathbf{x})$ can be represented as a finite tropical sum of tropical monomials
$$
p(\mathbf{x}) = c_{1} \mathbf{x}^{\alpha_{1}} \oplus c_{2} \mathbf{x}^{\alpha_{2}} \oplus \dots \oplus c_{n} \mathbf{x}^{\alpha_{n}}
$$
where $\alpha_i \neq \alpha_j$ when $i \neq j$, the coefficients $c_i \in \mathbb{R} \cup \{-\infty\}$, and the exponents $\alpha_i = (a_{i1}, a_{i2}, \ldots, a_{id}) \in \mathbb{N}^d$. Ignoring $-\infty$ for ease, note that $p$ defines a mapping $\mathbb{R}^d \to \mathbb{R}$; both $\mathbf{x}$ and each $\alpha_i$ are $d$-dimensional vectors.
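Since each tropical monomial $c\,\mathbf{x}^{\alpha}$ evaluates classically to $c + \langle \alpha, \mathbf{x} \rangle$, a tropical polynomial is simply a maximum of affine functions; an illustrative sketch (the function name is ours):

```python
import numpy as np

def tropical_poly(monomials):
    """monomials: list of (c, alpha) pairs.
    Returns p with p(x) = max_i (c_i + <alpha_i, x>)."""
    def p(x):
        return max(c + float(np.dot(alpha, x)) for c, alpha in monomials)
    return p

# The polynomial of Figure 3:
# p = 1 (.) x1^2 (+) 1 (.) x2^2 (+) 2 (.) x1 x2 (+) 2 (.) x1 (+) 2 (.) x2 (+) 2
p = tropical_poly([(1, (2, 0)), (1, (0, 2)), (2, (1, 1)),
                   (2, (1, 0)), (2, (0, 1)), (2, (0, 0))])
```

At the origin the constant monomial attains the maximum, so $p(0,0) = 2$; at $(3, 0)$ the monomial $1 \odot x_1^2$ dominates, giving $p(3,0) = 7$.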
|  | |
| Figure 3: Tropical curve $\mathcal{F}(p)$ (orange) and dual subdivision of Newton polytope $\delta(p)$ (black) of $f(\mathbf{x}) = 1 \odot x_1^2 \oplus 1 \odot x_2^2 \oplus 2 \odot x_1 x_2 \oplus 2 \odot x_1 \oplus 2 \odot x_2 \oplus 2$ . | |
| Tropical powers, monomials and polynomials are basic building blocks of the algorithm we propose for adapter pruning. | |
| # A.1 Tropical Hypersurfaces. | |
Tropical hypersurfaces are analogues of classical algebraic surfaces and the key objects we study for adapter pruning. Given a tropical polynomial $p(\mathbf{x}) = c_{1}\mathbf{x}^{\alpha_{1}}\oplus \ldots \oplus c_{n}\mathbf{x}^{\alpha_{n}}$, its tropical hypersurface is the set of points where the maximum in $p$ is attained by two or more of its monomials:
$$
\mathcal{F}(p) := \left\{ \mathbf{x} \in \mathbb{R}^{d} : c_{i} \mathbf{x}^{\alpha_{i}} = c_{j} \mathbf{x}^{\alpha_{j}} = p(\mathbf{x}) \ \text{for some} \ \alpha_{i} \neq \alpha_{j} \right\}.
$$
Here we mention a few provable facts: $\mathcal{F}(p)$ divides the domain of $p$ into convex regions (or cells), and the polynomial $p$ is non-linear at $\mathbf{x}$ if and only if $\mathbf{x}$ lies on $\mathcal{F}(p)$. As with algebraic polynomials, we can associate a Newton polytope with each tropical polynomial.
Newton Polytopes. For a given polynomial $p(\mathbf{x}) = c_{1}\mathbf{x}^{\alpha_{1}}\oplus \ldots \oplus c_{n}\mathbf{x}^{\alpha_{n}}$, its Newton polytope is defined as the convex hull of the exponents $\alpha_{i}\in \mathbb{N}^{d}$. The points $\alpha_{i}$ and the polytope lie in $\mathbb{R}^d$. Thus,
| $$ | |
| \Delta (p) := \operatorname {C o n v H u l l} \left(\left\{\alpha_ {i} \in \mathbb {R} ^ {d}: c _ {i} \neq - \infty \right\} _ {i = 1} ^ {n}\right) | |
| $$ | |
The tropical polynomial $p$ determines the dual subdivision $\delta(p)$ of the Newton polytope. The tropical hypersurface $\mathcal{F}(p)$ is the dual graph of $\delta(p)$, i.e., vertices of $\mathcal{F}(p)$ correspond to regions of $\delta(p)$ and edges represent two adjacent regions in $\delta(p)$$^5$. Each vertex in $\delta(p)$ corresponds to one cell in $\mathbb{R}^d$ where $p$ is linear. Since $\delta(p)$ is in one-to-one correspondence with the tropical hypersurface, we study the adapter characteristics—the underlying hypersurfaces $\mathcal{F}(p)$—by studying the orientation of $\delta(p)$. To determine $\delta(p)$, we use the fact that when the model is bias-free, $\delta(p) = \Delta(p)$ (Zhang et al., 2018; Alfarra et al., 2022). Figure 3 provides an illustration of $\mathcal{F}(p)$ and $\delta(p)$ for a specific $p$.
| Zonotopes. The zonotope formed by $\mathbf{v}_1, \ldots, \mathbf{v}_m \in \mathbb{R}^n$ is defined as $\mathcal{Z}(\mathbf{v}_1, \ldots, \mathbf{v}_m) := \{\sum_{i=1}^{m} \lambda_i \mathbf{v}_i, 0 \leq \lambda_i \leq 1\}$ . | |
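A zonotope is equivalently the Minkowski sum of the line segments $[\mathbf{0}, \mathbf{v}_i]$, so its vertices lie among the $2^m$ corner points $\sum_i \epsilon_i \mathbf{v}_i$ with $\epsilon_i \in \{0, 1\}$. A small enumeration sketch (function name ours; practical only for small $m$):

```python
from itertools import product
import numpy as np

def zonotope_corners(generators):
    """All 2^m points sum_i eps_i v_i with eps_i in {0, 1}; the vertices of
    the zonotope Z(v_1, ..., v_m) form a subset of these points."""
    V = np.asarray(generators, dtype=float)
    return np.array([np.asarray(eps) @ V
                     for eps in product([0.0, 1.0], repeat=len(V))])

# Two orthogonal generators yield the unit square.
corners = zonotope_corners([[1.0, 0.0], [0.0, 1.0]])
```

For visualisation purposes (as in Figure 2), one would typically pass these points to a convex-hull routine to recover the polytope boundary.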
| Minkowski sum. Given two sets $P_{1}$ and $P_{2}$ in $\mathbb{R}^d$ , their Minkowski sum is defined as | |
| $$ | |
| P _ {1} \tilde {+} P _ {2} := \left\{\mathbf {v} _ {1} + \mathbf {v} _ {2}: \mathbf {v} _ {1} \in P _ {1}, \mathbf {v} _ {2} \in P _ {2} \right\} | |
| $$ | |
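A minimal pure-Python sketch of this operation on finite point sets (the helper name `minkowski_sum` is ours, not from the paper); summing two line segments already produces the vertex set of the simplest zonotope, a parallelogram:

```python
# Minkowski sum of two finite point sets in R^d:
# P1 ~+ P2 = {v1 + v2 : v1 in P1, v2 in P2}.

def minkowski_sum(P1, P2):
    return {tuple(a + b for a, b in zip(v1, v2)) for v1 in P1 for v2 in P2}

# Two unit segments (as vertex sets) in R^2; their Minkowski sum is the
# vertex set of the unit square -- the simplest zonotope.
seg1 = {(0, 0), (1, 0)}
seg2 = {(0, 0), (0, 1)}
print(sorted(minkowski_sum(seg1, seg2)))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```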
| Property-1. The Minkowski sum of two polytopes is the convex hull of the Minkowski sum of their vertex sets. Let $\mathcal{V}(P)$ denote the vertex set of a polytope $P$ ; then | |
| $$ | |
| P _ {1} \tilde {+} P _ {2} = \operatorname {C o n v H u l l} \left(\mathcal {V} \left(P _ {1}\right) \tilde {+} \mathcal {V} \left(P _ {2}\right)\right) | |
| $$ | |
| Under the bias-free assumption, the following holds. | |
| Property-2. Let $p_1$ and $p_2$ be tropical polynomials; then | |
| $$ | |
| \delta \left(p _ {1} \odot p _ {2}\right) = \delta \left(p _ {1}\right) \tilde {+} \delta \left(p _ {2}\right) | |
| $$ | |
| # B Pruning Objective | |
| # B.1 Notations Used | |
| We denote $W_{d}$ , $W_{u}$ , and $h$ by $\mathbf{A}$ , $\mathbf{B}$ , and $\mathbf{x}$ , respectively; $\mathbf{B}^{+}:=\max\{\mathbf{B},\mathbf{0}\}$ ; $\mathbf{B}^{-}:=\max\{-\mathbf{B},\mathbf{0}\}$ ; $b^{i}$ denotes the $i^{\text{th}}$ row of $\mathbf{B}$ ; $b^{i+}:=\max\{b^{i},\mathbf{0}\}$ ; $b^{i-}:=\max\{-b^{i},\mathbf{0}\}$ ; $\operatorname{Diag}[\mathbf{u}]$ arranges $\mathbf{u}$ in a diagonal matrix; $||\mathbf{G}||_{1,1}:=\sum_{i=1}^{d}||\mathbf{G}(i,:)||_{1}$ ; and $||\cdot||_{F}$ denotes the Frobenius norm. | |
| # B.2 Derivation of Pruning Objective | |
| Let $f(\mathbf{x}) = \mathbf{B}\max \{\mathbf{A}\mathbf{x},0\}$ , then | |
| $$ | |
| \begin{array}{l} f (\mathbf {x}) = \left(\mathbf {B} ^ {+} - \mathbf {B} ^ {-}\right) \left(\max \left\{\mathbf {A} ^ {+} \mathbf {x}, \mathbf {A} ^ {-} \mathbf {x} \right\} - \mathbf {A} ^ {-} \mathbf {x}\right) \\ = \left[ \mathbf {B} ^ {+} \max \left\{\mathbf {A} ^ {+} \mathbf {x}, \mathbf {A} ^ {-} \mathbf {x} \right\} + \mathbf {B} ^ {-} \mathbf {A} ^ {-} \mathbf {x} \right] \\ - \left[ \mathbf {B} ^ {-} \max \left\{\mathbf {A} ^ {+} \mathbf {x}, \mathbf {A} ^ {-} \mathbf {x} \right\} + \mathbf {B} ^ {+} \mathbf {A} ^ {-} \mathbf {x} \right] \\ \end{array} | |
| $$ | |
| Thus we define $H(\mathbf{x})$ and $Q(\mathbf{x})$ as | |
| $$ | |
| \begin{array}{l} H (\mathbf {x}) := \left[ \mathbf {B} ^ {+} \max \left\{\mathbf {A} ^ {+} \mathbf {x}, \mathbf {A} ^ {-} \mathbf {x} \right\} + \mathbf {B} ^ {-} \mathbf {A} ^ {-} \mathbf {x} \right] \\ Q (\mathbf {x}) := \left[ \mathbf {B} ^ {-} \max \left\{\mathbf {A} ^ {+} \mathbf {x}, \mathbf {A} ^ {-} \mathbf {x} \right\} + \mathbf {B} ^ {+} \mathbf {A} ^ {-} \mathbf {x} \right] \\ \end{array} | |
| $$ | |
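The split into $H$ and $Q$ can be sanity-checked numerically. The sketch below (pure Python; the matrices and input are illustrative values of our choosing) verifies that $\mathbf{B}\max\{\mathbf{A}\mathbf{x},\mathbf{0}\} = H(\mathbf{x}) - Q(\mathbf{x})$:

```python
# Verify f(x) = B max(Ax, 0) = H(x) - Q(x) on small illustrative values,
# with A+ = max(A, 0) and A- = max(-A, 0) taken element-wise (same for B).

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def split(M):
    pos = [[max(m, 0.0) for m in row] for row in M]
    neg = [[max(-m, 0.0) for m in row] for row in M]
    return pos, neg

A = [[1.0, -2.0], [-3.0, 4.0]]
B = [[0.5, -1.0]]
x = [2.0, -1.0]

Ap, An = split(A)
Bp, Bn = split(B)
m = [max(p, n) for p, n in zip(matvec(Ap, x), matvec(An, x))]  # max{A+x, A-x}

H = [u + v for u, v in zip(matvec(Bp, m), matvec(Bn, matvec(An, x)))]
Q = [u + v for u, v in zip(matvec(Bn, m), matvec(Bp, matvec(An, x)))]
f = matvec(B, [max(v, 0.0) for v in matvec(A, x)])  # B max{Ax, 0}

print(f, [h - q for h, q in zip(H, Q)])  # both equal [2.0]
```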
| Thus, $f(\mathbf{x}) = H(\mathbf{x}) - Q(\mathbf{x})$ . Let $f^i$ denote the $i^{\text{th}}$ output of the adapter block, and $b^i = \mathbf{B}[i,:]$ (i.e., the $i^{\text{th}}$ row of $\mathbf{B}$ ). We use $\mathcal{H}$ and $\mathcal{Q}$ to denote the tropical polynomials of $H$ and $Q$ at node $i$ . | |
| $$ | |
| \mathcal {H} = \left[ \bigodot_ {j = 1} ^ {p} \left(\mathbf {x} ^ {a _ {j} ^ {+}} \oplus \mathbf {x} ^ {a _ {j} ^ {-}}\right) ^ {b _ {j} ^ {i +}} \right] \odot \left[ \bigodot_ {j = 1} ^ {p} \left(\mathbf {x} ^ {a _ {j} ^ {-}}\right) ^ {b _ {j} ^ {i -}} \right] | |
| $$ | |
| Computing the dual subdivision of $\mathcal{H}$ : | |
| $$ | |
| \begin{array}{ll} \delta(\mathcal{H}) = \left[\tilde{+}_{j=1}^{p}\left(b_{j}^{i+} \operatorname{ConvHull}\left(a_{j}^{+}, a_{j}^{-}\right)\right)\right] \tilde{+} \left[\tilde{+}_{j=1}^{p}\left(b_{j}^{i-} a_{j}^{-}\right)\right] & \text{(P-1)} \\ = \left[\tilde{+}_{j=1}^{p}\left(b_{j}^{i+} \operatorname{ConvHull}\left(a_{j}^{+} - a_{j}^{-}, 0\right)\right)\right] \tilde{+} \left[\tilde{+}_{j=1}^{p}\left(b_{j}^{i-} a_{j}^{-}\right)\right] + \text{shift} & \text{(P-2)} \\ = \left[\tilde{+}_{j=1}^{p}\left(b_{j}^{i+} \operatorname{ConvHull}\left(a_{j}, 0\right)\right)\right] + \text{shift} & \end{array} | |
| $$ | |
| Similarly, we compute the dual subdivision of $\mathcal{Q}$ : | |
| $$ | |
| \delta(\mathcal{Q}) = \left[\tilde{+}_{j=1}^{p}\left(b_{j}^{i-} \operatorname{ConvHull}\left(a_{j}, 0\right)\right)\right] + \text{shift} | |
| $$ | |
| Note that the convex hull of $a_{j}$ and $\mathbf{0}$ is a line segment. Thus, $\delta(\mathcal{H})$ defines a Minkowski sum over line segments, which is a zonotope. Following Alfarra et al. (2022), and ignoring the shifts, one can directly obtain the zonotope generators $\mathbf{G}_1$ and $\mathbf{G}_2$ for $\delta(\mathcal{H})$ and $\delta(\mathcal{Q})$ , respectively. | |
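Under this reading, the generators amount to row-scaling $\mathbf{A}$ by the positive and negative parts of $b^i$, i.e. $\mathbf{G}_1 = \operatorname{Diag}[b^{i+}]\mathbf{A}$ and $\mathbf{G}_2 = \operatorname{Diag}[b^{i-}]\mathbf{A}$. A minimal sketch (pure Python; the helper name and numeric values are ours, the scaling follows Alfarra et al. (2022)):

```python
# Zonotope generators for output node i: each row a_j of A is scaled by
# the positive / negative part of the j-th entry of b^i, i.e.
# G1 = Diag[b^{i+}] A and G2 = Diag[b^{i-}] A (shifts ignored).

def zonotope_generators(A, b_i):
    G1 = [[max(b, 0.0) * a for a in row] for b, row in zip(b_i, A)]
    G2 = [[max(-b, 0.0) * a for a in row] for b, row in zip(b_i, A)]
    return G1, G2

A = [[1.0, -2.0], [-3.0, 4.0]]  # illustrative down-projection W_d
b_i = [0.5, -1.0]               # illustrative i-th row of W_u
G1, G2 = zonotope_generators(A, b_i)
print(G1)  # rows of A scaled by max(b, 0)
print(G2)  # rows of A scaled by max(-b, 0)
```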