Add Batch 1409d6cf-dc01-4300-be97-9e40ca851cf8
- alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/3333bd3e-943f-4661-ae40-014c8a021785_content_list.json +3 -0
- alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/3333bd3e-943f-4661-ae40-014c8a021785_model.json +3 -0
- alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/3333bd3e-943f-4661-ae40-014c8a021785_origin.pdf +3 -0
- alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/full.md +362 -0
- alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/images.zip +3 -0
- alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/layout.json +3 -0
- amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/ad9896f5-78a5-40e5-b19f-0ebb4789f513_content_list.json +3 -0
- amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/ad9896f5-78a5-40e5-b19f-0ebb4789f513_model.json +3 -0
- amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/ad9896f5-78a5-40e5-b19f-0ebb4789f513_origin.pdf +3 -0
- amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/full.md +0 -0
- amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/images.zip +3 -0
- amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/layout.json +3 -0
alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/3333bd3e-943f-4661-ae40-014c8a021785_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:37e406e5a196f2fdd1aa1b83a4f10bbc7588b5df8d986239ed1f83e7f11ed412
size 80656
alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/3333bd3e-943f-4661-ae40-014c8a021785_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9562860b82d06e6b21f31c782f678e5387739e8de5d01218e837b3676fbe789
size 96613
alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/3333bd3e-943f-4661-ae40-014c8a021785_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b3542e212a9079ded4810f83dc5707bf2a61a537cfecaea460848419faa922e
size 448463
alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/full.md
ADDED
@@ -0,0 +1,362 @@
# A Lightweight Mixture-of-Experts Neural Machine Translation Model with Stage-wise Training Strategy
Fan Zhang$^{1,2}$, Mei Tu$^{2}$, Song Liu$^{2}$ and Jinyao Yan$^{1*}$
$^{1}$ State Key Laboratory of Media Convergence and Communication, Communication University of China; $^{2}$ Samsung R&D Institute China-Beijing
{zhang.fan, mei.tu, s0101.liu}@samsung.com, jyan@cuc.edu.cn
# Abstract
Dealing with language heterogeneity has always been one of the challenges in neural machine translation (NMT). The idea of using mixture-of-experts (MoE) naturally excels in addressing this issue by employing different experts to take responsibility for different problems. However, the parameter-inefficiency problem in MoE results in less performance improvement when boosting the number of parameters. Moreover, most of the MoE models are suffering from the training instability problem. This paper proposes MoA (Mixture-of-Adapters), a lightweight MoE-based NMT model that is trained via an elaborately designed stage-wise training strategy. With the standard Transformer as the backbone model, we introduce lightweight adapters as experts for easy expansion. To improve the parameter efficiency, we explicitly model and distill the language heterogeneity into the gating network with clustering. After freezing the gating network, we adopt the Gumbel-Max sampling as the routing scheme when training experts to balance the knowledge of generalization and specialization while preventing expert over-fitting. Empirical results show that MoA achieves stable improvements in different translation tasks by introducing much fewer extra parameters compared to other MoE baselines. Additionally, the performance evaluations on a multidomain translation task illustrate the effectiveness of our training strategy.
# 1 Introduction
In recent years, neural machine translation (NMT), a key component of natural language processing (NLP), has been studied extensively with significant progress (Vaswani et al., 2017; Dabre et al., 2020). Texts from various domains often exhibit unique expression styles. Domain diversity leads to heterogeneous data distribution of a large multisource dataset. When training an NMT model with the global optimization strategy, data from diverse
domains tend to adjust the model parameters to fit their respective distributions, which harms the convergence of the model. In the literature, some works (Kobus et al., 2017; Britz et al., 2017; Zeng et al., 2018; Bapna and Firat, 2019; Pham et al., 2020) regarded this problem as domain shift and tried to address it by transfer learning. However, domain knowledge is required in these works, which introduces a new data collection problem. How to deal with the heterogeneity of language in NMT tasks remains challenging.
The core concept of MoE is using multiple experts to divide a problem space into homogeneous regions (Baldacchino et al., 2016), which has a natural advantage in solving the problem of language heterogeneity. Recently, previous works (Shazeer et al., 2017; Fedus et al., 2021; Dai et al., 2022) explored the mixture-of-experts (MoE) structure in NMT tasks. These studies demonstrate the impressive capacity of MoE in handling various data distributions. They boost the number of parameters from million to billion while maintaining low computational requirements. However, MoE is reported to be parameter-inefficient (Hoffmann et al., 2022; Jawahar et al., 2023; Xu et al., 2023a,b) i.e., a huge number of parameters only brings a small performance improvement. As an illustration, compared with a dense model, an MoE model only offers an average improvement of 0.3 BLEU with 20 times more parameters(Costa-jussà et al., 2022).
Meanwhile, training the gating network implicitly by an overall optimization makes most of the MoE models suffer from the training instability problem. It is crucial to meticulously design a training strategy to prevent instability. For instance, expert load imbalance may occur during training of an MoE model: the gating network may route most data to a small number of experts, meanwhile many other experts do not get sufficiently trained at all (Lepikhin et al., 2020). Moreover, the routing fluctuation (Dai et al., 2022) issue, i.e. the gating
network may route the same data to different experts along with training, is also one of the factors leading to training instability.

Figure 1: The stage-wise training strategy. MoA is composed of three components: an encoder-decoder based backbone model, a gating network and a set of adapters. The language heterogeneity is modeled explicitly using clustering and distilled into the gating network through a multi-classification task in stage 2 to improve parameter efficiency and ensure training stability, while the Gumbel-Max sampling routing scheme is adopted in stage 3 to balance the knowledge of generalization and specialization and avoid over-fitting. With this training strategy, MoA achieves stable improvements in different translation tasks by introducing very few extra parameters.
In this paper, we propose MoA (Mixture-of-Adapters), a lightweight MoE-based NMT model that is trained using a stage-wise training strategy. Our model is composed of three components: (i) an encoder-decoder based backbone model; (ii) a gating network responsible for routing data to suitable experts by their encoded features; (iii) a set of lightweight adapters (Bapna and Firat, 2019) as the experts transplanted in every decoder layer of the backbone model. With the stage-wise training strategy, these three components are trained sequentially. Specifically, the backbone model is trained using a standard machine translation task. Meanwhile, we pre-inject an adapter in every decoder layer in this training stage, and use these adapters for parameter initialization of the other adapters in the adapter training stage. In the training stage of the gating network, the language heterogeneity is modeled explicitly using clustering and distilled into the gating network through a multi-classification task. Such an explicit learning strategy improves the parameter efficiency and ensures the training stability of our model. Moreover, to balance the knowledge of generalization and specialization and prevent the over-fitting problem, we employ the Gumbel-Max sampling as the routing scheme when training the adapters. Empirical results show that MoA achieves stable improvement in different translation tasks by introducing much fewer extra parameters compared to the other MoE baselines. Additionally, the performance evaluations and the ablation studies on the multi-domain
translation task illustrate the effectiveness of our training strategy.
# 2 Related Works
The MoE structure (Jacobs et al., 1991) has been widely studied in the machine translation area (Shazeer et al., 2017; Lepikhin et al., 2020; Dai et al., 2022; Xu et al., 2023b). With the same core concept, different MoE models draw attention to different design strategies.
One difference lies in what to use as experts. Most of the MoE models (Shazeer et al., 2017; Lepikhin et al., 2020; Fedus et al., 2021) adopt feed-forward networks (FFN) as experts. Based on Transformer (Vaswani et al., 2017), many works (Lepikhin et al., 2020; Fedus et al., 2021; Jawahar et al., 2023) inject extra MoE layers or substitute the FFN layers with MoE layers. Instead of using FFN layers, Zhang et al. (2022) use the attention heads as experts to achieve stronger performance than the standard multi-head attention layer.
Another difference is the training strategy. Shazeer et al. (2017) activate two or more experts to obtain nonzero derivatives for the gating networks in back-propagation. Fedus et al. (2021) activate only one expert at a time and train the gating network with auxiliary losses. Dai et al. (2022) use a two-stage training strategy to address the routing fluctuation problem. Different from the above works that use a load balancing loss to prevent expert load imbalance, Lewis et al. (2021) formulate token-to-expert allocation as a linear assignment problem that requires no auxiliary load balancing loss. Liu et al. (2022) propose gating dropout to reduce cross-machine communication and speed
up the training process.
Moreover, according to the granularity of different routing schemes, MoE models can be divided into three levels: token-level, sentence-level, and task-level. Most of the above works adopt token-level schemes, where different experts will be activated for different tokens. In the sentence-level routing scheme, all tokens from a sentence share the same gating result. When selecting experts by task boundaries as opposed to making input-level decisions, e.g., for multilingual machine translation tasks, the routing scheme is regarded as task-level (Kudugunta et al., 2021).
# 3 Model Architecture
MoA consists of three components: (i) an encoder-decoder based backbone model; (ii) a gating network whose responsibility is to route data to suitable experts according to their encoded features; and (iii) a set of lightweight adapters as experts transplanted at the end of every decoder layer of the backbone model.
The backbone model is based on the encoder-decoder structure, where the encoder/decoder block is composed of a stack of several identical layers. It can theoretically be any encoder-decoder based model; we use the Transformer (Vaswani et al., 2017) in our experiments. Given a source sentence $x = (x_{1},\dots,x_{n})$, the encoder block maps it to a sequence of hidden states $h = (h_1,\dots,h_n)$. Then, $h$ is fed to the decoder block to generate an output sequence $y = (y_{1},\dots,y_{m})$ with an autoregressive process.
The gating network makes use of the hidden states $h$ to discriminate different data distributions. First, $h \in \mathbb{R}^{n \times d}$ is condensed to $\hat{h} \in \mathbb{R}^d$ by mean pooling on the sequence length dimension $n$ ,
$$
\hat{h} = \operatorname{Pooling}(h) \tag{1}
$$
Then two linear transformations are introduced with a tanh activation in between to compute adapter scores $s$ ,
$$
s = \tanh(\hat{h} W_{1} + b_{1}) W_{2} + b_{2} \tag{2}
$$
where $W_{1}\in \mathbb{R}^{d\times d}$, $W_{2}\in \mathbb{R}^{d\times K}$, $b_{1}\in \mathbb{R}^{d}$, and $b_{2}\in \mathbb{R}^{K}$ are the parameters of the two linear transformations, and $K$ is the predefined adapter number.
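As an illustration, the gate of Eqs. 1-2 can be written as a small PyTorch module. This is a sketch with hypothetical names (not from the paper); padding masks are omitted for brevity.

```python
import torch
import torch.nn as nn


class GatingNetwork(nn.Module):
    """Sentence-level gate of Eqs. 1-2: pooled encoder states -> adapter scores."""

    def __init__(self, d: int, K: int):
        super().__init__()
        self.fc1 = nn.Linear(d, d)  # W1, b1
        self.fc2 = nn.Linear(d, K)  # W2, b2

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: encoder hidden states, shape (batch, n, d)
        h_hat = h.mean(dim=1)  # Eq. 1: mean pooling over the length dimension
        return self.fc2(torch.tanh(self.fc1(h_hat)))  # Eq. 2: scores s, shape (batch, K)
```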
The adapters transfer the decoded hidden states from generic to specific. Different from previous MoE models using original feed-forward network
(FFN) (Vaswani et al., 2017) with large inner dimensions as experts, introducing extra lightweight adapters allows the size of the experts to be controlled more flexibly. In each adapter, the output $z_{i}$ of the $i$-th decoder layer is first normalized with layer normalization,
$$
\tilde{z}_{i} = \mathrm{LN}\left(z_{i}\right) \tag{3}
$$
Then $\tilde{z}_i$ is fed to an FFN with a small inner dimension, followed by a residual connection, to obtain the adapter output,
$$
o_{i} = \mathrm{FFN}\left(\tilde{z}_{i}\right) + z_{i} \tag{4}
$$
In inference, only the adapter with the highest score in each decoder layer is activated. Unlike inference, Gumbel-Max sampling is adopted as the routing scheme in the adapter training stage, which will be discussed in the next section.
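For concreteness, one adapter of Eqs. 3-4 can be sketched as follows. The class name, the inner activation and the default bottleneck size are assumptions for illustration; the paper only specifies a layer-normalized bottleneck FFN with a residual connection.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """One expert adapter: LayerNorm -> bottleneck FFN -> residual (Eqs. 3-4)."""

    def __init__(self, d: int, d_inner: int = 128):
        super().__init__()
        self.ln = nn.LayerNorm(d)
        self.ffn = nn.Sequential(
            nn.Linear(d, d_inner),
            nn.ReLU(),  # the inner activation is an assumption
            nn.Linear(d_inner, d),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: output of the i-th decoder layer, shape (batch, m, d)
        return self.ffn(self.ln(z)) + z
```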
# 4 Stage-wise Training
Most of the MoE models train their gating network along with an overall optimization of the final task. Although some auxiliary losses are introduced to avoid potential risks such as expert load imbalance, this implicit learning approach introduces another discrete latent variable learning problem and makes it harder for the gating networks to learn how to distinguish different data distributions, which leads to the parameter-inefficiency problem in MoE. In this paper, we train MoA with a stage-wise training strategy. Each training stage is elaborately designed to improve model performance with as few extra parameters as possible. Next, we will discuss our training process in detail.
# 4.1 Backbone model
The backbone model is trained through a standard machine translation task. Specifically, in this training stage, we inject an adapter in every decoder layer in advance and train them with the backbone model. These pre-injected adapters are used for parameter initialization of the other adapters in the adapter training stage.
Given a dataset of parallel text $D^{mt} = \{(x, y^*)\}_{i=1}^{N_t}$, the training objective is to vary the trainable parameters $\theta$ to minimize the cross-entropy loss:
$$
\mathcal{L}_{mt}(\theta) = - \sum_{i = 1}^{N_{t}} \sum_{t = 1}^{m} \log P\left(y_{t}^{*} \mid y_{1:t-1}^{*}, x; \theta\right) \tag{5}
$$
At this training stage, $\theta$ refers to the parameters of the backbone model and the pre-injected adapters.
Figure 2: Activation probability controlled by candidate number $k$ and temperature $\tau > 0$. $k$ controls the boundary width while $\tau$ controls the probability distribution. (Panels: $k = 4, \tau \rightarrow 0$; $k = 4, \tau \rightarrow \infty$.)
# 4.2 Gating network
To guide the gating network to explicitly learn the language heterogeneity, it is necessary to model the language heterogeneity first and distill it into the gating network in a supervised manner. Features from the same data distribution are usually closer than those from different distributions (Aharoni and Goldberg, 2020), so the data distribution differences can be modeled by unsupervised clustering. Meanwhile, the encoder in the backbone model can be adopted as the data feature extractor after the previous training stage. Following the clues above, in practice, we first sample a set of source sentences from $D^{mt}$ at random. Then we use the encoder to convert these sentences into the condensed hidden states $\hat{h}$ (Eq. 1) as the sentence features and then cluster them into $K$ groups, where $K$ is the adapter number we pre-defined according to the training data scale. In the end, we distill the clustering results into the gating network through a multi-classification task.
Let $D^{d} = \{(x,c)\}_{i = 1}^{N_{d}}$ be the training set we construct above, where $c$ is the one-hot vector of the data category label. The goal in this training stage is to minimize the multi-classification loss:
$$
\mathcal{L}_{d}(\theta) = - \sum_{i = 1}^{N_{d}} \sum_{j = 1}^{K} c_{j} \log(p_{j}) \tag{6}
$$
where
$$
p_{j} = \frac{e^{s_{j}}}{\sum_{k = 1}^{K} e^{s_{k}}} \tag{7}
$$
and $\theta$ refers to the parameters in Eq. 2.
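A minimal sketch of this distillation step is given below (hypothetical function and variable names; the cluster labels are assumed to be pre-computed as described above). The multi-classification loss of Eqs. 6-7 is simply a cross-entropy over the adapter scores.

```python
import torch.nn.functional as F


def gating_distillation_step(gate, h_hat, cluster_labels, optimizer):
    """One stage-2 update of the gating network on (pooled feature, cluster label) pairs."""
    s = gate(h_hat)  # adapter scores, shape (batch, K)
    loss = F.cross_entropy(s, cluster_labels)  # softmax + NLL, i.e. Eqs. 6-7 with one-hot c
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```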
# 4.3 Adapters
To train the adapters, a straightforward scheme is routing data to the adapters with the top-1 highest
scores. However, since there is a trade-off between the knowledge of generalization and specialization, this routing scheme is reckless. After freezing the gating network, only choosing the highest-scored adapters means they are trained on a restricted subset of the whole training data, which may result in the over-fitting problem. In the adapter training stage, we first use the pre-injected adapters to initialize all other adapters in the same layer. Then we propose routing sentences with the Gumbel-Max sampling scheme (Gumbel, 1954; Maddison et al., 2014). While ensuring the specialization of knowledge, this routing scheme further improves the knowledge generalization of the adapters.
Formally, given the adapter scores $s$ , we focus on $k$ ( $k \leq K$ ) candidates with the highest scores and compute their relative probabilities,
$$
p = \operatorname{softmax}(\operatorname{topk}(s) / \tau) \tag{8}
$$
Then the activated adapter is chosen as:
$$
e = \arg\max(G(p)) \tag{9}
$$
where
$$
G(p) = \log(p) + g \tag{10}
$$
and $g$ is a set of i.i.d. samples drawn from the Gumbel(0,1) distribution (Gumbel, 1954). In Eq. 8, the temperature $\tau > 0$ is introduced to control the probability distribution. The higher the $\tau$, the closer the probability distribution is to the discrete uniform distribution, which means candidates will be activated with more similar probabilities. Conversely, the lower the $\tau$, the closer it is to the one-hot distribution, which means the candidates with the highest scores will be activated with very high probabilities.
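For illustration (with hypothetical scores, not taken from the paper): suppose $k = 4$ and the top-$k$ scores are $s = (2.0, 1.0, 0.5, 0.1)$. With $\tau = 1$, Eq. 8 gives $p \approx (0.57, 0.21, 0.13, 0.09)$; with $\tau = 10$ the distribution flattens to roughly $(0.28, 0.25, 0.24, 0.23)$, so all four candidates are sampled with similar probabilities, while as $\tau \rightarrow 0$ it collapses towards the one-hot $(1, 0, 0, 0)$.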
The training objective in this stage is the same as the backbone model (Eq. 5), except that $\theta$ refers to the parameters of the decoder and the adapters.
# 5 Experimental Settings
To evaluate the effectiveness of our method, we conduct a set of experiments on several standard machine translation tasks as well as a multi-domain machine translation task. The translation quality is measured by the BLEU-4 (Papineni et al., 2002) score. Next, we will provide a comprehensive description of our experimental settings.
# 5.1 Datasets
For standard machine translation, we test our method on the German-to-English, the English-to-German, the Chinese-to-English, and the English-to-Thai translation tasks. We collect the sentence pairs of the full WMT-2014 German-English (about 36.0 million), the WMT-2019 Chinese-English (about 25.2 million) and the OPUS English-Thai (about 3.3 million, provided by Lowphansirikul et al. (2020)) for the corresponding translation tasks, and evaluate the German-English, the Chinese-to-English and the English-to-Thai translation tasks on the WMT-14, WMT-19 and IWSLT-14 test sets, respectively.
<table><tr><td rowspan="2"></td><td colspan="4">K = 24</td><td colspan="3">K = 12</td></tr><tr><td>Param.</td><td>en-de</td><td>de-en</td><td>zh-en</td><td>Param.</td><td>en-th</td><td>avg.S</td></tr><tr><td>Backbone</td><td>85M</td><td>28.24</td><td>34.31</td><td>26.40</td><td>86M</td><td>17.10</td><td>26.51</td></tr><tr><td>SGMoE</td><td>[+289]M</td><td>28.67</td><td>34.49</td><td>26.77</td><td>[+138]M</td><td>18.00</td><td>26.98</td></tr><tr><td>SGMoE-SL</td><td>[+289]M</td><td>29.17</td><td>34.65</td><td>27.60</td><td>[+138]M</td><td>18.00</td><td>27.36</td></tr><tr><td>Switch</td><td>[+289]M</td><td>28.66</td><td>33.92</td><td>26.76</td><td>[+138]M</td><td>17.30</td><td>26.66</td></tr><tr><td>Switch-SL</td><td>[+289]M</td><td>28.83</td><td>34.54</td><td>27.47</td><td>[+138]M</td><td>18.10</td><td>27.24</td></tr><tr><td>BASE</td><td>[+302]M</td><td>29.10</td><td>34.77</td><td>27.53</td><td>[+151]M</td><td>18.65</td><td>27.51</td></tr><tr><td>MoA (Ours)</td><td>[+19]M</td><td>29.13</td><td>34.82</td><td>27.66</td><td>[+10]M</td><td>18.50</td><td>27.53</td></tr><tr><td>Backbone-big</td><td>[+165]M</td><td>29.64</td><td>35.38</td><td>27.29</td><td>[+165]M</td><td>24.00</td><td>29.08</td></tr><tr><td>MoA-big (Ours)</td><td>[+203]M</td><td>29.85</td><td>35.41</td><td>27.75</td><td>[+184]M</td><td>24.10</td><td>29.28</td></tr></table>
Table 1: Performance evaluation over the standard machine translation tasks. The average BLEU scores of the four translation tasks are listed in the avg.S column. The best values for the same backbone model are shown in bold.
For the multi-domain machine translation, we test our method on the German-to-English multidomain translation task. We collect two datasets and mix them up as the training set. One is the standard WMT-2014 German-English sentence pairs (about 4.6 million), which can be seen as a large generic domain (WMT). Another one is the multidomain sentence pairs (about 1.5 million) from Aharoni and Goldberg (2020) which is originally provided by Koehn and Knowles (2017), including textual data in five diverse domains: IT-related text (IT, manuals and localization files of open-source software), translations of the Koran (KOR), legal text (LAW, legislative text of the European Union), medical text (MED, PDF documents from the European Medicines Agency), and subtitles (SUB).
In the data processing phase, the English and the German sentences are first tokenized by Moses tokenizer (Koehn et al., 2007) and then split into subwords by Byte-Pair Encoding (BPE) (Sennrich et al., 2016), where the BPE is learned jointly on the English and German sentences and the merge operation is set to 30,000 during learning. Meanwhile, the Chinese and the Thai sentences are split by SentencePiece (Kudo and Richardson, 2018)
with a vocabulary size of 30,000.
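As an illustrative sketch of the SentencePiece step (file names and all options other than the vocabulary size are assumptions, not the authors' scripts):

```python
import sentencepiece as spm

# Train a SentencePiece model for the Chinese or Thai side; the paper only fixes vocab_size=30000.
spm.SentencePieceTrainer.train(
    input="train.zh-en.txt",   # hypothetical path to the raw training text
    model_prefix="spm_zhen",
    vocab_size=30000,
)

sp = spm.SentencePieceProcessor(model_file="spm_zhen.model")
pieces = sp.encode("这是一个例子。", out_type=str)  # subword pieces for one sentence
```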
# 5.2 Implementations
We use the Transformer (Vaswani et al., 2017) implemented in Fairseq (Ott et al., 2019) as the backbone model structure. All baseline models are implemented with the backbone model structure, and the experts are only introduced in each decoder layer. According to the data scale of the training set, the expert number $K$ of the German-to-English, the English-to-German, and the Chinese-to-English translation task is set to 24, while that of the English-to-Thai and the multi-domain translation task is set to 12. Next, we will introduce these models briefly.
SGMoE: The Sparsely-gated mixture-of-experts (SGMoE) (Shazeer et al., 2017) is originally based on the LSTM structure (Hochreiter and Schmidhuber, 1997). It introduces sparsely gated MoE layers with the noisy top-k token-level gating scheme, which activates $k > 1$ experts at a time to obtain nonzero derivatives in back-propagation. It introduces auxiliary losses to deal with the expert load imbalance. In practice, $k$ is set to 2, and FFN layers are adopted as the experts.
SGMoE-SL: The SGMoE with Sentence-Level routing scheme. The sentence-level routing scheme means we use the condensed hidden states of the encoder (w.r.t. Eq. 1) to compute the overall gating scores and route data in all layers with these scores.
Switch: Switch Transformer (Fedus et al., 2021) is another MoE method with a token-level routing scheme that activates only one expert at a time to keep efficiency. It introduces both a capacity factor and an auxiliary load balancing loss to avoid the expert load imbalance.
<table><tr><td></td><td>Param.</td><td>WMT</td><td>KOR</td><td>IT</td><td>MED</td><td>LAW</td><td>SUB</td><td>avg.M</td></tr><tr><td>Backbone</td><td>84M</td><td>32.08</td><td>19.65</td><td>44.79</td><td>51.47</td><td>54.34</td><td>30.60</td><td>38.82</td></tr><tr><td>ADPT</td><td>[+9]M</td><td>32.22</td><td>22.48</td><td>45.88</td><td>53.58</td><td>56.36</td><td>31.97</td><td>40.42</td></tr><tr><td>SGMoE</td><td>[+138]M</td><td>32.23</td><td>21.54</td><td>46.45</td><td>53.48</td><td>56.85</td><td>31.40</td><td>40.33</td></tr><tr><td>SGMoE-SL</td><td>[+138]M</td><td>32.29</td><td>21.76</td><td>46.44</td><td>53.47</td><td>57.09</td><td>31.85</td><td>40.48</td></tr><tr><td>Switch</td><td>[+138]M</td><td>32.05</td><td>20.54</td><td>46.53</td><td>51.51</td><td>55.34</td><td>30.67</td><td>39.44</td></tr><tr><td>Switch-SL</td><td>[+138]M</td><td>32.26</td><td>20.68</td><td>46.43</td><td>52.87</td><td>56.72</td><td>30.94</td><td>39.98</td></tr><tr><td>BASE</td><td>[+151]M</td><td>32.59</td><td>22.24</td><td>46.36</td><td>53.67</td><td>57.58</td><td>31.55</td><td>40.67</td></tr><tr><td>MoA (Ours)</td><td>[+10]M</td><td>32.58</td><td>22.08</td><td>46.88</td><td>54.48</td><td>57.68</td><td>31.50</td><td>40.87</td></tr><tr><td>Backbone-big</td><td>[+165]M</td><td>32.29</td><td>22.31</td><td>48.11</td><td>56.35</td><td>59.13</td><td>32.08</td><td>41.71</td></tr><tr><td>MoA-big (Ours)</td><td>[+184]M</td><td>32.66</td><td>22.70</td><td>48.60</td><td>56.89</td><td>59.83</td><td>32.31</td><td>42.17</td></tr></table>
Table 2: Multi-domain translation performance. The average BLEU scores of the six domains are listed in the avg.M column. The best values of the same backbone model are shown in bold. The expert number $K$ is set to 6 for ADPT and 12 for the other MoE models.
Switch-SL: The Switch Transformer with the same Sentence-Level routing scheme as SGMoE-SL.
BASE: The Balanced Assignment of Sparse Experts (BASE) layer (Lewis et al., 2021) formulates token-to-expert allocation as a linear assignment problem, which requires no auxiliary load balancing loss. Instead of replacing the original FFN layers, it introduces extra FFN layers after each decoder layer as the experts.
ADPT: Since the domain labels are accessible in multi-domain machine translation tasks, we train a set of adapters (Bapna and Firat, 2019) for every domain by injecting an adapter in every encoder and decoder layer using the same backbone model. All parameters of the backbone model are frozen when training these adapters.
MoA: Our proposed method. In the training stage of the gating network, we sample 200,000 sentences from the NMT training set at random. We choose the Gaussian Mixture Model (GMM) as our clustering approach. The inner dimension of the adapters is set to 128 for both ADPT and MoA in the standard backbone settings, and that is set to 256 for MoA in the big backbone settings. In the training stage of the adapters, the adapter candidate number $k$ and the temperature $\tau$ are set to 4 and 1.0, respectively. The Gumbel-Max routing scheme is shut down in inference with $k = 1$ .
# 6 Results and Discussion
# 6.1 Standard machine translation
We evaluate the performance of the MoE models over the four standard machine translation tasks and report their BLEU scores in Table 1. For the
baseline MoE models, we use the Transformer under standard settings as the backbone model. For our method, we evaluate it under both the standard and big Transformer settings. To show the differences in model size, we present the number of parameters (Param.) in Table 1. The Param. number in the setting of $K = 24$ is the average parameter number of the three models.
As shown in Table 1, compared to other MoE models, MoA achieves the highest performance improvement while introducing much fewer parameters. When applying MoA on the big backbone model, it also achieves stable performance improvements. Although other MoE models introduce a huge amount of parameters, even much higher than the backbone model, their performance improvements are limited. Meanwhile, compared to the big backbone model, the parameter-inefficiency problem results in worse model performance for these MoE models with even more parameters. Moreover, methods based on the sentence-level routing scheme (methods with -SL flag) show better model performance than token-level in our experimental settings. It demonstrates that the more gating networks that require implicit training, the more challenging the discrete latent variable learning problem becomes. The discrimination ability of language heterogeneity of these gating networks will be discussed in the next sections.
# 6.2 Multi-domain machine translation
We further evaluate these MoE models on a multidomain machine translation task, which has domain labels so that we can analyze the ability of the gating networks to distinguish different data distributions. With the multi-domain machine translation task, we test the translation performance of these MoE models in this section and analyze the sentence-level gating networks in the next section. The BLEU scores are reported in Table 2.

Figure 3: Routing statistics of sentence-level MoEs on the test sets of the six domains. (Panels: (a) SGMoE-SL; (b) Switch-SL; (c) MoA.)
As shown in Table 2, the conclusion on average translation performance is consistent with Table 1. Different from the other MoE models, since ADPT is trained with in-domain data per domain, it also requires domain labels in inference to manually route data to the corresponding adapter. Given domain labels, ADPT can be regarded as an MoE model with a label-guided routing scheme. Although ADPT is parameter-efficient, such a routing scheme requires extra data information and introduces the data collection problem. Furthermore, the expert number of ADPT is limited by the known domain number (i.e., ADPT can only introduce $K = 6$ experts to be consistent with the domain number), and the small domains do not benefit from the big generic dataset (i.e., the WMT training set in our experiments), which makes its performance on some small domains poorer than that of MoA.
# 6.3 Routing results
To analyze the discrimination ability of language heterogeneity of these gating networks through the accessible domain labels, we count the routing results of these sentence-level MoE models on the test sets. Since SGMoE-SL uses top-2 experts for each sentence, we only count the expert with the highest gating score.
Based on the statistics, we roughly measure the discrimination ability by two metrics. One is the
category purity score $PUR$,
<table><tr><td></td><td>PUR</td><td>NMI</td></tr><tr><td>SGMoE-SL</td><td>0.2855</td><td>0.0395</td></tr><tr><td>Switch-SL</td><td>0.2706</td><td>0.0321</td></tr><tr><td>MoA</td><td>0.8498</td><td>0.6480</td></tr></table>
Table 3: Measurements of the domain discrimination ability on the test sets of the six domains.
$$
\mathrm{PUR} = \frac{1}{U} \sum_{i = 1}^{K} u_{i}^{\max} \tag{11}
$$
where $U$ is the total number of test cases, $K$ is the number of categories (NOT the number of test domains), and $u_{i}^{\max}$ is the number of test cases belonging to the most frequent domain within the $i$-th category. The other one is the normalized mutual information (NMI) score (Danon et al., 2005) between the true domain labels and the predicted category labels, as implemented in scikit-learn (Pedregosa et al., 2011). The two metrics measure the mixing degree of different domains in a category. The higher the PUR and the NMI, the better the domain discrimination ability.
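Both metrics can be computed from the routing statistics with scikit-learn. The following sketch (hypothetical variable names) computes the purity of Eq. 11 from the domain/expert contingency matrix together with the NMI score:

```python
from sklearn.metrics import normalized_mutual_info_score
from sklearn.metrics.cluster import contingency_matrix


def purity_and_nmi(domain_labels, routed_experts):
    """domain_labels: true domain per test case; routed_experts: expert chosen by the gate."""
    cm = contingency_matrix(domain_labels, routed_experts)  # shape (n_domains, n_experts)
    pur = cm.max(axis=0).sum() / cm.sum()  # Eq. 11: sum of per-expert majority counts over U
    nmi = normalized_mutual_info_score(domain_labels, routed_experts)
    return pur, nmi
```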
Results in Table 3 show that the domain discrimination ability of our gating network is significantly higher than that of the other two MoE models. In SGMoE-SL and Switch-SL, the auxiliary load balancing loss makes their routing results relatively balanced. However, the challenging discrete latent variable learning problem is not just a load balancing problem; the domain discrimination results of the two MoE models illustrate that their performance in modeling language heterogeneity is very weak. Their routing decisions result in a very high overlap of their expert knowledge, thus
leading to the parameter-inefficiency problem. In contrast, MoA models the language heterogeneity well: different experts are in charge of different domains, which allows it to achieve better translation performance with much fewer parameters. Because the expert (adapter) number is bigger than the known domain number (12 vs. 6), some experts (e.g., experts 8 and 9) are activated very few times by the test sets; they are responsible for the other data distributions beyond the six known domains.
<table><tr><td></td><td>avg.S</td><td>avg.M</td></tr><tr><td>Backbone</td><td>26.51</td><td>38.82</td></tr><tr><td>Naive MoA</td><td>26.92</td><td>40.32</td></tr><tr><td>+Pre-A</td><td>27.35</td><td>40.72</td></tr><tr><td>++Unf-D</td><td>27.53</td><td>40.87</td></tr></table>
Table 4: Ablation study on different translation tasks. avg.S and avg.M indicate the average BLEU scores of the standard machine translation tasks on different directions and the multi-domain machine translation task on different domains, respectively.
# 6.4 Ablation study
To further analyze the impact of our training strategy on the model performance, we conduct a set of ablation experiments on these machine translation tasks.
We conduct experiments of training with/without the pre-injected adapter in stage 1 and freeze/unfreeze decoder parameters in stage 3. We first train a naive MoA without the pre-injected adapter in stage 1 and freeze decoder parameters in stage 3. Then we pre-inject the adapter (+Pre-A) and unfreeze decoder parameters (++Unf-D) step by step. The experimental results are presented in Table 4. After pre-injecting an adapter in every decoder layer and using it for parameter initialization of the other adapters in the same layer, the information gap between newly injected adapters and the backbone model is eliminated. It brings significant performance improvements. After unfreezing the decoder parameters and training them with adapters, MoA achieves a higher average BLEU score. These ablation studies demonstrate the effectiveness of the two training tricks.
In the adapter training stage, we also adopt the Gumbel-Max routing scheme to balance the knowledge of generalization and specialization and avoid the over-fitting problem. The two hyperparameters, the adapter candidate number $k$ and
the temperature $\tau$, control the activation probability between different adapters (w.r.t. Eq. 8). We experiment with adjusting them in the expert training stage. Experimental results are reported in Table 5 and Table 6, respectively.
<table><tr><td></td><td>avg.M</td></tr><tr><td>Backbone</td><td>38.82</td></tr><tr><td>τ → 0.0</td><td>40.40</td></tr><tr><td>τ = 0.1</td><td>40.58</td></tr><tr><td>τ = 1.0</td><td>40.87</td></tr><tr><td>τ = 10.0</td><td>40.42</td></tr></table>
Table 5: Hyper-parameter $\tau$ analysis on the multidomain translation task. $\tau \rightarrow {0.0}$ is equivalent to shutting down the Gumbel-Max routing scheme.
<table><tr><td></td><td>avg.M</td></tr><tr><td>Backbone</td><td>38.82</td></tr><tr><td>k=1</td><td>40.40</td></tr><tr><td>k=2</td><td>40.64</td></tr><tr><td>k=4</td><td>40.87</td></tr><tr><td>k=8</td><td>40.67</td></tr><tr><td>k=12</td><td>40.46</td></tr></table>
Table 6: Hyper-parameter $k$ analysis on the multidomain translation task. $k = 1$ is equivalent to shutting down the Gumbel-Max routing scheme.
In Table 5, we fix $k$ to 4 and vary $\tau$ to analyze the difference. Meanwhile, the experimental settings in Table 6 are that $\tau$ is fixed to 1.0 and $k$ is varied. Both $\tau \rightarrow 0.0$ and $k = 1$ are equivalent to shutting down the Gumbel-Max routing scheme, i.e., the routing scheme of only choosing the top-1 highest-scored adapters. This means every adapter is trained with a restricted subset of the whole training set, leading to the over-fitting problem; the model performance is not as good as with the Gumbel-Max routing scheme. Meanwhile, the moderate values $\tau = 1.0$ and $k = 4$ perform better than the other settings. It demonstrates that there is a balance in the domain knowledge of each expert between specialization and generalization.
# 7 Conclusion
This paper proposes MoA, a lightweight MoE-based NMT model that is trained via an elaborately designed stage-wise training strategy. The lightweight adapters are introduced as experts for easy expansion. By modeling the language heterogeneity with clustering and distilling the knowledge into the gating network explicitly, MoA improves
the parameter efficiency and avoids training instability. The Gumbel-Max sampling is adopted as the routing scheme when training the adapters to balance the knowledge of generalization and specialization and avoid over-fitting. Empirical results show the effectiveness of the proposed method.
# Limitations
The proposed MoA method shows stable improvements in different translation tasks by introducing only a few parameters. However, due to the computational complexity limitation, modeling the language heterogeneity through clustering approaches limits the data scale used for training the gating network. When the data distribution of the sampling sentences deviates from that of the whole dataset, the language heterogeneity may not be modeled very well. Exploring alternative methods to clustering for modeling language heterogeneity should be an interesting direction. Additionally, the Gumbel-Max sampling scheme has been shown to enhance model performance, but its two hyper-parameters are fixed empirically in the current version. In future work, adjusting these two hyper-parameters automatically according to the number of experts and the characteristics of the training set may be better.
# References
Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747-7763.
Tara Baldacchino, Elizabeth J Cross, Keith Worden, and Jennifer Rowson. 2016. Variational bayesian mixture of experts models and sensitivity analysis for nonlinear dynamical systems. Mechanical Systems and Signal Processing, 66:178-200.
Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538-1548.
Denny Britz, Quoc Le, and Reid Pryzant. 2017. Effective domain mixing for neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 118-126.
Marta R Costa-jussà, James Cross, Onur Celebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard,
et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672.
Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. 2020. A survey of multilingual neural machine translation. ACM Computing Surveys (CSUR), 53(5):1-38.
Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Stable routing strategy for mixture of experts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7085-7095.
Leon Danon, Albert Diaz-Guilera, Jordi Duch, and Alex Arenas. 2005. Comparing community structure identification. Journal of statistical mechanics: Theory and experiment, 2005(09):P09008.
William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961.
Emil Julius Gumbel. 1954. Statistical theory of extreme values and some practical applications: a series of lectures, volume 33. US Government Printing Office.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022. An empirical analysis of compute-optimal large language model training. Advances in Neural Information Processing Systems, 35:30016-30030.
Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. 1991. Adaptive mixtures of local experts. Neural computation, 3(1):79-87.
Ganesh Jawahar, Subhabrata Mukherjee, Xiaodong Liu, Young Jin Kim, Muhammad Abdul-Mageed, VS Laks Lakshmanan, Ahmed Hassan, Sebastien Bubeck, and Jianfeng Gao. 2023. Automoe: Heterogeneous mixture-of-experts with adaptive computation for efficient neural machine translation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9116-9132.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Catherine Kobus, Josep M Crego, and Jean Senellart. 2017. Domain control for neural machine translation. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, RANLP 2017, pages 372-378.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177-180. Association for Computational Linguistics.
Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28-39.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. EMNLP 2018, page 66.
Sneha Kudugunta, Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin, Minh-Thang Luong, and Orhan Firat. 2021. Beyond distillation: Task-level mixture-of-experts for efficient inference. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3577-3599.
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668.
Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. 2021. Base layers: Simplifying training of large, sparse models. In International Conference on Machine Learning, pages 6265-6274. PMLR.
Rui Liu, Young Jin Kim, Alexandre Muzio, and Hany Hassan. 2022. Gating dropout: Communication-efficient regularization for sparsely activated transformers. In International Conference on Machine Learning, pages 13782-13792. PMLR.
Lalita Lowphansirikul, Charin Polpanumas, Attapol T Rutherford, and Sarana Nutanong. 2020. scb-mt-en-th-2020: A large English-Thai parallel corpus. arXiv preprint arXiv:2007.03541.
Chris J Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Proceedings of the 27th International Conference on Neural Information Processing Systems-Volume 2, pages 3086-3094.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of
the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825-2830.
Minh Quang Pham, Josep-Maria Crego, François Yvon, and Jean Senellart. 2020. A study of residual adapters for multi-domain neural machine translation. In Conference on Machine Translation.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
Haoran Xu, Maha Elbayad, Kenton Murray, Jean Maillard, and Vedanuj Goswami. 2023a. Towards being parameter-efficient: A stratified sparsely activated transformer with dynamic capacity. arXiv preprint arXiv:2305.02176.
Haoran Xu, Weiting Tan, Shuyue Stella Li, Yunmo Chen, Benjamin Van Durme, Philipp Koehn, and Kenton Murray. 2023b. Condensing multilingual knowledge with lightweight language-specific modules. arXiv preprint arXiv:2305.13993.
Jiali Zeng, Jinsong Su, Huating Wen, Yang Liu, Jun Xie, Yongjing Yin, and Jianqiang Zhao. 2018. Multidomain neural machine translation with word-level domain context discrimination. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 447-457.
Xiaofeng Zhang, Yikang Shen, Zeyu Huang, Jie Zhou, Wenge Rong, and Zhang Xiong. 2022. Mixture of attention heads: Selecting attention heads per token. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4150-4162.
# 8 Appendices
# 8.1 Training details
In any training phase, we use the Adam optimizer (Kingma and Ba, 2014) with $\beta_{1} = 0.9$ , $\beta_{2} = 0.98$ and $\epsilon = 10^{-9}$ . For translation optimization, we use the Noam decay as the learning rate scheduler with 4000 warmup steps and a learning rate of 0.0007. With a batch size of $32k$ in the token level and the update frequency of 5 on 2 A100 GPUs, the maximum update number of training is set to $300k$ , while that of fine-tuning is set to $30k$ with the early stopping strategy. The maximum update number of the gating network training stage is set to $10k$ with a batch size of $8k$ in the token level. In inference, the beam size is set to 5 for all models.
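For reference, the optimizer and the Noam-style learning-rate schedule can be approximated in plain PyTorch as follows. This is a sketch, not the authors' training script; `model` denotes the backbone, and the schedule mirrors the inverse-square-root behavior described above.

```python
import torch


def build_optimizer_and_scheduler(model, peak_lr: float = 7e-4, warmup: int = 4000):
    """Adam + inverse-sqrt ("Noam") schedule, roughly matching the settings above."""
    optimizer = torch.optim.Adam(
        model.parameters(), lr=peak_lr, betas=(0.9, 0.98), eps=1e-9
    )

    def noam_factor(step: int) -> float:
        step = max(step, 1)
        # linear warmup to peak_lr, then decay proportional to 1/sqrt(step)
        return min(step / warmup, (warmup / step) ** 0.5)

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam_factor)
    return optimizer, scheduler
```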
# 8.2 Clustering details
We choose the Gaussian Mixture Model (GMM) in scikit-learn (Pedregosa et al., 2011) as the clustering approach. The covariance type of GMM is set to 'full', while all other settings are set by default. Before clustering, we perform dimensionality reduction with Principal Components Analysis (PCA) to reduce the vector dimension of the sentence representations from 512 to 64.
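A minimal sketch of this step with scikit-learn (variable names are placeholders; `features` stands for the pooled sentence representations of the sampled source sentences):

```python
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture


def cluster_sentence_features(features, K: int = 12):
    """PCA to 64 dimensions, then a full-covariance GMM with K components."""
    reduced = PCA(n_components=64).fit_transform(features)
    gmm = GaussianMixture(n_components=K, covariance_type="full").fit(reduced)
    return gmm.predict(reduced)  # category labels used to train the gating network
```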
# 8.3 Gumbel-Max sampling
We implement the Gumbel-Max sampling strategy with PyTorch (Paszke et al., 2019) of version 1.10.1+cu102. Implementation details are shown in Algorithm 1. It is worth noting that the adapter scores $S$ and the activated adapter indices $E$ are at batch-level compared with that at element-level in subsection 4.3.
# 8.4 Detailed BLEU scores
We report the detailed BLEU scores of the ablation studies in Table 7, Table 8 and Table 9, respectively.
Algorithm 1: Gumbel-Max sampling, PyTorch version

params: Adapter scores $S$; Adapter candidate number $k$; Temperature $\tau$; Activated adapter indices $E$.

import torch;
import torch.nn.functional as F;
if $k \leq 1$ then $E =$ torch.argmax(S, dim=1); return $E$;
end
topk_val, topk_idx $=$ torch.topk(S, k=k, dim=1);
topk_val $/= \tau$;
log_probs $=$ torch.log(F.softmax(topk_val, dim=1));
$g =$ F.gumbel_softmax(log_probs, dim=1);
sampled $=$ torch.argmax(g, dim=1, keepdim=True);
$E =$ torch.gather(topk_idx, 1, sampled).squeeze();
return $E$;
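For convenience, the listing above can also be wrapped into a self-contained, runnable function (hypothetical function name; the logic follows Algorithm 1 on batch-level scores):

```python
import torch
import torch.nn.functional as F


def gumbel_max_sampling(S: torch.Tensor, k: int, tau: float) -> torch.Tensor:
    """Return the activated adapter index per sentence, sampled as in Algorithm 1."""
    if k <= 1:
        return torch.argmax(S, dim=1)  # routing scheme shut down: plain top-1
    topk_val, topk_idx = torch.topk(S, k=k, dim=1)
    log_probs = torch.log(F.softmax(topk_val / tau, dim=1))
    g = F.gumbel_softmax(log_probs, dim=1)  # adds Gumbel noise and renormalizes
    sampled = torch.argmax(g, dim=1, keepdim=True)
    return torch.gather(topk_idx, 1, sampled).squeeze(1)


# Example: route a batch of 8 sentences over K = 12 adapters with k = 4, tau = 1.0.
E = gumbel_max_sampling(torch.randn(8, 12), k=4, tau=1.0)
```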
<table><tr><td></td><td>en-de</td><td>de-en</td><td>zh-en</td><td>en-th</td><td>avg.S</td></tr><tr><td>Backbone</td><td>28.24</td><td>34.31</td><td>26.40</td><td>17.10</td><td>26.51</td></tr><tr><td>Naive MoA</td><td>28.78</td><td>34.45</td><td>26.95</td><td>17.50</td><td>26.92</td></tr><tr><td>+Pre-A</td><td>29.04</td><td>34.62</td><td>27.33</td><td>18.40</td><td>27.35</td></tr><tr><td>++Unf-D</td><td>29.13</td><td>34.82</td><td>27.66</td><td>18.50</td><td>27.53</td></tr></table>
Table 7: Ablation study of the two training tricks on the standard translation tasks.
<table><tr><td></td><td>WMT</td><td>KOR</td><td>IT</td><td>MED</td><td>LAW</td><td>SUB</td><td>avg.M</td></tr><tr><td>Backbone</td><td>32.08</td><td>19.65</td><td>44.79</td><td>51.47</td><td>54.34</td><td>30.60</td><td>38.82</td></tr><tr><td>Naive MoA</td><td>32.45</td><td>20.97</td><td>46.20</td><td>53.96</td><td>56.91</td><td>31.40</td><td>40.32</td></tr><tr><td>+Pre-A</td><td>32.36</td><td>21.87</td><td>46.82</td><td>54.32</td><td>57.39</td><td>31.53</td><td>40.72</td></tr><tr><td>++Unf-D</td><td>32.58</td><td>22.08</td><td>46.88</td><td>54.48</td><td>57.68</td><td>31.50</td><td>40.87</td></tr></table>
Table 8: Ablation study of the two training tricks on the multi-domain machine translation task.
<table><tr><td></td><td></td><td>WMT</td><td>KOR</td><td>IT</td><td>MED</td><td>LAW</td><td>SUB</td><td>AVG</td></tr><tr><td colspan="2">Backbone</td><td>32.08</td><td>19.65</td><td>44.79</td><td>51.47</td><td>54.34</td><td>30.60</td><td>38.82</td></tr><tr><td rowspan="4">k=4</td><td>τ→0.0</td><td>32.16</td><td>22.05</td><td>46.25</td><td>53.78</td><td>56.84</td><td>31.32</td><td>40.40</td></tr><tr><td>τ=0.1</td><td>32.43</td><td>22.21</td><td>46.30</td><td>54.01</td><td>57.11</td><td>31.43</td><td>40.58</td></tr><tr><td>τ=1.0</td><td>32.58</td><td>22.08</td><td>46.88</td><td>54.48</td><td>57.68</td><td>31.50</td><td>40.87</td></tr><tr><td>τ=10.0</td><td>32.56</td><td>21.47</td><td>46.41</td><td>53.96</td><td>56.64</td><td>31.46</td><td>40.42</td></tr><tr><td rowspan="5">τ=1.0</td><td>k=1</td><td>32.16</td><td>22.05</td><td>46.25</td><td>53.78</td><td>56.84</td><td>31.32</td><td>40.40</td></tr><tr><td>k=2</td><td>32.43</td><td>22.45</td><td>46.46</td><td>54.37</td><td>56.93</td><td>31.22</td><td>40.64</td></tr><tr><td>k=4</td><td>32.58</td><td>22.08</td><td>46.88</td><td>54.48</td><td>57.68</td><td>31.50</td><td>40.87</td></tr><tr><td>k=8</td><td>32.58</td><td>22.22</td><td>46.55</td><td>54.12</td><td>57.01</td><td>31.55</td><td>40.67</td></tr><tr><td>k=12</td><td>32.64</td><td>21.96</td><td>46.34</td><td>53.67</td><td>56.71</td><td>31.42</td><td>40.46</td></tr></table>
Table 9: Ablation study of the Gumbel-Max sampling routing scheme on the multi-domain machine translation task.
alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3f206a921e734008c6efa8df5658233bf0702971b8ac4e37089b8d1c17087a99
size 557050
alightweightmixtureofexpertsneuralmachinetranslationmodelwithstagewisetrainingstrategy/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d0e71fade2eef9c2bea541a6292cebbc4d1d4f5263a5babcd2229f492b4f6ad4
size 389638
amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/ad9896f5-78a5-40e5-b19f-0ebb4789f513_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d65e343d29e7203af6c40b1ab718eccc7a9979d41529355dc76e03e864f45e8
size 163162
amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/ad9896f5-78a5-40e5-b19f-0ebb4789f513_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0f5b45e1f1e5ae3f9c1ac0acbd43689aaa009962baf2b5c7005b56a2a77e9a83
size 193413
amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/ad9896f5-78a5-40e5-b19f-0ebb4789f513_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59c028a13b2019403c7b29745012c1bcbfab12ff5e6b1782cd7af98d119d4f14
size 839781
amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/full.md
ADDED
The diff for this file is too large to render.
amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4383cb35b3331f1a2d76b960738d7995c39ddc16260ba381906446a3a810f97a
size 2057645
amorerealisticevaluationsetupforgeneralisationofcommunitymodelsonmaliciouscontentdetection/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa7b814106b8aadd5090e98f88014e0f61797aa7beaf862efe0c14f222eacb4a
size 689915