{
"File Number": "1005",
"Title": "Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling",
"Limitation": "This work has several limitations left for future research. We did not explore head selection based on both language and data domain. We did not analyze model fairness and robustness. As a technology used for text generation, the model might have systemic bias or produce inappropriate outputs.",
"Reviewer Comment": "Reviewer_1: Pros:\nThe paper touches upon several recent research directions including routing strategies in sparsely gated networks [1, 2], parameter sharing in multilingual / multitask models [3, 4] and proposes an interesting extension of Mixture-of-Expert layers for multi-head attention. This should be of interest to practitioners working on large-scale pre-training and sparsely-gated models.\nThe set of tasks considered for evaluation is comprehensive, comprising of multilingual and multi-domain settings for speech-translation, ASR and text-translation.\nLimitations:\nA lack of comparison to fair baselines. For all evaluations the authors compare against a baseline transformer ( or speech transformer ) with\nH\nheads and the baseline model enhanced with adapter layers. On the other hand, the proposed approach trains with\nH\n′\nheads and selects\nH\nheads per task (\nH\n′\n>\nH\n). As a result the proposed approach has a larger number of heads during training which might be contributing to the improved performance. It would have been interesting to have another baseline / roofline fully-shared transformer with\nH\n′\nheads to compare against. For a fair comparison, another baseline could involve head-pruning [5] after the initial training of a transformer with\nH\n′\nheads.\nThe value of 'learned' head selection is not clear. Would the performance of the model deteriorate if the\nH\nout of the\nH\n′\nattention heads were statically allocated to each task based on heuristic strategies [1], or based on language similarity, similar to the language-group based allocation in [6].\nOther minor questions/comments:\nTable 2: The WER on arabic seems extremely high. [7] suggests that this could be because of a mismatch between arabic-styles in the transcripts and the training data. Does it make sense to exclude arabic from your analysis?\nTable 1: Would be great to include language-level performance (perhaps in the appendix) to heal understand which languages are benefitting from head-sharing.\nAppendix: Some of the analysis regarding head-sharing depending more on the high-resourcedness of languages as against linguistic similarity are very interesting. It would be great to incorporate at least the key results into the main paper.\nReferences:\n[1] Hash Layers For Large Sparse Models, Roller et al.\n[2] Exploring Routing Strategies for Multilingual Mixture-of-Experts Models, Kudugunta et al.\n[3] Latent multitask architecture learning, Ruder et al.\n[4] SHARE OR NOT? 
Learning to Schedule Language-Specific Capacity for Multilingual Translation, Zhang et al.\n[5] Are Sixteen Heads Really Better than One?, Michel et al.\n[6] Beyond English-Centric Multilingual Machine Translation, Fan et al.\n[7] The Multilingual TEDx Corpus for Speech Recognition and Translation, Salesky et al.\nEdit: Following the discussion with the authors I would like to update my score to 7.\nLimitations And Societal Impact:\nN/A\nNeeds Ethics Review: No\nTime Spent Reviewing: 2.5\n\nReviewer_2: The proposed approach is an elegant way of avoiding the negative interference which is especially common when doing full parameter sharing in NLP models.\nThe paper contains extensive experiments on the important tasks of language translation, speech translation and speech recognition, and obtains good performance gains showing the usefulness of the proposed approach.\nThe paper is quite well-written and the related work section is comprehensive.\nPresentation Improvements: The additional parameters introduced due to the incorporation of attention heads in the two proposed approaches should be included in each of the results tables.\nQuestion to the authors:\nAre the Transformer baselines in all the experiments trained with the same parameters as the ones contained in the adapter models and head selection models? If not, do you think the baselines could yield stronger results when trained with more parameters?\nLimitations And Societal Impact:\nThe limitations are described in the conclusion section, but it is quite short. Limitations and Societal Impact should be given their own separate paragraphs.\nNeeds Ethics Review: No\nTime Spent Reviewing: 5+ hours\n\nReviewer_3: The paper proposes a transformer architecture for multi-task learning, based on sharing some attention heads in multi-head attention between different tasks. The model is supplied with more heads than necessary and a neural module using gumbel-softmax is used to select a subset of these heads for each task. A variant of this is described, where heads are distributed between groups and exactly one head is selected from each group.\nThe grouped model variant is sensible and gives good performance. The paper is clear and well-described. There is a good collection of experiments on different tasks and datasets.\nThe main weakness of this paper is in the weak baselines used for comparison:\nThe baseline models are constructed using only 4 attention heads, which seems unusually low given that transformers for text normally have more heads. BERT-base has 12 heads, BART is trained with 16. Given that the proposed method introduces additional attention heads, it is not clear that the benefits are not simply due to the baseline having too few attention heads.\nAlso, only the Bapna and Firat (2019) adapter architecture is used for comparison, while there are stronger architectures that have been published since then, both for adapters and for multi-task learning in general. For example, a comparison with the AdapterFusion architecture (Pfeiffer et al., 2020) would make the argument stronger.\nFinally, reporting state-of-the-art results on these datasets for comparison would be informative, as it is currently unclear whether the reported results are in any way competitive with the general field or not.\nWhen making claims about increased computational efficiency, experiment timing information should also be reported.\nWhile two experiments report the S2T transformer baseline in two different settings (separate and joint), the other experiments do not for some reason.
It is also unclear which setting was used in the other experiments.\nIt would be interesting to add some analysis about how the selected heads differ between different tasks. In particular, demonstrating that the model is actively using the head selection mechanism would be good.\nThe proposed subset variant is not particularly well-motivated. If the position of different heads can be moved around in the final multi-head attention output, then the representation space will constantly change and not line up with the layers on top. This is addressed by the grouped variant, which indeed gives consistently better performance.\nThe WMT shared task dataset should be referenced directly, not only through Liu et al. (2020) who used it for evaluation.\nThe paper contains spelling errors that should be corrected by a spell-checker or proof-reading, for example \"hyperparamter\" and \"paramterizes\".\nLimitations And Societal Impact:\nThe work is on core model optimization, so it does not have immediate potential negative societal impact.\nNeeds Ethics Review: No\nTime Spent Reviewing: 1.5\n\nReviewer_4: Originality: the method is somewhat related to the redundancy of attention heads in transformer models, but to my knowledge it’s the first work that tries to learn different attention weights under the multilingual setting.\nClarity: the writing is generally clear, although the description of the head selection method is unclear.\nSignificance: the paper addresses an important problem with multilingual parameter separation.\nStrength:\nthe method sounds intuitive and is generally clearly written\nit seems to outperform the baselines on a variety of tasks\nthe method does not require much increase in extra parameters\nWeakness: The description of the head selection strategy and its significance is somewhat unclear. There is not much analysis of what the attention head distribution looks like - it’s not clear whether the improvement is the effect of selecting a subset of attention heads or of the actual latent variable modeling.\nQuestions:\nIt is not clear how the attention selection strategy works. My understanding is that there is already a q_\\phi defined over the attention heads, so ideally it should be able to learn which of the attention heads should have a probability of 0. Why is it so important to select a subset of heads before modeling the distribution over them?\nLimitations And Societal Impact:\nyes\nNeeds Ethics Review: No\nTime Spent Reviewing: 2 hours",
"abstractText": "Multi-head attention has each of the attention heads collect salient information from different parts of an input sequence, making it a powerful mechanism for sequence modeling. Multilingual and multi-domain learning are common scenarios for sequence modeling, where the key challenge is to maximize positive transfer and mitigate negative interference across languages and domains. In this paper, we find that non-selective attention sharing is sub-optimal for achieving good generalization across all languages and domains. We further propose attention sharing strategies to facilitate parameter sharing and specialization in multilingual and multi-domain sequence modeling. Our approach automatically learns shared and specialized attention heads for different languages and domains. Evaluated in various tasks including speech recognition, text-to-text and speech-to-text translation, the proposed attention sharing strategies consistently bring gains to sequence models built upon multi-head attention. For speech-to-text translation, our approach yields an average of +2.0 BLEU over 13 language directions in multilingual setting and +2.0 BLEU over 3 domains in multi-domain setting.",
"1 Introduction": "Recent progress on deep learning models, in particular multi-head attention, has brought significant gains to sequence modeling tasks including speech recognition (Moritz et al., 2020), text-to-text translation (Vaswani et al., 2017), and speech-to-text translation (Vila et al., 2018; Gangi et al., 2019). Attention mechanism allows a model to focus on informative parts of the inputs, and multi-head attention computes attention over inputs by multiple heads independently. With each head attending to different information, multi-head attention potentially captures more complicated data patterns and extracts sophisticated knowledge.\nSequence modeling has attracted a lot of research interest in multilingual and multi-domain settings, where a model is trained on data in multiple language directions and data from different domains respectively. Key advantages of these settings are better data efficiency and the support of knowledge transfer among languages or domains. This is critical for resource-limited scenarios. For example, multilingual translation enhances the performance of low-resource languages via knowledge transfer from high-resource languages (Gu et al., 2018; Inaguma et al., 2019b). Given the data scarcity in individual domains, a common practice is to combine the data from various domains to augment the training set (Wang et al., 2020d). Another appealing aspect of multilingual or multi-domain models is their low deployment and maintenance costs compared with numerous models trained for individual language pairs or domains.\nDespite the positive knowledge transfer, negative interference has also been observed in multilingual (or multi-domain) training especially when languages (or domains) are dissimilar. Recent studies reveal from the optimization perspective that conflicting gradients in shared parameters is one cause of interference between languages (or domains) (Yu et al., 2020). A promising direction for interference\n35th Conference on Neural Information Processing Systems (NeurIPS 2021).\nmitigation is to design better strategies of parameter sharing. In some previous works, sharing is based on the similarity between languages (or domains), which require expert knowledge or pre-computed relatedness (Wu et al., 2019). Recent studies also propose branches and components specific to languages (or domains) in addition to shared modules (Bapna and Firat, 2019; Guo et al., 2020).\nIn this work, we bring the mitigation of language and domain interference under a common umbrella, and tackle it by improving parameter sharing within multi-head attention. We propose strategies to select attention heads for different languages or domains. Instead of sharing everything across languages or domains, our model automatically learns to share heads among a subset of languages or domains. It encourages positive transfer within the subset and preserves their specificity without interference from outside the subset. The major contributions of this work are summarized below:\n1. We propose attention head selection to mitigate language or domain interference;\n2. The parameter sharing strategies are lightweight and preserve inference efficiency;\n3. We extensively evaluate attention sharing strategies on various sequence modeling tasks including speech recognition, text-to-text and speech-to-text translation. Consistent gains are achieved across multiple benchmark datasets.\nThe paper is structured as follows. 
Section 2 discusses related work on sequence modeling in multilingual and multi-domain settings. In Section 3, we introduce the proposed strategies of head selection in multi-head attention. Section 4 describes the empirical evaluation, followed by a discussion in Section 5. We conclude this paper in Section 6.",
"2 Related Work": "Multilingual learning. Multilingual modeling has the potential to improve low-resource language performance through knowledge transfer from high-resource languages, and it draws great interest from researchers in speech recognition and translation (Pratap et al., 2020; Heigold et al., 2013; Johnson et al., 2017; Dabre et al., 2020; Liu et al., 2020; Inaguma et al., 2019a; Li et al., 2020). Although impressive progress has been made for low-resource or zero-shot tasks, it is also found the multilingual model has inferior performance on high-resource tasks due to multilingual interference. In order to address this issue, some works focus on multilingual models with task-specific parameters. Different parameter sharing strategies are examined on Transformer (Sachan and Neubig, 2018). Attention dependent on target languages is proposed to enhance multilingual translation (Blackwood et al., 2018). Treating multilingual modeling as an adaptation problem, Bapna and Firat (2019) first build a universal multilingual model for all languages and then finetune newly added adapters for each language pair. Another thread of work is to increase the model capacity to compensate the performance loss in high-resource languages (Pratap et al., 2020). Shazeer et al. (2017) propose mixture-of-experts and select RNN cells based on input tokens. Lepikhin et al. (2020) integrate a mixture of FFN experts in the GShard model, and later Fedus et al. (2021) propose Switch Transformer to route tokens to different FFN sub-layers. Different from previous works, we propose strategies of attention sharing among languages in the level of attention heads for multilingual modeling.\nMulti-domain learning. Similar to multilingual learning, multi-domain learning (MDL) can effectively utilize data from different domains but also suffers from interference due to inter-domain heterogeneity (Saunders, 2021; Pham et al., 2021). Previous works address this issue from two perspectives: optimization and model architecture. In the optimization aspect, attempts have been made to synchronize the learning speed of different tasks (Chen et al., 2018), adjust the gradients of individual tasks to alleviate gradient conflicts (Yu et al., 2020) and apply regularization to achieve better generalization in different domains (Dakwale and Monz, 2017; Khayrallah et al., 2018; Thompson et al., 2019). In terms of model architecture, domain-specific labels (Kobus et al., 2017), word embedding (Zeng et al., 2018a), sub-networks (Wang et al., 2020d) are adopted to address the issue of domain divergence. The architecture can be specified during the general training with the mixed data from multiple domains (Wang et al., 2020d) or during the finetuning in individual domains (Bapna and Firat, 2019). In this work, we deal with domain interference by leveraging domain-specific attention heads in multi-head attention.\nAttention selection. Selective self-attention networks propose to apply masking to the inputs and pay more attention to content words (Geng et al., 2020). Liu et al. (2021) select text-related image regions\nwith attention in multi-modality translation. Compared to these methods, we conduct automatic attention head selection for different tasks and focus on mitigating task interference.",
"3 Model": "In this section, we start with preliminaries of multi-head attention, and introduce our approach to attention interference mitigation. We put multilingual and multi-domain sequence modeling under the same umbrella in this study. For the simplicity of the following discussions, we refer to the two settings as multi-task modeling, where a task is one language or one domain. Different from the standard multi-head attention, our model provides more attention heads than those used in computation. Different subsets of heads are assigned to each task so that partial attention sharing enables knowledge transfer and meanwhile mitigates interference. We introduce latent variables to modulate head selection, and propose strategies to learn the head assignment to different tasks.",
"3.1 Preliminary": "Multi-head attention. As a core component of Transformer, multi-head attention parameterizes each head with key, query and value transformation matrices (Vaswani et al., 2017). The token representation is transformed into key, query and value vectors via these transformations. Each head assigns the attention of this token over the input sequence based on the matching between its query vector and key vectors of other tokens. The value vectors are weighted by the attention as the contextualized token representation. It is passed through linear projection as the output of the attention head. Suppose that head h has output x(h). Multi-head attention with H heads yields an output x for the given token, which is the concatenation of all head outputs.\nx = x(1) ⊕ · · · ⊕ x(h) ⊕ · · · ⊕ x(H), (1) where ⊕ is the vector concatenation. Interference. Maximal parameter sharing aims to learn universal knowledge across languages (Wang et al., 2020e) and domains (Zeng et al., 2018b). To capture the task specificity, different languages or domains compete for model capacity, which is observed as the interference in previous studies. The interference results in degraded performance in jointly trained models. However, few works look into the improvement of parameter sharing within multi-head attention. This study explores head selection strategies to mitigate the interference in multilingual and multi-domain models.",
"3.2 Latent Variable for Head Selection": "First, we outline our approach to learn a more general-purpose multi-head attention in Transformer from the Bayesian neural network perspective. Suppose that the input sequence is x and the output sequence is y. For conditional sequence modeling tasks such as machine translation, the posterior of p(y | x) can be computed by marginalizing over the posterior of latent variable z, which modulates parameters Θ in the standard Transformer architecture:\np(y | x,Θ) = Ep(z|Θ)[p(y | x, z)] = ∫ p(y | x, z)p(z|Θ) dz (2)\nParameterization of zt. In this work, we define zt as modulating the selection of attention heads by task t. We have zt = {z(h)t }h where z (h) t is a discrete latent variable from Bernoulli distribution indicating whether task t selects attention head h. This modeling choice allows us to prune attention heads, which preserves computation efficiency as well as regularizes training.\nMarginalizing over zt is intractable given numerous heads in neural models. Therefore, we use variational inference to derive an approximate solution. Suppose that (xt, yt) is from task t. Specifically, we learn an inference network qφ(zt), which is paramterized with φ, to approximate the true distribution p(zt) and optimize the evidence lower bound (ELBO) of p(y|x):\nlog p(y | x) ≥ ∑ t ( Eqφ(zt)[log pθ(yt | xt, zt)]− KL(qφ(zt) ‖ p(zt)) ) , (3)\nwhere KL is the KL-divergence between two distributions. In our work, we assume identical probability of each head being selected. Therefore, we have p(zt = 1) = HH′ , where H and H\n′ are numbers of selected attention heads and all head candidates.\nTraining and interference. We use the Gumbel-Softmax reparameterization (Jang et al., 2017) to draw samples of z(h)t from the posterior qφ(z (h) t ). It makes the model end-to-end differentiable, while learning discrete policies of head selection without resorting to policy gradients. We adopt a lightweight estimator of qφ(z (h) t ) by directly learning the logit parameters {φ (h) t }:\nqφ(z (h) t ) = exp((φ (h) t (1) + (1))/τ)∑\nj∈{0,1} exp((φ (h) t (j) + (j))/τ)\n, ∼ G(0, 1) (4)\nwhere G(0, 1) is the Gumbel distribution, and τ is a temperature hyperparameter which increases the discreteness of samples when τ → 0. We will discuss different head selection strategies in Section 3.3, which make binary selection decisions based on real-valued posterior qφ(z (h) t ).\n3.3 Attention Selection Strategies\nSuppose that the output dimension of multi-head attention is d, and the dimension of each attention head is dH . We provide a large pool of H\n′ (H ′ > H) attention head candidates in every Transformer layer, and H ′ is a hyperparameter controlling the search space size of attention selection strategies. The model requires attention outputs to have a consistent dimension d, so each task needs to select exact H heads among H ′ candidates. We introduce two strategies for the attention head selection: subset strategy and group strategy.\nSubset strategy. The subset strategy is straightforward, and we compare the posterior {qφ(z(h)t ) : h ∈ [1, H ′]} of all H ′ heads given a task t. A subset of H heads with the highest posterior are selected by the task, and there are CH ′\nH subset choices. The subset strategy is described in Fig. 1(a). The binary mask s(h)t indicates whether an attention head h is assigned to task t.\ns (h) t = { 1, h ∈ TopH({qφ(z(h)t )}), 0, otherwise,\n(5)\nwhere TopH(·) returns the top H heads with the highest values. 
The outputs of the selected heads are concatenated as the attention output. Note that the subset strategy does not consider the order of the attention heads. For example, when heads 2 and 3 are selected, head 2 contributes to the beginning of the attention output. With heads 1 and 2 selected, the output of head 2 goes to the last part of the attention output.\nGroup strategy. We further propose the group strategy to preserve the order of attention heads during head selection. Different from the subset strategy, the group strategy first divides the H′ heads into H groups. As shown in Fig. 1(b), each group contains r = H′/H candidates. Each task chooses one attention head from each group, and thus has access to H heads per layer. There are $r^H$ possible combinations of heads. The group strategy keeps the head order in that heads from group g only contribute to g’s corresponding dimensions in the attention output. The head with the highest posterior in its group is selected by a given task t. We use the binary mask $s_t^{(h)}$ to indicate the selection of head h in group g:\n$$s_t^{(h)} = \begin{cases} 1, & h = \arg\max(\{q_\phi(z_t^{(h')}) : h' \in g\}), \\ 0, & \text{otherwise}. \end{cases} \qquad (6)$$\nThe output of group g is:\n$$x^{(g)} = \sum_{h \in g} s_t^{(h)} \cdot x^{(h)}. \qquad (7)$$\nThe outputs of the H groups are concatenated as the output of the attention module for task t.\nWith either the subset or the group strategy, the sequence model is trained to assign attention heads to different tasks so as to maximize the lower bound in inequality (3). The number of additional parameters $\{\phi_t^{(h)}\}$ introduced by our attention selection is only O(T × H′ × L), where T is the number of tasks, H′ is the number of head candidates per layer, and L is the number of layers. It is small compared with the total parameter size of the model, and head selection is thus lightweight and memory efficient. Moreover, the head selection is inherently a pruning process. Regardless of the number of head candidates, only a fixed number of attention heads are involved in computation for a given task. Hence our approach is also computationally efficient in model inference.",
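The following is a minimal sketch of the head selection mechanism described in Sections 3.2-3.3, assuming PyTorch and illustrative names and shapes; it is not the authors' FAIRSEQ implementation. It samples the Gumbel-Softmax posterior of Eq. (4) and turns it into binary masks with the subset strategy of Eq. (5) and the group strategy of Eq. (6).

```python
# Sketch of per-task head selection (assumed names/shapes, not the authors' code).
import torch
import torch.nn.functional as F

T, H_prime, H, tau = 8, 8, 4, 1.0      # tasks, head candidates, selected heads, temperature
phi = torch.nn.Parameter(torch.zeros(T, H_prime, 2))   # logits phi_t^{(h)}(j), j in {drop, keep}

def selection_posterior(task_id: int) -> torch.Tensor:
    """q_phi(z_t^{(h)} = 1) for every head candidate h of task t (Eq. 4)."""
    gumbel = -torch.log(-torch.log(torch.rand(H_prime, 2)))   # samples from G(0, 1)
    return F.softmax((phi[task_id] + gumbel) / tau, dim=-1)[:, 1]

def subset_mask(q: torch.Tensor) -> torch.Tensor:
    """Subset strategy (Eq. 5): keep the H heads with the highest posterior."""
    mask = torch.zeros_like(q)
    mask[q.topk(H).indices] = 1.0
    return mask

def group_mask(q: torch.Tensor) -> torch.Tensor:
    """Group strategy (Eq. 6): pick the argmax head in each of the H groups of r = H'/H."""
    r = H_prime // H
    mask = torch.zeros_like(q)
    for g in range(H):
        mask[g * r + q[g * r:(g + 1) * r].argmax()] = 1.0
    return mask

q = selection_posterior(task_id=0)
print(subset_mask(q), group_mask(q))   # binary masks over the H' head candidates
```

The masked head outputs would then be concatenated as in Eqs. (1) and (7).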
"4 Experiments": "We evaluate sequence models in multilingual and multi-domain settings respectively. Various applications are considered including multilingual machine translation (MT), automatic speech recognition (ASR) and speech translation (ST) in both multilingual and multi-domain settings. We integrate attention selection strategies into the self-attention1 module. Our implementation is based on the FAIRSEQ toolkit (Ott et al., 2019; Wang et al., 2020b). We include widely used sequence models built on multi-head attention as strong baselines below.\n1.Transformer (Vaswani et al., 2017). It is a state-of-the-art model in machine translation, which takes texts in source languages as inputs and generates texts in target languages.\n2. S2T Transformer (Wang et al., 2020a). As a variant of Transformer for speech processing, S2T Transformer is a stack of a convolutional subsampler and Transformer, where the subsampler processes audio log mel-filter features and sends them to Transformer for text generation.\n3. Adapter model (Bapna and Firat, 2019). Adapters have been shown as an effective approach to language and domain adaptation. Task-specific layers are added on top of each Transformer layer in a well-trained (S2T) Transformer. A typical adapter layer consists of two feed-forward sub-layers.\n4. Static strategy of head selection. A static strategy assigns each task with a fixed subset of attention heads based on the task similarity (Standley et al., 2020; Sen et al., 2019). In the multilingual setting, we group languages into linguistic families, and each family is assigned with an exclusive set of heads. As for the multi-domain setting, each domain has its own set of attention heads.\nWe report parameter size and decoding speed as memory and computation efficiency metrics respectively. Decoding speed is measured by the number of tokens decoded per second by one GPU.",
"4.1 Machine Translation": "The task of machine translation is to translate a text from one language to another. The metric BLEU measures the overlap between model translations and the ground truth (Papineni et al., 2002).\nDataset. We experiment with public multilingual machine translation datasets collected by WMT shared tasks as used by (Liu et al., 2020). The dataset consists of parallel sentences between English and other 14 languages2. Its data statistics are summarized in Appendix A.1. We evaluate models on\n1We also tried head selection in the encoder-decoder attention but did not observe big improvements when using it alone or in combination with self-attention head selection.\n2The 14 languages are: Chinese (zh), Czech (cs), Estonian (et), Finnish (fi), French (fr), German (de), Gujarati (gu), Kazakh (kk), Latvian (lv), Lithuanian (lt), Romanian (ro), Russian (ru), Spanish (es), Turkish (tr).\nboth one-to-many (O2M) and many-to-one (M2O) translations, which are translation from English to 14 languages and from 14 languages to English respectively.\nModel configurations. The attention selection is based on the source language on the encoder side for M2O translation, and is based on the target language in the decoder part for O2M translation. For both subset and group strategies, the number of attention head candidates is set as 8 in each layer (i.e., H’=8), and only 4 heads (i.e., H=4) are selected for computation. We will discuss how the hyperparameter H ′ affects model performance in Section 5. For static strategy, we group languages into 5 linguistic families3. Each family is assigned with 4 attention heads which are shared by all languages in this family. Therefore, a total of 20 attention heads are used in the static strategy.\nOther baselines have 4 attention heads in each Transformer layer. We also include a Transformer baseline with 8 attention heads, which measures the effect of increased attention heads. All models have 6 encoder layers and 6 decoder layers, the embedding dimension is 512 and the feed-forward dimension is 1024. They are trained with a batch size of 131k tokens and a learning rate of 0.0007. For O2M translation, attention selection models and Transformer are trained for 140k steps. As for M2O translation, they are trained for 100k steps. The adapter model is initialized with parameters from the trained Transformer, and tunes adapter layer parameters for 40k steps with Transformer parameters frozen. Adapter layers are added to Transformer for each language direction, and they have an intermediate dimension of 256. The dimension is selected so that the number of parameters (460M) in the adapter model is close to the parameter size (420M) in attention selection models.\nResults. We group 14 language directions based on their amount of training data. We have 6 high-resource languages with more than 10M parallel sentences, and 8 low-resource languages with fewer than 10M sentence pairs. Table 1 shows model performance on WMT datasets. More attention heads improve Transformer performance while hurting the decoding speed. In comparison with Transformer with 4 heads, group strategy achieves +0.9 and +0.7 BLEU on average of 14 language directions in O2M and M2O translations respectively at a comparable decoding speed. Transformer with 8 heads and adapter achieve BLEU scores comparable to both group and subset strategies but fall behind in inference efficiency. 
Static strategy demonstrates comparable performance to group and subset strategies in all metrics except the parameter size.",
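For reference, the multilingual MT setup described above can be collected into a plain dictionary; the key names below are our own shorthand, not fairseq arguments.

```python
# Summary of the reported multilingual MT configuration (assumed key names).
mt_config = {
    "encoder_layers": 6,
    "decoder_layers": 6,
    "embed_dim": 512,
    "ffn_dim": 1024,
    "head_candidates_H_prime": 8,     # heads provided per layer
    "selected_heads_H": 4,            # heads used per task
    "batch_size_tokens": 131_000,
    "learning_rate": 7e-4,
    "train_steps": {"O2M": 140_000, "M2O": 100_000},
    "adapter_dim": 256,               # intermediate dimension of the adapter baseline
    "adapter_finetune_steps": 40_000,
}
```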
"4.2 Speech Recognition": "The task of Automatic Speech Recognition (ASR) is to transcribe source audios in the same language. Word error rate (WER) is ASR evaluation metric, which measures the difference of model outputs from the ground truth (Klakow and Peters, 2002). Lower WER indicates better recognition.\nModel configuration. Models included in the experiments of speech recognition are S2T Transformer, S2T Transformer with adapter layers, S2T Transformer with static, group and subset strategies. With static strategy, we group 8 languages into 2 families4, and each family has an exclusive set of 4 attention heads. Following the setup of (Salesky et al., 2021), all models have 1024 channels in the input convolutional subsampler, 12 encoder layers and 6 decoder layers with 4 attention heads per layer. Again we include the S2T Transformer baseline with 8 heads. The embedding dimension is 256 and the feed-forward dimension is 2048. We set a batch size of 320k tokens and a learning\n3(1) Indo-European family: cs, de, es, fr, gu, lt, lv, ro and ru; (2) Estonian family: et; (3) Uralic family: fi; (4) Turkic family: kk and tr; (5) Sino-Tibetan family: zh.\n4(1) Afro-Asiatic family: ar; (2) Indo-European family: de, el, es, fr, it, pt and ru.\nrate of 0.0005 during training. Attention selection models and S2T Transformer are trained for 250 epochs. Adapter model is initialized with parameters of the trained S2T Transformer, and is then trained for another 200 epochs with only adapter layer parameters tuned. The intermediate dimension of adapter layers is again set as 256. To prevent over-fitting, we stop the model training when the model does not improve on the validation set for 10 epochs. To reduce the performance variance, we average checkpoints of the last 10 epochs, and use the averaged model for evaluation.",
"4.2.1 Multilingual Speech Recognition": "Dataset. We use the multilingual TEDx (mTEDx) dataset for speech recognition (Salesky et al., 2021). It collects audio recordings from TEDx talks. Eight languages are covered including Arabic (ar), German (de), Greek (el), Spanish (es), French (fr), Italian (it), Portuguese (pt) and Russian (ru).\nResults. S2T transformer share all parameters among languages. Attention selection models select attention heads based on the source and target languages. Adapter adds adapter layers based on the language directions. We report the ASR results in Table 2. It brings down 6.1% WER for S2T Transformer to increase from 4 to 8 attention heads. Compared to S2T Transformer with 4 heads, adapter model reduces the WER by 16.1% and subset strategy by 8.8%, while static strategy does not change the performance too much. Group strategy achieves the largest drop of 18.4% in WER of S2T Transformer (H=4) with comparable decoding speed. Moreover, it outperforms S2T Transformer with 8 heads in both speed and WER.",
"4.2.2 Multi-Domain Speech Recognition": "Dataset. Besides mTEDx data, we include two other public datasets, CoVoST 2 and EuroParl, which are commonly used for speech translation. Since source audios are accompanied by transcripts, we could use their source audio-text data for speech recognition tasks. We investigate multi-domain modeling with these three datasets.\n1. CoVoST 2 (Wang et al., 2020c). With Common Voice as the audio source, CoVoST 2 covers speech-to-text translations from 21 languages to English and from English to 15 languages. 2. EuroParl (Iranzo-Sánchez et al., 2020). It provides paired audio-text instances from and into 6 European languages, which are compiled from the debates in European Parliament.\nResults. In the multi-domain setting, attention selection models assign different heads to each domain. The static selection strategy provides each domain with an exclusive subset of attention heads. Adapter model adds domain-specific adapter layers to S2T Transformer. Table 3 reports WER of models trained for 400 epochs in three domains: mTEDx, CoVoST 2 and EuroParl respectively. The S2T Transformer jointly trained on multi-domain data (in the row of “Joint S2T Transformer\n(H=4)”) reduces WER by 12.9%, 8.6% and 77.7% in three domains respectively, when compared with the models separately trained in individual domains (in the row of “Separate S2T Transformer (H=4)”). This demonstrates the benefits of positive transfer between domains.\nThe performance of speech recognition could be further improved by the mitigation of the domain interference. Attention selection with group and subset strategies outperform that with the static strategy. Both attention selection and adapter model achieve lower WER than the joint S2T Transformer with both 4 and 8 attention heads. Attention selection with group strategy has the lowest WER on both mTEDx and CoVoST 2 datasets, decreasing WER by 4.0% and 5.0% respectively in comparison with joint S2T Transformer (H=4). The best system on EuroParl is adapter model, yielding a WER reduction by 6.3% than the joint S2T Transformer (H=4).",
"4.3 Speech Translation": "Now with a focus on the task of speech translation, we again design experiments in multilingual and multi-domain settings. In the multilingual setup, we train translation models with samples in multiple languages to investigate language interference. As for the multi-domain setup, the models are trained with data from multiple domains so that we could look into the domain interference. BLEU serves as the evaluation metric of speech translation systems.\nBaselines. We use the same baselines as in speech recognition. As recommended by (Salesky et al., 2021), we initialize the encoders in speech translation with the encoders trained in the task of speech recognition in Section 4.2 for the purpose of improving training efficiency and performance.\nModel configurations. All models are trained for up to 400 epochs. Other model configurations in ST are the same as those in ASR.",
"4.3.1 Multilingual Speech Translation": "To explore language interference, we perform experiments on multilingual speech translation.\nDataset. We again use mTEDx dataset for multilingual speech translation. Besides speech recognition data, mTEDx also collects speech translation data from TEDx talks. Its test set covers 13 language directions. The training data is provided in 10 of these directions, so there are 3 zero-shot directions.\nResults. Here the static strategy of head selection groups source languages into two families as in multilingual ASR task, and all target languages fall into the same Indo-European family. Table 4 summarizes the multilingual speech translation results on mTEDx. S2T Transformer has +0.4 BLEU with heads increased to 8. Since adapter model brings in language-specific layers, it cannot deal with zero-shot translations. Group strategy, subset strategy and adapter model bring improvements over S2T Transformer (H=4) which are jointly trained in 13 language directions. It suggests that multiple languages interfere within S2T Transformer whose parameters are shared by all languages. Attention selection with group strategy achieves the best translation performance. In comparison with S2T Transformer (H=4), group strategy achieves an average of +2.1 and +1.9 BLEU in training and zero-shot directions respectively. It leads to +2.0 BLEU on average of all directions.",
"4.3.2 Multi-Domain Speech Translation": "In this experiment, we investigate interference across domains in the task of speech translation, and evaluate the effectiveness of different models in multi-domain training. The attention selection now is\nbased on the data domain instead of languages, i.e., samples in different domains would choose their own attention heads. Similarly for adapter model, its adapter layers are domain-specific in this setup.\nWe again use CoVoST 2 and EuroParl as additional domains. We focus on the 13 language directions in mTEDx test set, and use the subset of CoVoST 2 and EuroParl corpora in the same directions. CoVoST 2 has 5 common directions5 and EuroParl has 11 common directions6 as mTEDx. Details about these datasets are included in Appendix A.1.\nResults. Table 5 shows the average BLEU of speech translation in mTEDx, CoVoST 2 and EuroParl. We report results in rows of “Joint S2T Transformer” when S2T Transformers are trained with the mixture of three datasets. The results are included in rows “Separate S2T Transformer” when S2T Transformers are trained on each dataset independently. Zero-shot translations in mTEDx benefit a lot from additional data of CoVoST 2 and EuroParl, as the joint S2T Transformer (H=4) shows an average of +5.4 BLEU over separate S2T Transformer (H=4). However, there is a drop of 1.0 BLEU in its training directions, brought by the interference from CoVoST 2 and EuroParl domains.\nAgain we observe that static strategy falls behind group and subset strategies. Attention selection with learned strategies and adapter model bring gains to the joint model in individual domains. Compared with the joint S2T Transformer (H=4), adapter model improves mTEDx translation by 0.3 BLEU, CoVoST 2 translation by 0.6 BLEU and EuroParl by 1.0 BLEU on average. The attention selection with group strategy outperforms all other models. Its average BLEU gain over adapter model is 1.6 BLEU in mTEDx, 1.7 BLEU in CoVoST 2 and 0.8 in EuroParl.\n5 Discussion\nHyperparameter H ′. The attention selection models set a hyperparameter H ′ as the total number of attention head candidates in multi-head attention, which controls the search space of attention sharing strategies. We now explore how the performance varies with H ′ for group and subset strategies.\nEvaluated on the task of multilingual speech recognition, models have the same hyperparameters as those in multilingual ASR experiments except for H ′. Attention selection models are configured with H ′ = 4, 8, 12, 16 respectively, and Figure 2 shows the change of WER with H ′.\nWhen H ′ = 4, there is no attention selection and all attention heads are shared by different languages. We observe a large drop of error rate asH ′ increases from 4 to 8. For the subset strategy, WER keeps decreasing when the number of head candidates grows from 4 to 16. As for group strategy, H ′ = 8 is the optimal hyperparameter on the ASR task. As we continue\n5{es, fr, it, pt, ru}-en 6es-{en, fr, it, pt}, fr-{en, es, pt}, it-{en, es}, pt-{en, es}\nincreasing H ′ to 12 and 16, the error rate increases a bit. The performances of subset and group strategies are close when H ′ = 16.\nThe search space of group strategy is a strict subset of the space of subset strategy. However, we observe that group strategy shows comparable or better performance than subset strategy across tasks, including MT, ASR and ST. One possible explanation is that group strategy keeps the head order information while subset strategy does not. 
With a larger pool of head candidates, there is less sharing among tasks. The performance of the group strategy degrades a bit due to less positive transfer from attention sharing. As for the subset strategy, better head assignments are learned in the enlarged search space.\nAttention Sharing among Languages. We now analyze the attention sharing pattern among languages. Take the multilingual model on mTEDx speech recognition as an example, whose head selection is learned with the group strategy. We count the number of heads shared by each language pair in the model, and visualize it with a heatmap in Fig. 3, where the darkness reflects the amount of sharing. The diagonal cells in the heatmap correspond to the number of attention heads used by each language, i.e., the total number of attention heads in all layers.\nFor European languages including Spanish (es), French (fr), Italian (it) and Portuguese (pt), their shared attention heads are fewer in the decoder than in the encoder. This seems to contradict previous findings that parameter sharing is beneficial for languages with high linguistic proximity. We note that they are high-resource languages in the mTEDx corpus, which is also reflected in their relatively lower WER. Their data is sufficient to learn good speech recognition, and sharing parameters with other languages hurts the preservation of language specificity. This explains why the high-resource European languages do not share many heads in the learned group strategy.\nAnother pattern we observe from Fig. 3 is that low-resource languages tend to share more attention heads with high-resource languages. For example, Arabic (ar) and Russian (ru) have relatively more sharing with Italian (it) than with other languages. Low-resource languages benefit from the knowledge transfer from high-resource ones. Due to the page limit, we include more discussion in Appendix A.3.",
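The sharing analysis above (counting, for every language pair, how many attention heads they share across all layers, as visualized in the Fig. 3 heatmap) can be sketched as follows; the shapes and the placeholder masks are assumptions, not the authors' analysis script.

```python
# Sketch: pairwise shared-head counts from per-language binary selection masks.
import numpy as np

num_langs, num_layers, H_prime = 8, 12, 8
# masks[l, layer, h] = 1 if language l selects head candidate h in that layer
masks = np.random.randint(0, 2, size=(num_langs, num_layers, H_prime))   # placeholder

flat = masks.reshape(num_langs, -1)   # one row of head selections per language
shared = flat @ flat.T                # shared[i, j] = number of heads shared by languages i and j
# The diagonal gives the total number of heads each language uses across all layers.
```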
"6 Conclusion": "Research efforts in multilingual and multi-domain modeling have been driven by the increasing need to improve data efficiency and model performance. In this work, we propose head selection strategies to allow attention heads to be shared or specialized for different languages or domains. It effectively mitigates interference within multi-head attention which is a core part of strong sequence models, and demonstrates good empirical gains in various text generation tasks.\nThis work has several limitations left for future research. We did not explore head selection based on both language and data domain. We did not analyze model fairness and robustness. As a technology used for text generation, the model might have systemic bias or produce inappropriate outputs.",
"Reviewer Summary": "Reviewer_1: This work proposes an approach to learn to share attention heads in multitask (multilingual, multi-domain) transformer models with multi-head attention layers. Given a multi-head attention layer with\nH\n′\nheads being trained on a set of tasks, each task learns to use\nH\nheads, and the choice of attention heads is dictated by learnt latent variables. 2 strategies are proposed for this attention head selection:\nSubset selection, selects top-\nH\nheads for each task.\nGroup selection, where\nH\n′\nheads are divided into groups of\nH\n′\n/\nH\nheads, and 1 head is selected per task from each group.\nThis approach is compared against baseline approaches with full parameter sharing (transformers for MT, speech-transformer for other tasks) and baselines enhanced with adapter modules [1] on multilingual Machine Translation, multilingual ASR, multi-domain ASR, multilingual speech translation and multi-domain speech translation. On all these tasks the proposed head-sharing approach shows modest improvements over the fully-shared baselines.\nReferences: [1] Simple scalable adaptation for neural machine translation, Bapna et al.\n\nReviewer_2: This paper address the problem of negative interference which happens when a single transformer model is shared among the different tasks in a multi-task learning setup specifically those of multilingual translation, speech recognition, and multi-domain speech translation. The proposed approach is to adaptively make use of specific attention heads for a particular task among a pool of attention heads. In this way, the common attention heads between tasks can reinforce the positive transfer while the attention heads unique to each task will avoid the negative transfer. Extensive experiments on multiple tasks of multilingual translation and speech translation demonstrate improvements in performance over strong baselines.\n\nReviewer_3: The paper proposes a transformer architecture for multi-task learning, based on sharing some attention heads in multi-head attention between different tasks. The model is supplied with more heads than necessary and a neural module using gumbel-softmax is used to select a subset of these heads for each task. A variant of this is described, where heads are distributed between groups and exactly one head is selected from each group.\n\nReviewer_4: This paper proposes a method that learns a different set of attention head scores for multi-task data. The method is tested on several different sequence generation tasks that rely on the transformer models."
}