What Do Compressed Multilingual Machine Translation Models Forget?
==================================================================

URL Source: https://arxiv.org/html/2205.10828

Alireza Mohammadshahi^{1,2,3}  Vassilina Nikoulina^{1}  Alexandre Berard^{1}

Caroline Brun^{1}  James Henderson^{2}  Laurent Besacier^{1}

^{1}NAVER LABS Europe  ^{2}Idiap Research Institute  ^{3}EPFL

{first.last}@naverlabs.com

{alireza.mohammadshahi,james.henderson}@idiap.ch

###### Abstract

Recently, very large pre-trained models have achieved state-of-the-art results in various natural language processing (NLP) tasks, but their size makes it challenging to apply them in resource-constrained environments. Compression techniques drastically reduce the size of the models, and therefore their inference time, with negligible impact on top-tier metrics. However, the general performance averaged across multiple tasks and/or languages may hide a drastic drop on under-represented features, which can amplify the biases encoded by the models. In this work, we assess the impact of compression methods on Multilingual Neural Machine Translation (MNMT) models for various language groups and for gender and semantic biases, through extensive analysis of compressed models on different machine translation benchmarks: FLORES-101, MT-Gender, and DiBiMT. We show that the performance of under-represented languages drops significantly, while the average BLEU metric decreases only slightly. Interestingly, the removal of noisy memorization by compression leads to a significant improvement for some medium-resource languages. Finally, we demonstrate that compression amplifies intrinsic gender and semantic biases, even in high-resource languages. We release our implementation at [https://github.com/alirezamshi/bias-compressedMT](https://github.com/alirezamshi/bias-compressedMT).

1 Introduction
--------------

Over recent years, pre-trained Transformer Vaswani et al. ([2017](https://arxiv.org/html/2205.10828#bib.bib47)) models have achieved substantial improvements on a variety of Natural Language Processing (NLP) tasks. This improvement mostly comes from increasing their parameter count Devlin et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib11)); Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)); Brown et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib7)); Zhang et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib59)), which escalates the cost of training Yang et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib55)); Strubell et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib44)); Patterson et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib35)) and hurts the memory footprint and latency at inference Dai et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib9)); Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)); Wang et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib50)). In Neural Machine Translation (NMT) in particular, massively multilingual NMT (MNMT) models Aharoni et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib1)); Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)); Tang et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib45)); Zhang et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib58)) have demonstrated promising results. They have proven particularly interesting for low-resource languages, which benefit greatly from knowledge transfer. On the other hand, it has also been observed that the curse of multilinguality may hurt performance on high-resource languages. The strategy employed to overcome this problem Aharoni et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib1)); Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)); Goyal et al. ([2021a](https://arxiv.org/html/2205.10828#bib.bib17)) is to scale up the number of parameters, thus attaining state-of-the-art performance on both high- and low-resource languages.

Consequently, efficient inference with these very large models has become a crucial problem. This challenge can be addressed through model compression, e.g. knowledge distillation Kim and Rush ([2016](https://arxiv.org/html/2205.10828#bib.bib24)); Sanh et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib40)); Li et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib29)); Wang et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib49)), pruning Zhu and Gupta ([2018](https://arxiv.org/html/2205.10828#bib.bib32)); Frankle and Carbin ([2019](https://arxiv.org/html/2205.10828#bib.bib15)); Behnke and Heafield ([2020](https://arxiv.org/html/2205.10828#bib.bib3)); Zhang et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib60)), and quantization Xu et al. ([2018](https://arxiv.org/html/2205.10828#bib.bib54)); Wu et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib52)); Bondarenko et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib6)); Kim et al. ([2021a](https://arxiv.org/html/2205.10828#bib.bib23)); Tao et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib46)); Yang et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib56)); Yao et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib57)). These methods can be applied with little loss in top-line metrics, while reducing the memory footprint and improving inference time. However, recent work Hooker et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib20)); Ahia et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib2)); Xu et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib53)); Du et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib12)); Renduchintala et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib39)) has demonstrated that under-represented features can suffer a drastic decrease in performance that is not necessarily reflected by global (aggregated) metrics. In multilingual NMT, the overall metrics are often reported as an average across all language pairs, even though performance can vary a lot between individual pairs. It is therefore even more critical to understand the exact impact of compression on multilingual NMT models, beyond the aggregated metrics.

In this work, we analyze the impact of applying compression methods to massively multilingual NMT models, which are pre-trained on a great number of languages across several domains. To the best of our knowledge, this is the first attempt to analyze how compression impacts massively multilingual models, and we hope it can be a starting point toward a comprehensive understanding of the interplay between fairness and compression in multilingual NMT models. In this study, we concentrate on light compression techniques, specifically post-training quantization and magnitude pruning without any further fine-tuning, as fine-tuning MNMT models is extremely computationally demanding. We exploit the most recent and largest MNMT model, M2M-100 Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)), which covers 100 languages and contains nearly 12B parameters, and analyze the impact of compression on the language pairs of the FLORES-101 benchmark Goyal et al. ([2021b](https://arxiv.org/html/2205.10828#bib.bib18)) (covering 101 languages). We also consider the MT-Gender Stanovsky et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib43)) and DiBiMT Campolungo et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib8)) benchmarks, which allow us to assess different types of biases that could be present in the data and the MNMT model. To sum up, our contributions are as follows:

* We conduct an extensive analysis of the effects of light compression methods on massively multilingual NMT models.
* On FLORES-101 Goyal et al. ([2021b](https://arxiv.org/html/2205.10828#bib.bib18)), we discover that while the overall performance is barely impacted by compression, a subset of language pairs corresponding to languages under-represented during training suffers an extreme drop in performance.
* Conversely, we observe an important improvement for some language pairs after compression, which we hypothesize is due to the removal of noisy memorization.
* By evaluating on the MT-Gender and DiBiMT benchmarks, we show that compression amplifies the gender and semantic biases hidden in MNMT models, across several high-resource languages.

Section [2](https://arxiv.org/html/2205.10828#S2) describes the light compression methods we rely on and the MNMT model. Section [3](https://arxiv.org/html/2205.10828#S3) presents our experimental setup and evaluation benchmarks. Section [4](https://arxiv.org/html/2205.10828#S4) analyzes the impact of compression on the NMT benchmarks.

2 Model and Compression Techniques
----------------------------------

### 2.1 M2M-100 Model

We assume that potential biases discovered after compression are mostly related to the training data rather than to the model architecture, as previous work Hooker et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib20)) demonstrated for the image classification task.

We therefore use M2M-100 Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)), as it is the best-performing and largest massively multilingual MT model: it covers more than 10K language directions, including a great number of low- and medium-resource language pairs. Previous models Aharoni et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib1)); Tang et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib45)) cover fewer languages, especially low- and medium-resource ones, and obtain worse results than M2M-100.

M2M-100 is trained on large-scale multilingual corpora El-Kishky et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib13)); Schwenk et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib42)) built with a novel data mining procedure that exploits language similarities. The biggest model consists of 24 encoder and 24 decoder Transformer Vaswani et al. ([2017](https://arxiv.org/html/2205.10828#bib.bib47)) layers and, using several scaling techniques, is trained with nearly 12B parameters. We refer to Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)) for more details. In all our experiments, we use this largest M2M-100 model.

### 2.2 Light Compression Techniques

We define light compression methods as compression techniques applied without any further fine-tuning. We do not fine-tune the compressed models because of the massive computation cost: a fair comparison would require fine-tuning the model on all language pairs. Additionally, the exact original training data (e.g. back-translated data) would be needed to avoid the additional bias introduced by fine-tuning, and the M2M-100 authors do not provide it. We describe our methods in the following paragraphs.

##### Magnitude Pruning

is a popular technique for both memory footprint reduction and inference speed-up. It reduces the model size by removing redundant weights that do not contribute to the resulting performance, and with further fine-tuning it usually achieves results comparable to state-of-the-art models Zhu and Gupta ([2018](https://arxiv.org/html/2205.10828#bib.bib32)); Gale et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib16)); Menghani ([2021](https://arxiv.org/html/2205.10828#bib.bib31)); Ahia et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib2)). In this work, we apply post-training magnitude pruning to each layer of the Transformer (including the embedding layers). Given the parameters $\Theta_l$ of Transformer layer $l$ and a sparsity ratio $p$, the pruning function outputs $\Theta'_l$ in which the $p$ percent of weights with the smallest magnitude are set to zero. Preliminary experiments showed that pruning per Transformer layer performs better than alternatives such as pruning self-attention and feed-forward layers separately; the comparison is provided in Appendix [A](https://arxiv.org/html/2205.10828#A1).

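The layer-wise procedure can be sketched as follows (an illustrative NumPy version, not the paper's fairseq implementation; the function name is ours):

```python
import numpy as np

def magnitude_prune(theta: np.ndarray, p: float) -> np.ndarray:
    """Set the fraction p of weights with the smallest magnitudes to zero."""
    flat = np.abs(theta).ravel()
    k = int(p * flat.size)
    if k == 0:
        return theta.copy()
    # The k-th smallest magnitude becomes the pruning threshold;
    # weights tied at the threshold are pruned as well.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(theta) <= threshold, 0.0, theta)
```

Applied independently to each Transformer layer's parameter tensor, this keeps the largest-magnitude weights and zeroes the rest.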

(a) MT-Gender example: for a correct translation, the system has to link the English pronoun "her" to "doctor".

(b) DiBiMT example: the German instance contains a wrong word sense, while the Spanish one is correct.

Figure 1: Examples from the MT-Gender Stanovsky et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib43)) and DiBiMT Campolungo et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib8)) benchmarks.

##### Post-Training Quantization

Recent work applies post-training and training-aware quantization to pre-trained machine translation and language models Wu et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib52)); Menghani ([2021](https://arxiv.org/html/2205.10828#bib.bib31)); Liang et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib30)); Bondarenko et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib6)); Wei et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib51)), achieving promising results while reducing inference latency and model size. In this work, we use the post-training quantization method proposed by Wu et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib52)), which converts all weights and activations from 32-bit floating-point values to 8-bit fixed-point integers. Specifically, it quantizes the inputs and weights of linear layers, matrix multiplications, and the residual summations of the Transformer Vaswani et al. ([2017](https://arxiv.org/html/2205.10828#bib.bib47)).

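A minimal sketch of what symmetric 8-bit quantization does to a single tensor (illustrative only; the actual method also quantizes activations and matrix products inside the Transformer):

```python
import numpy as np

def quantize_int8(x: np.ndarray, scale: float) -> np.ndarray:
    """Symmetric 8-bit quantization: round to the nearest multiple of `scale`."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map the 8-bit codes back to (approximate) float32 values."""
    return q.astype(np.float32) * scale
```

The round trip introduces at most half a quantization step of error for values within range, which is why the choice of `scale` (calibration) matters.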
3 Experimental Setup
--------------------

### 3.1 Evaluation Benchmarks

We analyze our compressed models on three different NMT benchmarks. We use FLORES-101 Goyal et al. ([2021b](https://arxiv.org/html/2205.10828#bib.bib18)) to study the model behavior as a function of the amount of available resources for each language, MT-Gender Stanovsky et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib43)) to study the impact of compression on gender bias, and DiBiMT Campolungo et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib8)) to illustrate the effect of compression on semantic biases.

##### FLORES-101

is a many-to-many NMT evaluation benchmark consisting of sentences extracted from English Wikipedia and translated into 101 languages by human translators, enabling the evaluation of 10,100 language directions. In this paper, we evaluate our models on the devtest subset of FLORES-101 Goyal et al. ([2021b](https://arxiv.org/html/2205.10828#bib.bib18)). Because the benchmark provides test sets that are comparable across all language pairs, it allows us to assess to what extent each language pair is impacted by the compression techniques.

##### MT-Gender

Stanovsky et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib43)) is an English-centric multilingual NMT benchmark for evaluating gender bias in multiple target languages: Arabic, Ukrainian, Hebrew, Russian, Italian, French, Spanish, and German. The method relies on automatic alignment and morphological analysis and needs no gold translations: in each instance, the main entity is attached to a pronoun while a side entity attempts to distort the translation, and the translated gender is extracted through alignment and morphological analysis. An example is shown in Figure [1a](https://arxiv.org/html/2205.10828#S2.F0.sf1). Kocmi et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib26)) later extended the benchmark with Czech and Polish. We choose MT-Gender as it covers more languages than other existing MT gender bias benchmarks Bentivogli et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib4)); Renduchintala et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib39)); Savoldi et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib41)).

##### DiBiMT

is the first fully manually-crafted NMT benchmark for evaluating word sense disambiguation, with English as the source language and five high-resource target languages: Chinese, German, Italian, Russian, and Spanish Campolungo et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib8)). The authors also propose several bias evaluation metrics to compare models (defined in Section [4.3](https://arxiv.org/html/2205.10828#S4.SS3)). As shown in Figure [1b](https://arxiv.org/html/2205.10828#S2.F0.sf2), given an English source sentence, a specific word $w_i$ with its associated synset $\sigma$, and a target language $L$, the sets of GOOD and BAD translation candidates contain the sentences that do and do not contain a correct translation of $\sigma$ in language $L$, respectively. More details can be found in Campolungo et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib8)).

### 3.2 Implementation Details

We use the pre-trained M2M-100 12B model (the last checkpoint from [https://github.com/pytorch/fairseq/tree/main/examples/m2m_100](https://github.com/pytorch/fairseq/tree/main/examples/m2m_100)). For quantization, we use Mean Squared Error (MSE) calibration for activations and the default per-channel calibration for weights. On FLORES-101, we evaluate with the SentencePiece BLEU (spBLEU) score, which tokenizes with a 256K-token SentencePiece model before computing BLEU ([https://github.com/facebookresearch/flores](https://github.com/facebookresearch/flores)) and has been shown to be fair for multilingual comparison Goyal et al. ([2021b](https://arxiv.org/html/2205.10828#bib.bib18)). Additionally, we use the character n-gram F-score (ChrF) Popović ([2015](https://arxiv.org/html/2205.10828#bib.bib36)) (sacrebleu 1.5.1 Post ([2018](https://arxiv.org/html/2205.10828#bib.bib37)) with ChrF3) to compare the compressed models with the M2M-100 model. We evaluate the compressed models only on language pairs for which the M2M-100 12B model Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)) has reasonable performance, namely an spBLEU score higher than 12 (more details in Appendix [B](https://arxiv.org/html/2205.10828#A2)). This leaves us with 3,763 language directions. All experiments are run on 2 NVIDIA A100-40GB GPUs.

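The MSE calibration mentioned above can be illustrated with a simple grid search over candidate quantization scales (a sketch under our own simplifications; the actual calibration is the one shipped with the quantization toolkit, and the search grid here is arbitrary):

```python
import numpy as np

def mse_calibrate_scale(x: np.ndarray, n_grid: int = 100) -> float:
    """Pick the int8 scale that minimizes the MSE between x and its
    quantize-dequantize reconstruction, over a simple grid of candidates."""
    max_scale = np.abs(x).max() / 127.0  # scale that just covers the range
    best_scale, best_err = max_scale, np.inf
    for scale in np.linspace(0.2 * max_scale, max_scale, n_grid):
        # Smaller scales clip outliers but represent the bulk more finely.
        q = np.clip(np.round(x / scale), -128, 127)
        err = np.mean((q * scale - x) ** 2)
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale
```

The trade-off this searches over: a smaller scale clips extreme values but reduces rounding error on the many small values, and MSE calibration picks the balance point.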
4 Results and Discussion
------------------------

### 4.1 Compression Impact Across Languages

| Resource Type | Criterion | No. Languages |
| --- | --- | --- |
| Very-Low | $\|L\| \leq 100k$ | 16 |
| Low | $100k < \|L\| \leq 1M$ | 40 |
| Medium | $1M < \|L\| \leq 100M$ | 38 |
| High | $100M < \|L\|$ | 7 |

Table 1: Distribution of languages in FLORES-101 based on the amount of available data to/from English ($|L|$).

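The thresholds in Table 1 amount to a simple classification rule (a sketch; the helper name is ours):

```python
def resource_type(n_bitext: int) -> str:
    """Classify a language by its amount of bitext data with English (|L|)."""
    if n_bitext <= 100_000:
        return "very-low"
    if n_bitext <= 1_000_000:
        return "low"
    if n_bitext <= 100_000_000:
        return "medium"
    return "high"
```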

Figure 2: Average spBLEU score for different sparsity ratios on 9 FLORES-101 language pairs, selected from all pairwise combinations of the "low", "medium", and "high" language resource categories.


(a) Pruned 30% Model

(b) Pruned 45% Model

(c) Quantized Model

Figure 3: Relative spBLEU difference (%) between the compressed models and the M2M-100 model, as a function of the amount of available bitext data with English ($\rho_{x,y}$). Green points ("×") are language pairs with significant improvement; red points ("+") are language pairs with a drastic performance drop.


Figure 4: Relative spBLEU difference (%) between the compressed models and the M2M-100 model, grouped by the resource type of the language pairs.

##### Language Resource Type.

The true amount of training data available for a language is difficult to estimate, as it depends on both the quality and the quantity of the data. Following Goyal et al. ([2021b](https://arxiv.org/html/2205.10828#bib.bib18)), we classify languages into four categories based on the amount of data available to/from English. The distribution of language resource types is shown in Table [1](https://arxiv.org/html/2205.10828#S4.T1).

##### Magnitude Pruning: Sparsity Ratio ($p$) Selection.

Figure [2](https://arxiv.org/html/2205.10828#S4.F2) shows the average spBLEU score for different sparsity ratios on a subset of nine language pairs covering all pairwise combinations of the "low", "medium", and "high" language categories (the list is provided in Appendix [C](https://arxiv.org/html/2205.10828#A3)). Based on this preliminary analysis, we study the model behavior at two sparsity ratios: 30%, the maximum ratio at which the compressed model mostly keeps its performance, and 45%, at which performance starts to drop drastically. We therefore evaluate the pruned models at sparsity ratios of 30% and 45% in the remaining experiments.

#### 4.1.1 Main Results

| Model | Memory size | Avg spBLEU | Drop (%) |
| --- | --- | --- | --- |
| M2M-100 | 1× | 22.44 | - |
| Pruned 30% M2M-100 | 0.7× | 20.95 | 6.6 |
| Pruned 45% M2M-100 | 0.55× | 15.12 | 32.6 |
| Quantized M2M-100 | 0.25× | 22.31 | 0.6 |

Table 2: Memory size and average spBLEU score of the M2M-100 and compressed models on FLORES-101.

Table [2](https://arxiv.org/html/2205.10828#S4.T2) reports the memory footprint and spBLEU scores on the FLORES-101 dataset, averaged over the 3.7k language pairs retained for analysis (we do not report actual inference time, as the implementation of compression techniques is highly device-dependent). The pruned 30% model suffers a slight drop in performance, while quantization mostly preserves the average spBLEU score; the quantized and pruned 30% models reduce the memory footprint by 75% and 30%, respectively. The performance of the 45% pruned model drops significantly. In what follows, we examine the behavior of each language pair after compression along different criteria.

##### Amount of Bitext Data.

Figure [3](https://arxiv.org/html/2205.10828#S4.F3) shows the relative spBLEU performance of the compressed models for each language pair $(x,y)$ compared to M2M-100. The x-axis corresponds to the amount of bitext data with English, defined as $\rho_{x,y} = \min(\rho_x, \rho_y)$, where $\rho_x$ is the amount of bitext data with English for language $x$. For the pruned 30% model, while the average spBLEU score drops by 6.63% (Table [2](https://arxiv.org/html/2205.10828#S4.T2)), a subset of language pairs (shown as "+") drops drastically. Interestingly, another subset of language pairs (shown as "×") improves significantly after compression. For the pruned 45% model, a subset of languages loses more than 50% in performance, while the average spBLEU degradation is 32.62%. Even for the quantized model, which preserves almost the same average spBLEU, some languages suffer a significant drop while others improve significantly. The behavior of the compressed models on these specific language pairs is further studied in Sections [4.1.2](https://arxiv.org/html/2205.10828#S4.SS1.SSS2) and [4.1.3](https://arxiv.org/html/2205.10828#S4.SS1.SSS3), respectively.

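The two quantities plotted in Figure 3 can be sketched as small helpers (names are ours, for illustration):

```python
def pair_resource(rho: dict, x: str, y: str) -> float:
    """rho_{x,y} = min(rho_x, rho_y): a pair is only as resourced as its weaker side."""
    return min(rho[x], rho[y])

def relative_spbleu_diff(bleu_comp: float, bleu_base: float) -> float:
    """Relative spBLEU difference (%) of a compressed model vs. the base model."""
    return 100.0 * (bleu_comp - bleu_base) / bleu_base
```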
##### Resource Type.

We study the performance of the compressed models based on the resource category of language pairs, which is defined as the category of ρ x,y subscript 𝜌 𝑥 𝑦\rho_{x,y}italic_ρ start_POSTSUBSCRIPT italic_x , italic_y end_POSTSUBSCRIPT for a pair x→y→𝑥 𝑦 x\rightarrow y italic_x → italic_y. Figure[4](https://arxiv.org/html/2205.10828#S4.F4 "Figure 4 ‣ 4.1 Compression Impact Across Languages ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?") demonstrates the relative spBLEU drop for each category of the compressed models. For pruning 30%percent\%%, the relative spBLEU drop is inversely proportional to the amount of training data for different categories, which confirms that pruning disproportionately impacts the performance of under-represented language pairs, while the average performance is near to the base M2M-100 model(as shown in Table[2](https://arxiv.org/html/2205.10828#S4.T2 "Table 2 ‣ 4.1.1 Main Results ‣ 4.1 Compression Impact Across Languages ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?")). For quantization, we see a much smaller decrease in all language categories. Furthermore, we show that the resource type of the target language is more crucial than the source language,12 12 12 Results are provided in Appendix[D](https://arxiv.org/html/2205.10828#A4 "Appendix D Relative spBLEU based on Resource Type of Target and Source ‣ What Do Compressed Multilingual Machine Translation Models Forget?"). meaning that the performance of language pairs with "low" and "very-low" target languages drops drastically after the compression.
|
| 149 |
+
|
| 150 |
+
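To make the comparison concrete, the relative spBLEU drop per resource category can be sketched as follows. This is an illustrative helper of our own; the per-category averages below are hypothetical placeholders, not the paper's numbers.

```python
# Relative spBLEU drop of a compressed model vs. the base model, per resource
# category. The category averages below are hypothetical illustration values.

def relative_spbleu_drop(base: float, comp: float) -> float:
    """Relative spBLEU change in percent; negative means the compressed model is worse."""
    return 100.0 * (comp - base) / base

# Hypothetical (base, compressed) average spBLEU per category:
scores = {
    "high":     (35.0, 34.5),
    "medium":   (28.0, 26.6),
    "low":      (18.0, 15.3),
    "very-low": (10.0, 7.0),
}

drops = {cat: relative_spbleu_drop(b, c) for cat, (b, c) in scores.items()}
# The lower the resource category, the larger the relative drop.
```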
##### ChrF Difference.
For a more fine-grained analysis, we perform sentence-level ChrF (Popović, [2015](https://arxiv.org/html/2205.10828#bib.bib36)) evaluation, since ChrF demonstrates a better correlation with human judgements at the sentence level. We define $\Delta = \mathrm{ChrF}_{\mathrm{comp}} - \mathrm{ChrF}_{\mathrm{base}}$, where $\mathrm{ChrF}_{\mathrm{comp}}$ and $\mathrm{ChrF}_{\mathrm{base}}$ correspond to the ChrF scores of the compressed and baseline models, respectively. Sentences with $\Delta$ close to zero are less impacted by compression, while those further from zero are the most impacted (either positively or negatively). We define Losing Pairs as the set of instances where $\Delta < -0.5$, and Winning Pairs as the set of instances where $\Delta > 0.5$. The identified samples can thus be seen as an adaptation of the Compression-Identified Exemplars introduced by Hooker et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib19)) to the case of translation. Figure [5](https://arxiv.org/html/2205.10828#S4.F5) plots the distribution of sentences from different language pair groups along the $\Delta$ bins for these two subsets; it corresponds to the pruned 30% model. The distribution normalized by the number of instances in each language pair category, as well as complete ChrF calculations (including $-0.5 < \Delta < 0.5$) for all compressed models, are provided in Appendix [E](https://arxiv.org/html/2205.10828#A5).
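The binning above amounts to the following minimal sketch (a helper of our own; the sentence-level ChrF scores themselves would come from a library such as sacrebleu):

```python
# Classify a sentence into the Losing / Winning / neutral subsets from its
# sentence-level ChrF under the base and compressed models.

def classify(chrf_base: float, chrf_comp: float, threshold: float = 0.5) -> str:
    """Label a sentence by Delta = ChrF_comp - ChrF_base."""
    delta = chrf_comp - chrf_base
    if delta < -threshold:
        return "losing"
    if delta > threshold:
        return "winning"
    return "neutral"

# A sentence whose ChrF falls from 52.1 to 31.4 after compression is a losing pair.
label = classify(52.1, 31.4)
```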
| Model | Off-T (%) base | Off-T (%) comp | Total No. |
| --- | --- | --- | --- |
| Pruned 30% | 5.9 | 13.7 (+7.8) | 1,521 |
| Pruned 45% | 6.4 | 30.3 (+23.9) | 10,314 |
| Quantized | 5.2 | 17.5 (+12.3) | 268 |
Table 3: Percentage of off-target translations for M2M-100 (base) and the compressed models (comp). The last column is the total number of losing sentences (both on- and off-target) for each compressed model.

Figure 5: Absolute number of sentences in each language pair category for different $\Delta$ bins.
In the following, we comprehensively analyze the behavior of the model for Losing Pairs and Winning Pairs. During preliminary analysis, we identified languages for which the M2M-100 training data contains two different scripts (e.g. Cyrillic and Latin), while the FLORES-101 dataset provides only one script for evaluation. To fairly analyze the effect of compression, we exclude sentences in these languages; a list of them is provided in Appendix [F](https://arxiv.org/html/2205.10828#A6).
#### 4.1.2 Analysis of Losing Pairs
As shown in Figure [5](https://arxiv.org/html/2205.10828#S4.F5) (left side), losing pairs belong to very-low, low, and medium-resource languages, which are mostly under-represented during training; the normalized distribution in Appendix [E](https://arxiv.org/html/2205.10828#A5) follows the same trend. We manually inspected some of the translations from the losing pairs set and identified two main reasons for the drop in performance: off-target translations (translations into the wrong target language) and hallucinations. In what follows, we attempt to quantify these two phenomena.
##### Off-Target.
We use the FastText language identifier (Joulin et al., [2016a](https://arxiv.org/html/2205.10828#bib.bib21), [b](https://arxiv.org/html/2205.10828#bib.bib22)) to predict the languages of the reference and translated sentences. Table [3](https://arxiv.org/html/2205.10828#S4.T3) shows the total number of losing sentences and the percentage of off-target translations for both the baseline and compressed models; we exclude sentences where the predicted reference language does not match the gold reference language. As sparsity increases, the compressed model produces more off-target translations (+7.8% and +23.9% over the baseline). Quantization also increases the percentage of off-target translations, by +12.3%.
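A minimal sketch of this filtering and counting step, assuming the language IDs are already available as ISO codes (in the paper they come from the FastText identifier; the language codes in the example are illustrative):

```python
# Off-target rate over a losing-pairs subset. Sentences whose *reference* is
# misidentified by the language identifier are excluded, mirroring the paper's
# filtering; a translation is off-target when its predicted language differs
# from the gold target language.

def off_target_rate(gold_langs, pred_ref_langs, pred_hyp_langs):
    kept = [
        (gold, hyp)
        for gold, ref, hyp in zip(gold_langs, pred_ref_langs, pred_hyp_langs)
        if ref == gold  # keep only sentences with a correctly identified reference
    ]
    off = sum(1 for gold, hyp in kept if hyp != gold)
    return 100.0 * off / len(kept) if kept else 0.0

rate = off_target_rate(
    gold_langs=["ast", "oc", "zu", "fr"],
    pred_ref_langs=["ast", "oc", "xh", "fr"],  # the "zu" reference is misidentified -> excluded
    pred_hyp_langs=["es", "oc", "zu", "fr"],   # the first translation is off-target
)
```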

(a) M2M-100 Model

(b) Compressed Model
Reference To better represent traffic flow, relationships have been established between the three main characteristics: (1) flow, (2) density, and (3) velocity.
M2M-100 To better represent the flow of traffic, relationships have been established between three main characteristics: (1) flow, (2) density, and (3) speed.
Compressed It is believed to have been one of the earliest inhabitants of this place, and it is believed to be one of the oldest inhabitants of this place.
(c) Reference and output translations of M2M-100, and compressed models.
Figure 6: Cross-attention matrices of an on-target losing sentence for the M2M-100 model and the pruned 30% model. The output translations show the hallucination of the compressed model. The source language is Asturian.
##### Hallucinations.
Hallucination refers to the case in which a model generates an output unrelated to the source sentence. Lee et al. ([2018](https://arxiv.org/html/2205.10828#bib.bib28)) have shown that cases of hallucination exhibit distinctive cross-attention matrices. Figure [6](https://arxiv.org/html/2205.10828#S4.F6) shows an example of cross-attention matrices for a losing sentence, where the translation of the compressed model is considered a hallucination. As expected, the translated tokens ignore the alignment with the source sequence. To quantitatively analyze the hallucination effect on all on-target losing sentences (excluding off-target translations), we define the relative alignment metric as:
$$\lambda = \frac{\mathrm{var}_{\mathrm{comp}}}{\mathrm{var}_{\mathrm{base}}} \tag{1}$$
where $\mathrm{var}$ is defined as:
$$\begin{cases} \mathrm{var} = \frac{1}{|I| \cdot |J|} \sum_{i \in I} \sum_{j \in J} \alpha_{i,j} \, (\mu_i - j)^2 \\ \mu_i = \sum_{j \in J} j \cdot \alpha_{i,j} \end{cases} \tag{2}$$
where $I$ and $J$ correspond to the source and target sequences, respectively, and $\alpha_{i,j}$ is the attention weight, averaged over all layers and all attention heads. Following Vig and Belinkov ([2019](https://arxiv.org/html/2205.10828#bib.bib48)) and Kim et al. ([2021b](https://arxiv.org/html/2205.10828#bib.bib25)), the variance ($\mathrm{var}$) is high when the target sequence pays attention to a very small subset of source tokens (hallucination), and low when the cross-attention matrix is close to a diagonal matrix (an approximation of the perfect alignment matrix). Table [4](https://arxiv.org/html/2205.10828#S4.T4) displays the relative alignment ($\lambda$) metric for the different compressed models. Since the metric is higher than 1 for all compressed models, it confirms that the translations of compressed models contain more hallucinated sentences. Lastly, we provide a list of the most affected language pairs in Appendix [G](https://arxiv.org/html/2205.10828#A7) for further study.
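Equations (1)-(2) can be sketched with NumPy as follows, under the assumption that the cross-attention weights have already been averaged over layers and heads into a single $|I| \times |J|$ matrix (the function names are ours):

```python
import numpy as np

def attention_variance(alpha: np.ndarray) -> float:
    """var = 1/(|I||J|) * sum_{i,j} alpha[i,j] * (mu_i - j)^2, with mu_i = sum_j j*alpha[i,j]."""
    n_src, n_tgt = alpha.shape
    j = np.arange(n_tgt)
    mu = alpha @ j  # expected attended position mu_i for each row i
    return float((alpha * (mu[:, None] - j[None, :]) ** 2).sum() / (n_src * n_tgt))

def relative_alignment(alpha_comp: np.ndarray, alpha_base: np.ndarray) -> float:
    """lambda = var_comp / var_base; values above 1 suggest more hallucination after compression."""
    return attention_variance(alpha_comp) / attention_variance(alpha_base)

# A near-diagonal matrix (good alignment) yields a low variance, while a flat
# matrix (attention spread regardless of position) yields a high variance.
diagonal = np.eye(4)
flat = np.full((4, 4), 0.25)
```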
| Model | λ | No. On-Target sents |
| --- | --- | --- |
| Pruned 30% | 2.95 | 1,312 |
| Pruned 45% | 3.01 | 7,192 |
| Quantized | 1.96 | 221 |
Table 4: Total number of on-target sentences (excluding off-target translations) and the relative alignment ($\lambda$) metric on the losing pairs subset.
| Model | λ | Total No. |
| --- | --- | --- |
| Pruned 30% M2M-100 | 0.42 | 863 |
| Pruned 45% M2M-100 | 0.15 | 1,455 |
| Quantized M2M-100 | 0.52 | 308 |
Table 5: The relative alignment ($\lambda$) metric for the different compressed models on the winning pairs subset.

(a) M2M-100 Model

(b) Compressed Model
Reference Crossties were introduced fairly early to hold the tracks in place. Gradually, however, it was realised that tracks would be more efficient if they had a stip of iron on the top.
M2M-100 Cucumbers Zucchini Summer Squash Carrots Kale Radishes Broccoli Rosemary Basil Pole Beans Peas Arugula Bibb Lettuce Cutting Lettuces Potatoes
Compressed Crossbars were inserted fairly early in order to keep the tracks in place. Gradually, however, it was realized that the tracks would be more effective if there were an iron strip at the top.
(c) Reference and output translations of M2M-100, and compressed models.
Figure 7: Cross-attention matrices of a winning sentence for the M2M-100 model and the pruned 30% model. The output translations show the hallucination of the M2M-100 model. The source language is Afrikaans.
#### 4.1.3 Analysis of Winning Pairs
When manually inspecting examples from the translations of winning pairs, we find that many of them are cases where the baseline model generates hallucinations while the compressed model generates acceptable translations, as shown in Figure [7](https://arxiv.org/html/2205.10828#S4.F7). Recall from Figure [5](https://arxiv.org/html/2205.10828#S4.F5) that most of the winning pairs (right side) belong to medium-resource languages (the normalized distribution in Appendix [E](https://arxiv.org/html/2205.10828#A5) shows the same behavior), which include a moderate amount of training instances and could contain some poorly aligned parallel sentences. Raunak et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib38)) connect the phenomenon of hallucination to corpus-level noise and suggest that it can also be amplified by back-translation (used for data augmentation when training the M2M-100 model). Compression therefore seems to remove the memorization of noisy samples, which matters most for medium-resource languages, thus fixing some cases of hallucination. In Table [5](https://arxiv.org/html/2205.10828#S4.T5), we compute the total number of winning sentences and the relative alignment metric ($\lambda$) for the compressed models with respect to the M2M-100 model. Since $\lambda$ is lower than 1, it confirms that compression removes the noisy memorization of medium-resource languages and benefits the generalization of the model. Ahia et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib2)) made a similar observation for bilingual MT models. Interestingly, the number of winning sentences increases as the model gets sparser (1,455 vs. 863). Figure [8](https://arxiv.org/html/2205.10828#S4.F8) shows that the new sentences mostly belong to medium-resource languages. Finally, a list of the most winning language pairs is provided in Appendix [G](https://arxiv.org/html/2205.10828#A7).
### 4.2 Gender Bias Analysis

Figure 8: Number of sentences in winning pairs added to each language category after increasing the sparsity from 30% to 45%.
We evaluate M2M-100 and our compressed models on the MT-Gender benchmark (Stanovsky et al., [2019](https://arxiv.org/html/2205.10828#bib.bib43); Kocmi et al., [2020](https://arxiv.org/html/2205.10828#bib.bib26)). Inspired by Boito et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib5)), we use a fairness metric to compare the behavior of compressed models on the male and female subsets:
$$\psi = \frac{f_m - f_f}{f_m + f_f} \tag{3}$$
where $f_m$ and $f_f$ refer to the F1 scores on the male and female subsets, respectively. If $\psi$ is near zero, the model is not biased toward either gender, while $\psi$ values of +1 or -1 mean that the model is completely biased toward male or female, respectively. We extend the fairness metric to the pro- and anti-stereotypical subsets (pro-stereotypical sentences are samples where context and occupation match, e.g. "The carpenter stopped the housekeeper and helped her.", while the anti-stereotypical subset contains sentences where context and occupation do not match) as follows:
$$\psi^{\ast} = |\psi_{\mathrm{anti}} - \psi_{\mathrm{pro}}| \tag{4}$$
where $\psi_{\mathrm{pro}}$ and $\psi_{\mathrm{anti}}$ are the fairness metrics on the pro- and anti-stereotypical sections. Intuitively, if the model behaves differently on the pro- and anti-stereotypical subsets, the absolute difference between $\psi_{\mathrm{anti}}$ and $\psi_{\mathrm{pro}}$ increases. The proposed metrics differ from the simple absolute score difference of Kocmi et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib26)); more details are given in Appendix [H](https://arxiv.org/html/2205.10828#A8).
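Equations (3)-(4) amount to the following small sketch (the F1 values in the example are hypothetical illustration inputs, not benchmark scores):

```python
# Gender fairness metric psi and its pro/anti-stereotypical extension psi*.

def psi(f_male: float, f_female: float) -> float:
    """psi = (f_m - f_f) / (f_m + f_f): 0 means unbiased, +1/-1 fully biased."""
    return (f_male - f_female) / (f_male + f_female)

def psi_star(psi_pro: float, psi_anti: float) -> float:
    """psi* = |psi_anti - psi_pro|: gap between pro- and anti-stereotypical behavior."""
    return abs(psi_anti - psi_pro)

# Hypothetical F1 scores on the two subsets:
psi_pro_val = psi(f_male=0.80, f_female=0.70)   # mildly male-leaning
psi_anti_val = psi(f_male=0.75, f_female=0.45)  # strongly male-leaning
gap = psi_star(psi_pro_val, psi_anti_val)
```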
Average fairness metrics over 10 languages are shown in Table [6](https://arxiv.org/html/2205.10828#S4.T6). Increasing the sparsity ratio results in a more biased model, as $\psi$ and $\psi^{\ast}$ increase relatively by +67.2% and +25.9%, respectively. Quantization has less effect on gender bias, as both $\psi$ and $\psi^{\ast}$ change negligibly after applying it. Detailed results for each language are provided in Appendix [I](https://arxiv.org/html/2205.10828#A9). Interestingly, pruning at 30% strongly increases the gender bias even for high-resource languages, e.g. French and German, while spBLEU stays almost the same after compression.
| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 17.36 | 16.51 |
| Pruned 30% M2M-100 | 21.65 (+24.7) | 19.52 (+18.25) |
| Pruned 45% M2M-100 | 29.03 (+67.2) | 20.8 (+25.9) |
| Quantized M2M-100 | 18.24 (+5.1) | 15.53 (-5.8) |
Table 6: Average fairness metrics over the languages of MT-Gender (Stanovsky et al., [2019](https://arxiv.org/html/2205.10828#bib.bib43)). Numbers in parentheses are the relative score differences between each compressed model and the M2M-100 model.
### 4.3 Word Sense Disambiguation Benchmark
| Model | SFII | SPDI | MFS | MFS+ | AVG |
| --- | --- | --- | --- | --- | --- |
| Baseline | 77.6 | 71.6 | 52.8 | 87.6 | 72.4 |
| Pruned 30% | 76.4 | 72.2 | 52.9 | 87.8 | 72.4 |
| Pruned 45% | 80.2 | 74.8 | 53.4 | 87.8 | 74.1 |
| Quantized | 79.5 | 74 | 53.7 | 88.8 | 74 |
Table 7: Average semantic bias metrics over the languages of DiBiMT (Campolungo et al., [2022](https://arxiv.org/html/2205.10828#bib.bib8)). The last column is the average of the bias metrics for each model.
In this section, we analyze the impact of the compression on semantic biases by evaluating our models on a multilingual word sense disambiguation benchmark. We first detail metrics used in Campolungo et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib8)) to measure semantic biases.
##### Notation.
Given a specific word $w_i$, $l_{w_i}$ is defined as its (lemma, part-of-speech tag) pair. $\Pi_L(l_{w_i}) = \{\sigma_1, \ldots, \sigma_n\}$ is the list of synsets ordered by WordNet's sense frequency (Miller et al., [1990](https://arxiv.org/html/2205.10828#bib.bib33)) in language $L$. For instance, for the English noun *shot* it is built as {the act of firing, photograph, drink, ...}. $C_{l_{w_i}}(\sigma)$ is the index of synset $\sigma$ in $\Pi_L(l_{w_i})$.
##### SFII
is calculated as the error rate averaged over $C_{l_{w_i}}(\sigma)$ for different positions and words $w_i$. Intuitively, it measures the sensitivity of the model when predicting a sense with respect to its index in $\Pi_L(l_{w_i})$.
##### SPDI
is computed as the average error rate based on polysemy degrees of synsets.
##### MFS
measures how often the model chooses a more frequent sense than the correct one. Given $C_{l_{w_i}}(\sigma)$ for a gold synset, it is increased whenever the model predicts a synset $\sigma'$ with $C_{l_{w_i}}(\sigma') < C_{l_{w_i}}(\sigma)$.
##### MFS+.
It is similar to the MFS metric, but it only increases when $C_{l_{w_i}}(\sigma')$ equals 1, i.e. when the predicted synset is the most frequent sense.
Since metrics are based on the error rate, the lower values show that the model is less biased.
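Under the (assumed) convention that $C_{l_{w_i}}$ is a 1-based index with 1 denoting the most frequent sense, the MFS and MFS+ counters can be sketched as follows (the pair encoding and function name are ours):

```python
# MFS / MFS+ counting over (gold_index, pred_index) pairs of sense-frequency
# indices (1 = most frequent sense). Correct predictions are not counted.

def mfs_counts(pairs):
    """Return (MFS, MFS+) counts."""
    mfs = sum(1 for gold, pred in pairs if pred < gold)                      # any more frequent sense
    mfs_plus = sum(1 for gold, pred in pairs if pred == 1 and pred != gold)  # the most frequent sense
    return mfs, mfs_plus

# Example: three errors toward more frequent senses, two of them toward the
# most frequent sense; the (1, 1) pair is a correct prediction and is ignored.
mfs, mfs_plus = mfs_counts([(3, 1), (2, 1), (1, 1), (4, 2)])
```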
Table [7](https://arxiv.org/html/2205.10828#S4.T7) reports the semantic bias scores, averaged over all languages in DiBiMT (Campolungo et al., [2022](https://arxiv.org/html/2205.10828#bib.bib8)); detailed results are provided in Appendix [J](https://arxiv.org/html/2205.10828#A10). The last column is the average of the semantic bias metrics for each model. According to the average bias score, the quantized and pruned 45% models amplify the bias metric by 1.6 and 1.7 points on average compared to M2M-100, respectively. This confirms that compression amplifies semantic bias while keeping almost the same BLEU performance, especially in the case of quantization (average BLEU scores are shown in Table [2](https://arxiv.org/html/2205.10828#S4.T2)).
5 Related Work
--------------
The first connection between compression and bias amplification was made by Hooker et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib19), [2020](https://arxiv.org/html/2205.10828#bib.bib20)) in the case of image classification. The same authors proposed an approach to find a subset of the dataset containing samples with disproportionately high errors after compression. Recent work also analyzes the effect of compression on pre-trained language models (Xu et al., [2021](https://arxiv.org/html/2205.10828#bib.bib53); Lauscher et al., [2021](https://arxiv.org/html/2205.10828#bib.bib27); Du et al., [2021](https://arxiv.org/html/2205.10828#bib.bib12); Ogueji et al., [2022](https://arxiv.org/html/2205.10828#bib.bib34)). Notably, de Vassimon Manela et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib10)) demonstrated higher gender bias in compressed pre-trained language models. Concerning NMT, Renduchintala et al. ([2021](https://arxiv.org/html/2205.10828#bib.bib39)) demonstrated that optimizing for inference speed-up may result in gender bias amplification. To the best of our knowledge, this work is the first in-depth study of the impact of compression on massively multilingual models. We hope our findings will encourage further research on this topic.
6 Conclusion
------------
We demonstrate the impact of applying compression methods to massively multilingual machine translation models by evaluating compressed models on FLORES-101 (Goyal et al., [2021b](https://arxiv.org/html/2205.10828#bib.bib18)), the gender bias benchmark of Stanovsky et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib43)), and the word sense disambiguation benchmark of Campolungo et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib8)). We show that while the average BLEU drops negligibly, the performance of under-represented language pairs drops drastically. Interestingly, sparsity improves the performance of some medium-resource language pairs by removing noisy memorization. By evaluating our compressed models on gender bias and word sense disambiguation benchmarks, we show that compression amplifies intrinsic gender and semantic biases, even in high-resource language pairs. We hope our findings can serve as a starting point for considering fairness aspects when compressing multilingual models.
Limitations
-----------
Our compression techniques are limited to post-training quantization and magnitude pruning without additional fine-tuning, due to the huge cost of fine-tuning these massively multilingual models. Future research could extend our analysis to compression methods with additional fine-tuning, e.g. knowledge distillation (Kim and Rush, [2016](https://arxiv.org/html/2205.10828#bib.bib24)) and training-aware pruning and quantization (Behnke and Heafield, [2020](https://arxiv.org/html/2205.10828#bib.bib3); Zhang et al., [2021](https://arxiv.org/html/2205.10828#bib.bib60); Yao et al., [2022](https://arxiv.org/html/2205.10828#bib.bib57)). We analyze our compressed models based on the amount of available training data for each language pair, gender bias, and word sense disambiguation bias. Future research could also apply our analysis to other linguistic biases in machine translation.
Acknowledgement
---------------
Alireza Mohammadshahi is supported by the Swiss National Science Foundation(grant number CRSII5-180320).
References
----------
* Aharoni et al. (2019) Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. [Massively multilingual neural machine translation](https://doi.org/10.18653/v1/N19-1388). In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 3874–3884, Minneapolis, Minnesota. Association for Computational Linguistics.
* Ahia et al. (2021) Orevaoghene Ahia, Julia Kreutzer, and Sara Hooker. 2021. [The low-resource double bind: An empirical study of pruning for low-resource machine translation](https://doi.org/10.18653/v1/2021.findings-emnlp.282). In _Findings of the Association for Computational Linguistics: EMNLP 2021_, pages 3316–3333, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Behnke and Heafield (2020) Maximiliana Behnke and Kenneth Heafield. 2020. [Losing heads in the lottery: Pruning transformer attention in neural machine translation](https://doi.org/10.18653/v1/2020.emnlp-main.211). In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 2664–2674, Online. Association for Computational Linguistics.
* Bentivogli et al. (2020) Luisa Bentivogli, Beatrice Savoldi, Matteo Negri, Mattia A. Di Gangi, Roldano Cattoni, and Marco Turchi. 2020. [Gender in danger? evaluating speech translation technology on the MuST-SHE corpus](https://doi.org/10.18653/v1/2020.acl-main.619). In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 6923–6933, Online. Association for Computational Linguistics.
* Boito et al. (2022) Marcely Zanon Boito, Laurent Besacier, Natalia Tomashenko, and Yannick Estève. 2022. [A study of gender impact in self-supervised models for speech-to-text systems](https://doi.org/10.48550/ARXIV.2204.01397).
* Bondarenko et al. (2021) Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. 2021. [Understanding and overcoming the challenges of efficient transformer quantization](https://doi.org/10.18653/v1/2021.emnlp-main.627). In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 7947–7969, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. [Language models are few-shot learners](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf). In _Advances in Neural Information Processing Systems_, volume 33, pages 1877–1901. Curran Associates, Inc.
* Campolungo et al. (2022) Niccolò Campolungo, Federico Martelli, Francesco Saina, and Roberto Navigli. 2022. [DiBiMT: A novel benchmark for measuring Word Sense Disambiguation biases in Machine Translation](https://aclanthology.org/2022.acl-long.298). In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 4331–4352, Dublin, Ireland. Association for Computational Linguistics.
* Dai et al. (2019) Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. [Transformer-XL: Attentive language models beyond a fixed-length context](https://doi.org/10.18653/v1/P19-1285). In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 2978–2988, Florence, Italy. Association for Computational Linguistics.
* de Vassimon Manela et al. (2021) Daniel de Vassimon Manela, David Errington, Thomas Fisher, Boris van Breugel, and Pasquale Minervini. 2021. [Stereotype and skew: Quantifying gender bias in pre-trained and fine-tuned language models](https://doi.org/10.18653/v1/2021.eacl-main.190). In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_, pages 2232–2242, Online. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. [BERT: Pre-training of deep bidirectional transformers for language understanding](https://doi.org/10.18653/v1/N19-1423). In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Du et al. (2021) Mengnan Du, Subhabrata Mukherjee, Yu Cheng, Milad Shokouhi, Xia Hu, and Ahmed Hassan Awadallah. 2021. [What do compressed large language models forget? robustness challenges in model compression](https://doi.org/10.48550/ARXIV.2110.08419).
* El-Kishky et al. (2020) Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. [CCAligned: A massive collection of cross-lingual web-document pairs](https://doi.org/10.18653/v1/2020.emnlp-main.480). In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 5960–5969, Online. Association for Computational Linguistics.
* Fan et al. (2020) Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2020. [Beyond english-centric multilingual machine translation](https://doi.org/10.48550/ARXIV.2010.11125).
* Frankle and Carbin (2019) Jonathan Frankle and Michael Carbin. 2019. [The lottery ticket hypothesis: Finding sparse, trainable neural networks](https://openreview.net/forum?id=rJl-b3RcF7). In _International Conference on Learning Representations_.
* Gale et al. (2019) Trevor Gale, Erich Elsen, and Sara Hooker. 2019. [The state of sparsity in deep neural networks](https://doi.org/10.48550/ARXIV.1902.09574).
* Goyal et al. (2021a) Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, and Alexis Conneau. 2021a. [Larger-scale transformers for multilingual masked language modeling](https://arxiv.org/abs/2105.00572).
* Goyal et al. (2021b) Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc’Aurelio Ranzato, Francisco Guzman, and Angela Fan. 2021b. [The flores-101 evaluation benchmark for low-resource and multilingual machine translation](https://doi.org/10.48550/ARXIV.2106.03193).
* Hooker et al. (2019) Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, and Andrea Frome. 2019. [What do compressed deep neural networks forget?](https://doi.org/10.48550/ARXIV.1911.05248)
* Hooker et al. (2020) Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. 2020. [Characterising bias in compressed models](https://doi.org/10.48550/ARXIV.2010.03058).
* Joulin et al. (2016a) Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. 2016a. Fasttext.zip: Compressing text classification models. _arXiv preprint arXiv:1612.03651_.
* Joulin et al. (2016b) Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016b. Bag of tricks for efficient text classification. _arXiv preprint arXiv:1607.01759_.
* Kim et al. (2021a) Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021a. [I-bert: Integer-only bert quantization](https://proceedings.mlr.press/v139/kim21d.html). In _Proceedings of the 38th International Conference on Machine Learning_, volume 139 of _Proceedings of Machine Learning Research_, pages 5506–5518. PMLR.
* Kim and Rush (2016) Yoon Kim and Alexander M. Rush. 2016. [Sequence-level knowledge distillation](https://doi.org/10.18653/v1/D16-1139). In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_, pages 1317–1327, Austin, Texas. Association for Computational Linguistics.
* Kim et al. (2021b) Zae Myung Kim, Laurent Besacier, Vassilina Nikoulina, and Didier Schwab. 2021b. [Do multilingual neural machine translation models contain language pair specific attention heads?](https://doi.org/10.18653/v1/2021.findings-acl.250) In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_, pages 2832–2841, Online. Association for Computational Linguistics.
* Kocmi et al. (2020) Tom Kocmi, Tomasz Limisiewicz, and Gabriel Stanovsky. 2020. [Gender coreference and bias evaluation at WMT 2020](https://aclanthology.org/2020.wmt-1.39). In _Proceedings of the Fifth Conference on Machine Translation_, pages 357–364, Online. Association for Computational Linguistics.
* Lauscher et al. (2021) Anne Lauscher, Tobias Lueken, and Goran Glavaš. 2021. [Sustainable modular debiasing of language models](https://doi.org/10.18653/v1/2021.findings-emnlp.411). In _Findings of the Association for Computational Linguistics: EMNLP 2021_, pages 4782–4797, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Lee et al. (2018) Chanhee Lee, Young-Bum Kim, Dongyub Lee, and Heuiseok Lim. 2018. [Character-level feature extraction with densely connected networks](https://www.aclweb.org/anthology/C18-1273). In _Proceedings of the 27th International Conference on Computational Linguistics_, pages 3228–3239, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
* Li et al. (2020) Bei Li, Ziyang Wang, Hui Liu, Quan Du, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2020. [Learning light-weight translation models from deep transformer](https://doi.org/10.48550/ARXIV.2012.13866).
* Liang et al. (2021) Tailin Liang, John Glossner, Lei Wang, Shaobo Shi, and Xiaotong Zhang. 2021. [Pruning and quantization for deep neural network acceleration: A survey](https://doi.org/10.48550/ARXIV.2101.09671).
* Menghani (2021) Gaurav Menghani. 2021. [Efficient deep learning: A survey on making deep learning models smaller, faster, and better](https://doi.org/10.48550/ARXIV.2106.08962).
* Zhu and Gupta (2018) Michael H. Zhu and Suyog Gupta. 2018. [To prune, or not to prune: Exploring the efficacy of pruning for model compression](https://openreview.net/forum?id=S1lN69AT-).
* Miller et al. (1990) George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. [Introduction to WordNet: An On-line Lexical Database](https://doi.org/10.1093/ijl/3.4.235). _International Journal of Lexicography_, 3(4):235–244.
* Ogueji et al. (2022) Kelechi Ogueji, Orevaoghene Ahia, Gbemileke Onilude, Sebastian Gehrmann, Sara Hooker, and Julia Kreutzer. 2022. [Intriguing properties of compression on multilingual models](http://arxiv.org/abs/2211.02738).
* Patterson et al. (2021) David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. 2021. [Carbon emissions and large neural network training](https://doi.org/10.48550/ARXIV.2104.10350).
* Popović (2015) Maja Popović. 2015. [chrF: character n-gram f-score for automatic MT evaluation](https://doi.org/10.18653/v1/W15-3049). In _Proceedings of the Tenth Workshop on Statistical Machine Translation_, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.
* Post (2018) Matt Post. 2018. [A call for clarity in reporting BLEU scores](https://doi.org/10.18653/v1/W18-6319). In _Proceedings of the Third Conference on Machine Translation: Research Papers_, pages 186–191, Belgium, Brussels. Association for Computational Linguistics.
* Raunak et al. (2021) Vikas Raunak, Arul Menezes, and Marcin Junczys-Dowmunt. 2021. [The curious case of hallucinations in neural machine translation](https://doi.org/10.18653/v1/2021.naacl-main.92). In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, pages 1172–1183, Online. Association for Computational Linguistics.
* Renduchintala et al. (2021) Adithya Renduchintala, Denise Diaz, Kenneth Heafield, Xian Li, and Mona Diab. 2021. [Gender bias amplification during speed-quality optimization in neural machine translation](https://doi.org/10.18653/v1/2021.acl-short.15). In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_, pages 99–109, Online. Association for Computational Linguistics.
* Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. [Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter](https://doi.org/10.48550/ARXIV.1910.01108).
* Savoldi et al. (2022) Beatrice Savoldi, Marco Gaido, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2022. [Under the morphosyntactic lens: A multifaceted evaluation of gender bias in speech translation](https://doi.org/10.18653/v1/2022.acl-long.127). In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 1807–1824, Dublin, Ireland. Association for Computational Linguistics.
* Schwenk et al. (2021) Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, Armand Joulin, and Angela Fan. 2021. [CCMatrix: Mining billions of high-quality parallel sentences on the web](https://doi.org/10.18653/v1/2021.acl-long.507). In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 6490–6500, Online. Association for Computational Linguistics.
* Stanovsky et al. (2019) Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. [Evaluating gender bias in machine translation](https://doi.org/10.18653/v1/P19-1164). In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 1679–1684, Florence, Italy. Association for Computational Linguistics.
* Strubell et al. (2019) Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. [Energy and policy considerations for deep learning in NLP](https://doi.org/10.18653/v1/P19-1355). In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 3645–3650, Florence, Italy. Association for Computational Linguistics.
* Tang et al. (2020) Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. [Multilingual translation with extensible multilingual pretraining and finetuning](https://doi.org/10.48550/ARXIV.2008.00401).
* Tao et al. (2022) Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, and Ngai Wong. 2022. [Compression of generative pre-trained language models via quantization](https://doi.org/10.48550/ARXIV.2203.10705).
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. [Attention is all you need](https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf). In _Advances in Neural Information Processing Systems_, volume 30. Curran Associates, Inc.
* Vig and Belinkov (2019) Jesse Vig and Yonatan Belinkov. 2019. [Analyzing the structure of attention in a transformer language model](https://doi.org/10.18653/v1/W19-4808). In _Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_, pages 63–76, Florence, Italy. Association for Computational Linguistics.
* Wang et al. (2021) Fusheng Wang, Jianhao Yan, Fandong Meng, and Jie Zhou. 2021. [Selective knowledge distillation for neural machine translation](https://doi.org/10.18653/v1/2021.acl-long.504). In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 6456–6466, Online. Association for Computational Linguistics.
* Wang et al. (2022) Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei. 2022. [Deepnet: Scaling transformers to 1,000 layers](https://doi.org/10.48550/ARXIV.2203.00555).
* Wei et al. (2022) Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, and Fengwei Yu. 2022. [Qdrop: Randomly dropping quantization for extremely low-bit post-training quantization](https://doi.org/10.48550/ARXIV.2203.05740).
* Wu et al. (2020) Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev, and Paulius Micikevicius. 2020. [Integer quantization for deep learning inference: Principles and empirical evaluation](https://doi.org/10.48550/ARXIV.2004.09602).
* Xu et al. (2021) Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian McAuley, and Furu Wei. 2021. [Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression](https://doi.org/10.18653/v1/2021.emnlp-main.832). In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 10653–10659, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Xu et al. (2018) Chen Xu, Jianqiang Yao, Zhouchen Lin, Wenwu Ou, Yuanbin Cao, Zhirong Wang, and Hongbin Zha. 2018. [Alternating multi-bit quantization for recurrent neural networks](https://doi.org/10.48550/ARXIV.1802.00150).
* Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. [Xlnet: Generalized autoregressive pretraining for language understanding](https://proceedings.neurips.cc/paper/2019/file/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf). In _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc.
* Yang et al. (2022) Ziqing Yang, Yiming Cui, and Zhigang Chen. 2022. [Textpruner: A model pruning toolkit for pre-trained language models](https://doi.org/10.48550/ARXIV.2203.15996).
* Yao et al. (2022) Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, and Yuxiong He. 2022. [Zeroquant: Efficient and affordable post-training quantization for large-scale transformers](http://arxiv.org/abs/2206.01861).
* Zhang et al. (2020) Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. [Improving massively multilingual neural machine translation and zero-shot translation](https://doi.org/10.18653/v1/2020.acl-main.148). In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 1628–1639, Online. Association for Computational Linguistics.
* Zhang et al. (2022) Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. [Opt: Open pre-trained transformer language models](https://doi.org/10.48550/ARXIV.2205.01068).
* Zhang et al. (2021) Tianfu Zhang, Heyan Huang, Chong Feng, and Longbing Cao. 2021. [Enlivening redundant heads in multi-head self-attention for machine translation](https://doi.org/10.18653/v1/2021.emnlp-main.260). In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 3238–3248, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Appendix A Magnitude Pruning Strategy
-------------------------------------
Figure [9](https://arxiv.org/html/2205.10828#A1.F9 "Figure 9 ‣ Appendix A Magnitude Pruning Strategy ‣ What Do Compressed Multilingual Machine Translation Models Forget?") shows the performance of pruned models under different pruning strategies. The results show that computing the pruning threshold over each whole Transformer layer is slightly better than computing it over each module of the model, or separately over the self-attention and feed-forward sublayers.

Figure 9: Average spBLEU score of different magnitude pruning strategies on 9 FLORES-101 language pairs, defined in Appendix [C](https://arxiv.org/html/2205.10828#A3 "Appendix C Language Pairs for Selection of Sparsity Ratio ‣ What Do Compressed Multilingual Machine Translation Models Forget?").
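These strategies differ only in the pool of weights over which the magnitude threshold is computed: one pool per Transformer layer, one per module, or one per sublayer type. A minimal sketch of the shared core operation, magnitude pruning over a given pool of weight tensors (shapes and names are illustrative, not the M2M-100 implementation):

```python
import numpy as np

def magnitude_prune(tensors, sparsity):
    """Zero out the smallest-magnitude fraction of weights in `tensors`,
    treating all given tensors as a single pool."""
    flat = np.concatenate([t.ravel() for t in tensors])
    k = int(sparsity * flat.size)
    if k == 0:
        return [t.copy() for t in tensors]
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(flat), k - 1)[k - 1]
    return [np.where(np.abs(t) <= threshold, 0.0, t) for t in tensors]

# Per-Transformer-layer strategy: pool the self-attention and
# feed-forward weights of one layer together (illustrative shapes).
layer_weights = [np.random.randn(8, 8), np.random.randn(8, 32)]
pruned = magnitude_prune(layer_weights, sparsity=0.3)
# A per-module strategy would instead call magnitude_prune on each
# tensor separately, giving each module its own threshold.
```

Pooling more tensors together lets small weights in one module compensate for large weights in another, which is the difference the figure measures.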

Appendix B Selection of Language Pairs in FLORES-101
----------------------------------------------------
Figure [10](https://arxiv.org/html/2205.10828#A2.F10 "Figure 10 ‣ Appendix B Selection of Language Pairs in FLORES-101 ‣ What Do Compressed Multilingual Machine Translation Models Forget?") shows the distribution of the different language pair categories (defined in Table [1](https://arxiv.org/html/2205.10828#S4.T1 "Table 1 ‣ 4.1 Compression Impact Across Languages ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?")) based on the spBLEU score of the M2M-100 12B model Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)). We use 12 spBLEU as the threshold, which is approximately the average over the medians of the different language pair categories.

Figure 10: Histogram of number of language pairs based on spBLEU score for different language pair categories.

Table [8](https://arxiv.org/html/2205.10828#A2.T8 "Table 8 ‣ Appendix B Selection of Language Pairs in FLORES-101 ‣ What Do Compressed Multilingual Machine Translation Models Forget?") shows the number of language pairs in each category after the filtering.

|  | Very-Low | Low | Medium | High |
| --- | --- | --- | --- | --- |
| Very-Low | 10 | 51 | 157 | 33 |
| Low | 58 | 164 | 643 | 143 |
| Medium | 108 | 440 | 1,277 | 257 |
| High | 23 | 103 | 252 | 39 |

Table 8: Number of language pairs in each category after the filtering.
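The filtering itself is a simple threshold on the original model's spBLEU scores. A sketch with a few hypothetical entries (the pairs and scores below are only examples; 12.0 is the cutoff described above):

```python
# (source-target pair, resource category, original M2M-100 spBLEU) —
# hypothetical example entries, not the full FLORES-101 evaluation.
scores = [
    ("en-fr", "high-to-high", 51.3),
    ("da-bg", "medium-to-medium", 37.8),
    ("xh-ig", "low-to-low", 3.1),
]

SPBLEU_THRESHOLD = 12.0  # approx. average over category medians (see text)

# Keep only pairs the original model already translates reasonably well,
# so compression effects are not confounded with near-zero baselines.
kept = [(pair, cat) for pair, cat, s in scores if s >= SPBLEU_THRESHOLD]
```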

Appendix C Language Pairs for Selection of Sparsity Ratio
---------------------------------------------------------
| Language Pair | Resource-Type | M2M-100 spBLEU |
| --- | --- | --- |
| Bosnian-Afrikaans | low-to-low | 29.9 |
| Afrikaans-Bulgarian | low-to-medium | 37.3 |
| Afrikaans-French | low-to-high | 41.5 |
| Catalan-Asturian | medium-to-low | 29.7 |
| Danish-Bulgarian | medium-to-medium | 37.8 |
| Swedish-Spanish | medium-to-high | 27.5 |
| French-Afrikaans | high-to-low | 30.9 |
| Spanish-Swedish | high-to-medium | 27.5 |
| English-French | high-to-high | 51.3 |

Table 9: Subset of language pairs used to compute the average spBLEU score of Figure [2](https://arxiv.org/html/2205.10828#S4.F2 "Figure 2 ‣ 4.1 Compression Impact Across Languages ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?"). The M2M-100 model achieves reasonable performance for all selected pairs, as shown in the last column.

Appendix D Relative spBLEU based on Resource Type of Target and Source
----------------------------------------------------------------------

(a) Source Resource Type

(b) Target Resource Type

Figure 11: Relative spBLEU difference (%) between compressed models and M2M-100 model grouped by the resource type of source or target languages.

Appendix E ChrF Difference Analysis
-----------------------------------
### E.A Pruned 30% Model

(a) Absolute number of sentences.

(b) Normalized distribution of sentences.

(c) Normalized distribution of sentences in each bin for different categories.

Figure 12: ChrF analysis of pruned 30% M2M-100 model.

### E.B Pruned 45% Model

(a) Absolute number of sentences.

(b) Normalized distribution of sentences.

(c) Normalized distribution of sentences in each bin for different categories.

Figure 13: ChrF analysis of pruned 45% M2M-100 model.

### E.C Quantized Model

(a) Absolute number of sentences.

(b) Normalized distribution of sentences.

(c) Normalized distribution of sentences in each bin for different categories.

Figure 14: ChrF analysis of quantized M2M-100 model.
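The chrF metric underlying this analysis is the character n-gram F-score of Popović (2015). A simplified sentence-level sketch (uniform averaging over n-gram orders, whitespace stripped; toolkit implementations such as sacreBLEU differ in details):

```python
from collections import Counter

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified character n-gram F-score (chrF, Popović 2015)."""
    hyp = hypothesis.replace(" ", "")
    ref = reference.replace(" ", "")
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        # Clipped n-gram matches between hypothesis and reference.
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        if hyp_ngrams:
            precisions.append(overlap / sum(hyp_ngrams.values()))
        if ref_ngrams:
            recalls.append(overlap / sum(ref_ngrams.values()))
    p = sum(precisions) / len(precisions) if precisions else 0.0
    r = sum(recalls) / len(recalls) if recalls else 0.0
    if p + r == 0:
        return 0.0
    # F-score with recall weighted beta times as much as precision.
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

Because it operates on characters rather than tokens, chrF gives partial credit for morphological variants, which is why it is used here alongside spBLEU.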

Appendix F Languages with Two Scripts in M2M-100 Training
---------------------------------------------------------
| ISO | Language |
| --- | --- |
| sr | Serbian |
| cy | Welsh |
| az | Azerbaijani |
| uz | Uzbek |
| ja | Japanese |
| bn | Bengali |
| lo | Lao |
| zh | Chinese |

Table 10: Languages for which M2M-100 training data contains two scripts, while FLORES-101 provides one script for the evaluation.

Appendix G Most Affected Language Pairs After Compression
---------------------------------------------------------
Language pairs are selected if both quantization and pruning have a significant effect on them (based on the spBLEU performance shown in Figure [3](https://arxiv.org/html/2205.10828#S4.F3 "Figure 3 ‣ 4.1 Compression Impact Across Languages ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?")).

| Source | Target |
| --- | --- |
| Catalan | Cebuano |
| Latvian | Igbo |
| Arabic | Igbo |
| Danish | Xhosa |
| French | Zulu |

(a) Most losing language pairs

| Source | Target |
| --- | --- |
| Latvian | Vietnamese |
| Bulgarian | Latvian |
| Arabic | Urdu |
| Thai | Vietnamese |
| Latvian | Italian |

(b) Most winning language pairs

Table 11: Most affected language pairs after the compression.

Appendix H Proposed Metrics for MT-Gender Benchmark
---------------------------------------------------
Equation [3](https://arxiv.org/html/2205.10828#S4.E3 "3 ‣ 4.2 Gender Bias Analysis ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?") considers the range of the F1 scores for the female and male subsets, whereas a simple difference between F1 scores does not. The range is crucial, since a model with the same F1-score difference but higher individual F1 scores should receive a lower fairness score, as reflected in Equation [3](https://arxiv.org/html/2205.10828#S4.E3 "3 ‣ 4.2 Gender Bias Analysis ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?").

We also believe Equation [4](https://arxiv.org/html/2205.10828#S4.E4 "4 ‣ 4.2 Gender Bias Analysis ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?") is a better metric than the simple difference between the model's accuracies on the pro-stereotypical and anti-stereotypical subsets, since it again considers the range of the scores and ignores missed translations and wrongly aligned genders. Additionally, it exactly reflects the difference in the model's behavior on these two subsets. If the compressed model behaves differently on the pro- and anti-stereotypical subsets, e.g. amplifying the bias more in the anti-stereotypical subset than in the pro-stereotypical one, or decreasing the bias more in one subset, then ψ* becomes higher. We suggest using Equation [3](https://arxiv.org/html/2205.10828#S4.E3 "3 ‣ 4.2 Gender Bias Analysis ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?") and Equation [4](https://arxiv.org/html/2205.10828#S4.E4 "4 ‣ 4.2 Gender Bias Analysis ‣ 4 Results and Discussion ‣ What Do Compressed Multilingual Machine Translation Models Forget?") for comparing models on the MT-Gender benchmark Stanovsky et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib43)); Kocmi et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib26)).
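The range-sensitivity requirement can be made concrete with a toy metric. The sketch below is not the paper's Equation 3 (which is defined in the main text); it only illustrates the stated property that, for a fixed F1 gap, higher individual F1 scores should yield a lower score, here by normalizing the gap by the larger F1:

```python
def illustrative_gender_gap(f1_male: float, f1_female: float) -> float:
    """Toy range-sensitive gap metric (NOT the paper's Equation 3):
    absolute F1 difference normalized by the larger of the two scores."""
    assert 0.0 < max(f1_male, f1_female) <= 1.0
    return abs(f1_male - f1_female) / max(f1_male, f1_female)

# Same absolute gap (0.10), but the model with higher individual
# F1 scores gets the lower (fairer) value.
low = illustrative_gender_gap(0.9, 0.8)
high = illustrative_gender_gap(0.5, 0.4)
```

Any metric with this property will rank the stronger model as fairer when the raw F1 difference is tied, which a plain subtraction cannot do.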

Appendix I MT-Gender Results per Language
-----------------------------------------
| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 21.01 | 15.09 |
| Pruned 30% M2M-100 | 20.71 | 16.87 |
| Pruned 45% M2M-100 | 28.58 | 17.33 |
| Quantized M2M-100 | 18.07 | 12.55 |

(a) Arabic

| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 39.02 | 11.39 |
| Pruned 30% M2M-100 | 45.19 | 7.15 |
| Pruned 45% M2M-100 | 45.56 | 18.54 |
| Quantized M2M-100 | 40.93 | 2.54 |

(b) Ukrainian

| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 7.98 | 20.09 |
| Pruned 30% M2M-100 | 10.38 | 16.30 |
| Pruned 45% M2M-100 | 8.89 | 2.75 |
| Quantized M2M-100 | 10.39 | 21.26 |

(c) Hebrew

| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 29.06 | 3.93 |
| Pruned 30% M2M-100 | 29.10 | 2.30 |
| Pruned 45% M2M-100 | 30.28 | 8.08 |
| Quantized M2M-100 | 32.65 | 8.74 |

(d) Russian

| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 22.46 | 2.03 |
| Pruned 30% M2M-100 | 30.17 | 13.81 |
| Pruned 45% M2M-100 | 48.59 | 4.61 |
| Quantized M2M-100 | 24.71 | 2.6 |

(e) Italian

| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 13.86 | 28.71 |
| Pruned 30% M2M-100 | 29.03 | 40.20 |
| Pruned 45% M2M-100 | 38.44 | 32.83 |
| Quantized M2M-100 | 15.43 | 25.86 |

(f) French

| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 5.77 | 15.72 |
| Pruned 30% M2M-100 | 4.89 | 14.62 |
| Pruned 45% M2M-100 | 22.53 | 34.01 |
| Quantized M2M-100 | 6.01 | 15.11 |

(g) Spanish

| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 6.48 | 16.93 |
| Pruned 30% M2M-100 | 13.16 | 26.83 |
| Pruned 45% M2M-100 | 22.14 | 18.12 |
| Quantized M2M-100 | 6.23 | 14.96 |

(h) German

| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 18.20 | 39.01 |
| Pruned 30% M2M-100 | 21.82 | 42.60 |
| Pruned 45% M2M-100 | 25.95 | 45.01 |
| Quantized M2M-100 | 18.24 | 38.42 |

(i) Polish

| Model | ψ (%) | ψ* (%) |
| --- | --- | --- |
| Original M2M-100 | 7.91 | 12.14 |
| Pruned 30% M2M-100 | 11.65 | 14.43 |
| Pruned 45% M2M-100 | 19.31 | 27.23 |
| Quantized M2M-100 | 9.78 | 13.26 |

(j) Czech

Table 12: MT-Gender Stanovsky et al. ([2019](https://arxiv.org/html/2205.10828#bib.bib43)); Kocmi et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib26)) results for M2M-100 12B Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)), and compressed models.

Appendix J Detailed DiBiMT Results
----------------------------------
| Model | SFII | SPDI | MFS | MFS+ | Avg |
| --- | --- | --- | --- | --- | --- |
| Original M2M-100 | 89.14 | 80.59 | 41.8 | 92.59 | 76.03 |
| Pruned 30% M2M-100 | 87.32 | 80.56 | 39.55 | 93.04 | 75.11 |
| Pruned 45% M2M-100 | 86.78 | 82.9 | 39.93 | 92.41 | 75.50 |
| Quantized M2M-100 | 88.86 | 81.26 | 43.32 | 92.51 | 76.48 |

(a) Chinese

| Model | SFII | SPDI | MFS | MFS+ | Avg |
| --- | --- | --- | --- | --- | --- |
| Original M2M-100 | 80 | 71.61 | 60.63 | 89.76 | 75.5 |
| Pruned 30% M2M-100 | 78.96 | 73.79 | 61.44 | 88.56 | 75.68 |
| Pruned 45% M2M-100 | 81.28 | 77.05 | 62.5 | 91.67 | 78.12 |
| Quantized M2M-100 | 82.32 | 74.42 | 61.07 | 91.22 | 77.25 |

(b) German

| Model | SFII | SPDI | MFS | MFS+ | Avg |
| --- | --- | --- | --- | --- | --- |
| Original M2M-100 | 75.99 | 70.53 | 61.23 | 88.41 | 74.04 |
| Pruned 30% M2M-100 | 75.91 | 71.86 | 60.92 | 87.74 | 74.10 |
| Pruned 45% M2M-100 | 83.38 | 75.08 | 62.22 | 86.67 | 76.83 |
| Quantized M2M-100 | 81.73 | 75.81 | 63.33 | 88.33 | 77.3 |

(c) Italian

| Model | SFII | SPDI | MFS | MFS+ | Avg |
| --- | --- | --- | --- | --- | --- |
| Original M2M-100 | 68.16 | 66.42 | 47.06 | 83.82 | 66.36 |
| Pruned 30% M2M-100 | 68.2 | 64.73 | 48.21 | 87.18 | 67.08 |
| Pruned 45% M2M-100 | 70.92 | 66.41 | 50 | 85.29 | 68.15 |
| Quantized M2M-100 | 68.16 | 69.03 | 44.19 | 86.51 | 66.97 |

(d) Russian

| Model | SFII | SPDI | MFS | MFS+ | Avg |
| --- | --- | --- | --- | --- | --- |
| Original M2M-100 | 75.08 | 68.92 | 53.44 | 83.61 | 70.26 |
| Pruned 30% M2M-100 | 71.58 | 70.26 | 54.58 | 82.71 | 69.78 |
| Pruned 45% M2M-100 | 78.39 | 72.46 | 52.33 | 83.15 | 71.58 |
| Quantized M2M-100 | 76.45 | 69.72 | 56.88 | 85.63 | 72.17 |

(e) Spanish

Table 13: DiBiMT Campolungo et al. ([2022](https://arxiv.org/html/2205.10828#bib.bib8)) evaluation for M2M-100 12B Fan et al. ([2020](https://arxiv.org/html/2205.10828#bib.bib14)), and compressed models.