| { |
| "title": "Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation", |
| "abstract": "Knowledge distillation (KD) is a promising technique for model compression in neural machine translation.\nHowever, where the knowledge hides in KD is still not clear, which may hinder the development of KD.\nIn this work, we first unravel this mystery from an empirical perspective and show that the knowledge comes from the top-1 predictions of teachers, which also helps us build a potential connection between word- and sequence-level KD.\nFurther, we point out two inherent issues in vanilla word-level KD based on this finding.\nFirstly, the current objective of KD spreads its focus over whole distributions to learn the knowledge, yet lacks special treatment of the most crucial top-1 information.\nSecondly, the knowledge is largely covered by the golden information due to the fact that most top-1 predictions of teachers overlap with ground-truth tokens, which further restricts the potential of KD.\nTo address these issues, we propose a novel method named Top-1 Information Enhanced Knowledge Distillation (TIE-KD).\nSpecifically, we design a hierarchical ranking loss to enforce the learning of the top-1 information from the teacher.\nAdditionally, we develop an iterative KD procedure to infuse more additional knowledge by distilling on data without ground-truth targets.\nExperiments on WMT\u201914 English-German, WMT\u201914 English-French and WMT\u201916 English-Romanian demonstrate that our method can boost Transformer-base students by +1.04, +0.60 and +1.11 BLEU scores, respectively, and significantly outperform the vanilla word-level KD baseline.\nBesides, our method shows higher generalizability across different teacher-student capacity gaps than existing KD techniques.", |
| "sections": [ |
| { |
| "section_id": "1", |
| "parent_section_id": null, |
| "section_name": "Introduction", |
| "text": "In recent years, neural machine translation (NMT) has made marvelous progress in generating high-quality translations Bahdanau et al. (2014); Gehring et al. (2017); Vaswani et al. (2017); Liang et al. (2021b, 2022), especially with some exquisite and deep model architectures Wei et al. (2020); Li et al. (2020); Liu et al. (2020); Wang et al. (2022).\nDespite their impressive performance on translation tasks, high computational and deployment costs still prevent these models from being applied in real life.\nTo address this problem, knowledge distillation (KD) Liang et al. (2008); Hinton et al. (2015); Kim and Rush (2016); Wu et al. (2020); Chen et al. (2020); Wang et al. (2021); Liang et al. (2021a) is regarded as a promising solution for model compression, which aims to transfer the knowledge from these strong teacher models into compact student models.\nGenerally, there are two categories of KD techniques, i.e., word-level KD Hinton et al. (2015); Kim and Rush (2016); Wang et al. (2021) and sequence-level KD Kim and Rush (2016).\n(1) Word-level KD is conducted on each target token, where it shrinks the Kullback-Leibler (KL) divergence Kullback and Leibler (1951) between the predicted distributions from the student and the soft targets from the teacher.\nIn these soft targets, the knowledge was previously deemed to come from the probability relationship between negative candidates (i.e., the correlation information) Hinton et al. (2015); Tang et al. (2020); Jafari et al. (2021).\n(2) Sequence-level KD instead requires no soft targets and directly encourages students to maximize the sequence probability of the final translation decoded by the teacher.\nAlthough the two techniques work quite differently, they achieve similarly superior effectiveness.\nTherefore, we raise two heuristic questions on KD in NMT:\nQ1: Where does the knowledge actually come from during KD in NMT?\nQ2: Is there any connection between the word- and the sequence-level KD techniques?\nTo answer these two questions, we conduct an empirical study that starts from word-level KD to find out where the knowledge hides in the teacher\u2019s soft targets and then explores whether the result can be extended to sequence-level KD.\nAs a result, we summarize several intriguing findings:\n(1) Compared to the correlation information, the information of the teacher\u2019s top-1 predictions (i.e., the top-1 information) actually determines the benefit of word-level KD (\u00a73.1).\n(2) The correlation information can be successfully learned by students during KD but fails to improve their final performance (\u00a73.2).\n(3) Extending the top-1 information to top-k information does not lead to further improvement (\u00a73.3).\n(4) The top-1 information is important even when the teacher is under-confident in its top-1 predictions (\u00a73.4).\n(5) A similar importance of the top-1 information can also be verified for sequence-level KD (\u00a73.5).\nThese findings sufficiently show that 1) the knowledge actually comes from the top-1 information of the teacher during KD in NMT, and 2) the two kinds of KD techniques can be connected from the perspective of the top-1 information.\nOn these grounds, we further point out two inherent issues in vanilla word-level KD.\nFirstly, as the source of the teacher\u2019s knowledge, the top-1 information receives no special treatment in the training objective of vanilla word-level KD, since the KL divergence directly optimizes the entire distribution.\nSecondly, since most top-1 predictions of strong teachers overlap with ground-truth tokens (see the first row of Tab.2), the additional knowledge from teachers beyond the golden information is limited and the potential of word-level KD is largely restricted (see the second row of Tab.2).\nTo address these issues, we propose a new KD method named Top-1 Information Enhanced Knowledge Distillation (TIE-KD) for NMT.\nSpecifically, we first design a hierarchical ranking loss that enforces the student model to learn the top-1 information by ranking the top-1 predictions of the teacher as its own top-1 predictions.\nMoreover, we develop an iterative KD procedure that exposes more input data without ground-truth targets for KD to exploit more knowledge from the teacher.\nWe evaluate our TIE-KD method on three WMT benchmarks, i.e., WMT\u201914 English-German (En-De), WMT\u201914 English-French (En-Fr) and WMT\u201916 English-Romanian (En-Ro).\nExperimental results show that our method can boost Transformer-base students by +1.04, +0.60 and +1.11 BLEU scores, respectively, and significantly outperforms the vanilla word-level KD approach.\nBesides, we test the performance of existing KD techniques in NMT and our TIE-KD under different teacher-student capacity gaps and show the stronger generalizability of our method across various gaps.\nOur contributions are summarized as follows (the code is publicly available at https://github.com/songmzhang/NMT-KD):\n(1) To the best of our knowledge, we are the first to explore where the knowledge hides in KD for NMT and unveil that it comes from the top-1 information of the teacher, which also helps us build a connection between word- and sequence-level KD.\n(2) We point out two issues in vanilla word-level KD and propose a novel KD method named Top-1 Information Enhanced Knowledge Distillation (TIE-KD) to address them. Experiments on three WMT benchmarks demonstrate its effectiveness and superiority.\n(3) We investigate the effects of current KD techniques in NMT under different teacher-student capacity gaps and show the stronger generalizability of our approach." |
| }, |
| { |
| "section_id": "2", |
| "parent_section_id": null, |
| "section_name": "Background", |
| "text": "" |
| }, |
| { |
| "section_id": "2.1", |
| "parent_section_id": "2", |
| "section_name": "Neural Machine Translation", |
| "text": "Given a source sentence x = (x_1, ..., x_n) with n tokens and the corresponding target sentence y = (y_1, ..., y_m) with m tokens, NMT models are trained to maximize the probability of each target token conditioned on the source sentence via the cross-entropy (CE) loss:\nL_CE = - sum_{t=1}^{m} log p(y_t | y_{<t}, x; theta),\nwhere y_t and y_{<t} denote the ground-truth target and the target-side previous context at time step t, respectively, and theta is the model parameter." |
| }, |
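The CE objective above can be illustrated in plain Python on a toy vocabulary (a minimal sketch, not the paper's implementation; real NMT systems compute this over log-softmax outputs inside a framework such as fairseq):

```python
import math

def nmt_ce_loss(step_dists, target_ids):
    """Token-level cross-entropy: -sum_t log p(y_t | y_<t, x).

    `step_dists` holds one probability distribution over a toy vocabulary
    per decoding step; `target_ids` gives the ground-truth token index y_t.
    """
    return -sum(math.log(p[y]) for p, y in zip(step_dists, target_ids))

# Toy 3-token vocabulary, 2 decoding steps.
step_dists = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
loss = nmt_ce_loss(step_dists, [0, 1])
```

In practice the distributions come from the decoder's softmax at each step under teacher forcing; the toy lists stand in for those outputs.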
| { |
| "section_id": "2.2", |
| "parent_section_id": "2", |
| "section_name": "Word-level Knowledge Distillation", |
| "text": "Word-level KD Kim and Rush (2016) aims to minimize the KL divergence between the output distributions of the teacher model and the student model on each target token. Formally, given the probability distribution q(y_t | y_{<t}, x; theta_T) from the teacher model, the KL divergence-based loss is formulated as follows:\nL_KL = sum_{t=1}^{m} KL( q(y_t | y_{<t}, x; theta_T) || p(y_t | y_{<t}, x; theta_S) ),\nwhere theta_T and theta_S denote the model parameters of the teacher and the student, respectively.\nThen, the overall loss function of word-level KD is the linear interpolation between the CE loss and the KL divergence loss:\nL_word-KD = (1 - alpha) * L_CE + alpha * L_KL." |
| }, |
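The interpolated word-level KD objective can be sketched as follows (function names are illustrative; alpha = 0.5 follows the setting the paper adopts from Kim and Rush (2016)):

```python
import math

def kl_div(teacher, student):
    """KL(teacher || student) for one target position; terms with zero
    teacher mass contribute nothing to the sum."""
    return sum(q * math.log(q / p) for q, p in zip(teacher, student) if q > 0)

def word_kd_loss(teacher_dists, student_dists, ce_loss, alpha=0.5):
    """Vanilla word-level KD: linear interpolation of the CE loss and the
    token-wise KL term, (1 - alpha) * L_CE + alpha * L_KL."""
    kd = sum(kl_div(q, p) for q, p in zip(teacher_dists, student_dists))
    return (1 - alpha) * ce_loss + alpha * kd
```

When the student matches the teacher exactly, the KL term vanishes and only the down-weighted CE loss remains.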
| { |
| "section_id": "2.3", |
| "parent_section_id": "2", |
| "section_name": "Sequence-level Knowledge Distillation", |
| "text": "Sequence-level KD Kim and Rush (2016) encourages the student model to imitate the sequence probabilities of the translations from the teacher model.\nTo this end, it optimizes the student model through the following approximation:\nL_seq-KD = - sum_{y in Y} q(y | x; theta_T) log p(y | x; theta_S) \u2248 - log p(y_hat | x; theta_S),\nwhere Y denotes the hypothesis space of the teacher and y_hat is the approximate result obtained through the teacher\u2019s beam search." |
| }, |
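Operationally, sequence-level KD reduces to a data recipe: decode each source with the teacher and train the student with plain CE on the pseudo-target. A minimal sketch (`teacher_translate` is a hypothetical stand-in for a real beam-search decode):

```python
def seq_kd_training_pairs(sources, teacher_translate):
    """Sequence-level KD as data construction: pair each source sentence
    with the teacher's beam-search output y_hat instead of the reference;
    the student is then trained on these pairs with ordinary CE."""
    return [(src, teacher_translate(src)) for src in sources]

# A dummy "teacher" so the sketch runs end to end.
pairs = seq_kd_training_pairs(["ein test"], lambda s: s.upper())
```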
| { |
| "section_id": "3", |
| "parent_section_id": null, |
| "section_name": "Probing the Knowledge of KD in NMT", |
| "text": "In this section, we start from word-level KD and offer exhaustive empirical analyses on 1) the determining information in word-level KD (\u00a73.1); 2) whether the correlation information has been learned (\u00a73.2); 3) whether there are more benefits when extending the top-1 to top-k information (\u00a73.3); and 4) the importance of the top-1 information on soft targets with different confidence levels (\u00a73.4).\nThen we expand the conclusion to sequence-level KD (\u00a73.5) and lastly revisit KD for NMT from a novel view (\u00a73.6)." |
| }, |
| { |
| "section_id": "3.1", |
| "parent_section_id": "3", |
| "section_name": "Which Information Determines the Performance of Word-level KD?", |
| "text": "In word-level KD, the relative probabilities between negative candidates in the soft targets from the teacher contain rich correlation information, which was previously deemed to carry the knowledge from the teacher Hinton et al. (2015); Tang et al. (2020); Jafari et al. (2021).\nHowever, in practice, strong teachers usually have high confidence in their top-1 predictions while retaining little probability mass for other candidates.\nHence, to study the mystery of KD, it is necessary to first investigate the real effects of the correlation information and the top-1 prediction information during KD and then figure out which one actually determines the performance of KD.\nTo this end, during word-level KD, we separately remove the top-1 information and the correlation information from the original soft targets of the teacher (as depicted in Fig.1) and then observe the corresponding performance.\nBesides the BLEU score, we also introduce a new metric, namely the Top-1 Agreement (TA) rate, which calculates the overlap rate of the top-1 predictions between the student and the teacher at each position under the teacher-forcing mode.\nAs shown in Tab.2, the performance slightly increases when we remove the probabilities of all candidates except for the top-1 ones in the soft targets (see Fig.1(b); considering the regularization effect, we do not add a uniform distribution to complement the removed probability, as explained in detail in Appendix B).\nHowever, when we only remove the top-1 information and keep the remaining correlation information (see Fig.1(c); note that we do not simply remove the probability of the top-1 prediction, but add this probability to the ground-truth token to maintain the correctness of the distribution, i.e., the soft target is unchanged if its top-1 prediction is correct), the performance of KD drops close to the baseline without any KD.\nMoreover, we observe that the TA rates are well correlated with the final BLEU scores among these students.\nTherefore, we conjecture that the top-1 information is the one that actually determines the performance of word-level KD (answer to Q1)." |
| }, |
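The two soft-target manipulations of Fig.1(b)/(c) and the TA metric can be sketched on toy distributions as follows (function names are illustrative, not from the paper's code):

```python
def keep_only_top1(dist):
    """Fig.1(b): zero out everything except the teacher's top-1 entry,
    without adding a uniform complement (per the paper's footnote)."""
    top1 = max(range(len(dist)), key=dist.__getitem__)
    return [p if i == top1 else 0.0 for i, p in enumerate(dist)]

def remove_top1(dist, gold):
    """Fig.1(c): move the top-1 probability mass onto the ground-truth
    token so the distribution stays valid; unchanged if top-1 is correct."""
    top1 = max(range(len(dist)), key=dist.__getitem__)
    out = list(dist)
    if top1 != gold:
        out[gold] += out[top1]
        out[top1] = 0.0
    return out

def top1_agreement(student_dists, teacher_dists):
    """TA rate: fraction of positions where student and teacher agree
    on the argmax token under teacher forcing."""
    agree = sum(
        max(range(len(s)), key=s.__getitem__)
        == max(range(len(t)), key=t.__getitem__)
        for s, t in zip(student_dists, teacher_dists))
    return agree / len(student_dists)
```

KD with `keep_only_top1` targets corresponds to the "top-1 only" variant, and KD with `remove_top1` targets to the "correlation only" variant studied above.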
| { |
| "section_id": "3.2", |
| "parent_section_id": "3", |
| "section_name": "Can Student Models Really Learn the Correlation Information?", |
| "text": "To further confirm the above conjecture, we examine whether the student models have successfully learned the correlation information of the teacher during KD.\nTo achieve this, we design two metrics to measure the similarity between the token rankings of the student and the teacher, named the top-k edit distance and the top-k ranking distance.\nGiven the top-k predictions of the teacher at time step t as T_t = (t_1, ..., t_k) and those of the student as S_t = (s_1, ..., s_k), the top-k edit distance can be expressed as:\nD_ED = (1 / m) * sum_{t=1}^{m} ED(T_t, S_t),\nwhere ED(., .) calculates the edit distance.\nFor each t_i in T_t, the top-k ranking distance measures the average distance between its original rank i from the teacher and the corresponding rank from the student, denoted as r(t_i):\nD_RD = (1 / (m * k)) * sum_{t=1}^{m} sum_{i=1}^{k} |i - r(t_i)|.\nWe compare the students above based on these two metrics and list the results in Tab.7.\nClearly, the students perform better on both metrics when the soft targets contain the correlation information ((a), (c) vs. (b), (d)), indicating that student models can successfully learn the correlation information from the teacher.\nHowever, this better ranking performance fails to bring better KD performance, as measured by BLEU scores.\nThus, these results negate the previous perception that the correlation information carries the knowledge during KD, which also supports our conjecture in Sec.3.1." |
| }, |
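The two ranking-similarity metrics can be sketched per position as follows (the penalty for a teacher token missing from the student's ranking is an assumption; the text does not specify that case):

```python
def edit_distance(a, b):
    """Standard Levenshtein distance between two token sequences."""
    m, n = len(a), len(b)
    # d[i][j] = distance between a[:i] and b[:j]; borders are i or j.
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)]
         for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[m][n]

def topk_ranking_distance(teacher_topk, student_ranking):
    """Average |i - r(t_i)| over the teacher's top-k tokens, where r(t_i)
    is the token's rank in the student's ranking (tokens absent from the
    student's ranking get a hypothetical penalty of its length)."""
    total = 0
    for i, tok in enumerate(teacher_topk):
        r = (student_ranking.index(tok) if tok in student_ranking
             else len(student_ranking))
        total += abs(i - r)
    return total / len(teacher_topk)
```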
| { |
| "section_id": "3.3", |
| "parent_section_id": "3", |
| "section_name": "Does Knowledge Increase with Top- Information?", |
| "text": "As the importance of the top-1 information for transferring knowledge in word-level KD has been validated, we further investigate whether more knowledge can be exploited by extending the top-1 information to top-k information (equivalent to vanilla word-level KD when k covers the whole vocabulary).\nSimilar to Fig.1(b), we keep the top-k probabilities in the original soft target and remove the others to extract its top-k information.\nHowever, the results in Tab.4 give a negative answer: more information does not bring significantly more knowledge.\nThus, we can conclude that almost all the knowledge of the teacher in word-level KD comes from the teacher\u2019s top-1 information, even though the whole distribution is distilled into the student." |
| }, |
| { |
| "section_id": "3.4", |
| "parent_section_id": "3", |
| "section_name": "Does Top-1 Information Work in All Soft Targets?", |
| "text": "Although the previous results have coarsely located the knowledge in word-level KD in the top-1 information of the teacher, it is still not clear whether this holds for all types of soft targets, especially when the teacher is under-confident in its top-1 predictions.\nTo this end, we divide the soft targets of the teacher into three confidence intervals based on their top-1 probabilities, following Wang et al. (2021).\nThen we separately conduct the same KD processes as described in Fig.1, using only the soft targets in one of these intervals each time.\nSurprisingly, the results in Fig.2 show that even when the teacher is not so confident in its top-1 predictions, using only the top-1 information (i.e., the blue bars) still achieves better performance than using the full information in the corresponding soft targets.\nHowever, in these cases, removing the top-1 information from the soft targets largely degrades the performance of the students.\nWe conjecture that these under-confident top-1 predictions of the teacher can serve as hints for students to learn the difficult ground-truth labels, while the correlation information in these cases carries more noise than real knowledge for students." |
| }, |
| { |
| "section_id": "3.5", |
| "parent_section_id": "3", |
| "section_name": "Expanding to Sequence-level KD", |
| "text": "Inspired by the analyses on word-level KD, we move on to sequence-level KD and decompose its loss function in Sec.2.3 into a word-level form:\nL_seq-KD = - sum_{t=1}^{|y_hat|} log p(y_hat_t | y_hat_{<t}, x; theta_S),\nwhere y_hat_t is the teacher-decoded target for students at time step t.\nConsidering the similar word-level form, it is intuitive to speculate that the top-1 information may also matter in sequence-level KD.\nTo verify this, we divide the targets into the top-1 and the non-top-1 predictions of the teacher (about 70% of the targets selected by the teacher\u2019s beam search during decoding are top-1 predictions and about 30% are non-top-1 predictions) and investigate the respective effects of these targets by separately using them during sequence-level KD.\nAs shown in Tab.5, there is only a negligible performance change when we only use the top-1 targets for KD (row 1 vs. row 2).\nHowever, if we only use the non-top-1 targets, the BLEU score drastically drops (row 1 vs. row 3).\nMoreover, considering the different proportions of the two kinds of targets in the teacher\u2019s translations (i.e., 70% vs. 30%), we also use a fixed part (the same amount as the non-top-1 targets) of the top-1 targets for a fair comparison, and the performance is still steady (row 2 vs. row 4) and much better than using only the non-top-1 targets (row 3 vs. row 4).\nInterestingly, by adding additional word-level top-1 information to the non-top-1 part, the performance of sequence-level KD further improves (row 1 vs. row 5).\nTherefore, we can also confirm the importance of the top-1 information in sequence-level KD." |
| }, |
| { |
| "section_id": "3.6", |
| "parent_section_id": "3", |
| "section_name": "Rethinking KD in NMT from the Perspective of the Top-1 Information", |
| "text": "Through the above analyses, we verify the importance of the teacher\u2019s top-1 information for both KD techniques, which actually reflects a potential connection between them.\nA brief theoretical analysis of this connection is provided in Appendix A.\nIn short, the two kinds of techniques share a unified objective that imparts the teacher\u2019s top-1 predictions to student models at each time step.\nThus, we believe that they are well connected through their similar working mechanisms (answer to Q2).\nFurther, we revisit word-level KD from this perspective and find two inherent issues.\nFirstly, the KL divergence-based objective in vanilla word-level KD directly optimizes the whole distributions of students, while lacking specialized learning of the most important top-1 information.\nSecondly, since the top-1 predictions of the teacher mostly overlap with the ground-truth targets, the knowledge from the teacher is largely covered by the ground-truth information, which limits the potential of word-level KD.\nTherefore, we claim that the performance of the current word-level KD approach is far from perfect and solutions to these problems are urgently needed." |
| }, |
| { |
| "section_id": "4", |
| "parent_section_id": null, |
| "section_name": "Top-1 Information Enhanced Knowledge Distillation for NMT", |
| "text": "To address the aforementioned issues in word-level KD, in this section we introduce our method named Top-1 Information Enhanced Knowledge Distillation (TIE-KD), which includes a hierarchical ranking loss to boost the learning of the top-1 information from the teacher (\u00a74.1) and an iterative knowledge distillation procedure to exploit more knowledge from the teacher (\u00a74.2)." |
| }, |
| { |
| "section_id": "4.1", |
| "parent_section_id": "4", |
| "section_name": "Hierarchical Ranking Loss", |
| "text": "To help student models better grasp the top-1 information during distillation, we design a new loss named the hierarchical ranking loss.\nTo gently achieve this goal, we first encourage the student to rank the teacher\u2019s top-k predictions as its own top-k predictions and then to rank the teacher\u2019s top-1 prediction over these top-k predictions.\nFormally, given the student\u2019s top-k predictions S_t and the teacher\u2019s top-k predictions T_t, the hierarchical ranking loss is built from ranking terms between the probabilities p from the student model and q from the teacher model over these predictions, gated by an indicator function that activates each ranking constraint.\nIn this way, the student model is enforced to rank the top-1 predictions of the teacher in its own top-1 places, which explicitly enhances the learning of the knowledge from the teacher.\nThen, we add this loss to the original KL divergence loss in Eq.(2) to form the new KD loss." |
| }, |
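One way to realize the described two-level ranking constraints is a pair of hinge penalties, sketched below; this is an illustrative guess at the loss family, not the paper's exact formulation:

```python
def hierarchical_ranking_loss(student_probs, teacher_topk, margin=0.0):
    """Hinge-style sketch of the hierarchical ranking idea (an assumption,
    not the paper's loss): (1) every teacher top-k token should outscore
    the student's best non-top-k candidate, and (2) the teacher's top-1
    token should outscore the other top-k tokens."""
    topk = set(teacher_topk)
    best_outside = max(
        (p for i, p in enumerate(student_probs) if i not in topk),
        default=0.0)
    # Level 1: pull all teacher top-k tokens above outside candidates.
    loss = sum(max(0.0, margin + best_outside - student_probs[i])
               for i in teacher_topk)
    # Level 2: pull the teacher's top-1 token above the rest of the top-k.
    top1 = teacher_topk[0]
    loss += sum(max(0.0, margin + student_probs[i] - student_probs[top1])
                for i in teacher_topk[1:])
    return loss
```

The loss vanishes exactly when the student already ranks the teacher's top-k, with the teacher's top-1 first, above everything else.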
| { |
| "section_id": "4.2", |
| "parent_section_id": "4", |
| "section_name": "Iterative Knowledge Distillation", |
| "text": "Given that the large overlap between the top-1 predictions and the ground-truth targets limits the amount of additional knowledge from the teacher during word-level KD, introducing data without ground-truth targets for KD could help mitigate this issue.\nInspired by previous studies on decoder-side data manipulation Zhang et al. (2019); Goodman et al. (2020); Liu et al. (2021a, b); Xie et al. (2021), we design an iterative knowledge distillation procedure to expose more target-side data for KD.\nSpecifically, as shown in Algorithm 1, at each training step we conduct KD for multiple iterations (line 3), using the predictions of the student in the current iteration as the decoder-side inputs for KD in the next iteration (line 8).\nGenerally, these predictions can be regarded as similar but new inputs compared to the original target inputs.\nMeanwhile, there is no ideal ground-truth target for these inputs since they are usually not well-formed sentences.\nThen, during each iteration, we collect the KD loss according to Eq.(7) (lines 4-7) and average it across all the iterations (line 10).\nSince all the supervision signals come from the teacher after the first iteration, the knowledge of the teacher model is further exploited during the following iterations and thus the potential of word-level KD is further released." |
| }, |
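The iterative procedure of Algorithm 1 can be sketched as a training-step loop (`student_step` and `kd_loss` are hypothetical stand-ins for a student forward pass plus argmax and for the per-iteration KD loss of Eq.(7)):

```python
def iterative_kd(src, gold_dec_in, student_step, kd_loss, n_iters=3):
    """Sketch of iterative KD: after the first pass on the gold decoder
    inputs, the student's own predictions become the decoder-side inputs
    for the next pass, so supervision after the first iteration comes
    purely from the teacher. Per-iteration KD losses are averaged."""
    losses, dec_in = [], gold_dec_in
    for _ in range(n_iters):
        preds, next_dec_in = student_step(src, dec_in)
        losses.append(kd_loss(preds))
        dec_in = next_dec_in  # feed student predictions back in
    return sum(losses) / n_iters
```

In a real system `student_step` would run the decoder under the current inputs and take the argmax over its output distribution; the sketch only fixes the control flow.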
| { |
| "section_id": "5", |
| "parent_section_id": null, |
| "section_name": "Experiments", |
| "text": "" |
| }, |
| { |
| "section_id": "5.1", |
| "parent_section_id": "5", |
| "section_name": "Datasets", |
| "text": "We conduct experiments on three commonly-used WMT tasks, i.e., WMT\u201914 English to German (En-De), WMT\u201914 English to French (En-Fr) and WMT\u201916 English to Romanian (En-Ro).\nFor all these tasks, we share the source and the target vocabulary and segment words into subwords using byte pair encoding (BPE) Sennrich et al. (2016) with 32k merge operations.\nMore statistics of the datasets can be found in Appendix C.1." |
| }, |
| { |
| "section_id": "5.2", |
| "parent_section_id": "5", |
| "section_name": "Implementation Details", |
| "text": "All our experiments are conducted with the open-source toolkit fairseq Ott et al. (2019) with FP16 training Ott et al. (2018).\nBy default, we follow the big/base setting of Vaswani et al. (2017) to implement the teacher/student models in our experiments.\nMore training and evaluation details can be found in Appendix C.2.\nFor word-level KD-based methods, we set the interpolation weight alpha in Eq.(3) to 0.5 following Kim and Rush (2016).\nFor our method, we set the top-k value in Sec.4.1 to 5 and the number of iterations in Sec.4.2 to 3 on all three tasks.\nThe selection of these two hyperparameters is shown in Appendix D." |
| }, |
| { |
| "section_id": "5.3", |
| "parent_section_id": "5", |
| "section_name": "Main Results", |
| "text": "We compare our proposed method with existing KD techniques in NMT (detailed descriptions of the compared techniques can be found in Appendix C.3) on three WMT tasks.\nTo make the results more convincing, we report both BLEU and COMET Rei et al. (2020) scores in Tab.6.\nUsing Transformer-big as the teacher, our method can boost the Transformer-base students by +1.04/+0.60/+1.11 BLEU scores and +4.52/+2.57/+4.80 COMET scores on the three tasks, respectively.\nCompared to the vanilla Word-KD baseline, our method outperforms it significantly on all translation tasks, which verifies the effectiveness of our proposed solutions.\nAdditionally, as a word-level KD method, our TIE-KD can outperform Seq-KD on all three tasks and even achieves fully competitive results with the teacher on En-Ro, which demonstrates that the potential of Word-KD can be largely released by our method." |
| }, |
| { |
| "section_id": "6", |
| "parent_section_id": null, |
| "section_name": "Analysis", |
| "text": "" |
| }, |
| { |
| "section_id": "6.1", |
| "parent_section_id": "6", |
| "section_name": "Ablation Study", |
| "text": "To separately verify the effectiveness of our solutions to the two issues in vanilla word-level KD, we conduct an ablation study on the WMT\u201914 En-De task and record the results in Tab.7.\nWhen only adding the hierarchical ranking loss to vanilla word-level KD, the BLEU scores and the TA rates improve by +0.30/+0.22 and +0.32/+0.47 on the validation/test set, respectively.\nThis reflects that the KL divergence only provides a loose constraint on the learning of the top-1 information from the teacher, while our hierarchical ranking loss helps to explicitly grasp this core information.\nWhen only using iterative KD, the student also improves by +0.36/+0.25 BLEU scores and +0.18/+0.28 TA rates.\nThis indicates that our iterative KD can effectively release the potential of word-level KD by introducing data without ground-truth targets.\nWhen combined, the two solutions compose our full TIE-KD and yield further improvement on both metrics.\nTherefore, the two issues in word-level KD are orthogonal and our proposed solutions are complementary to each other." |
| }, |
| { |
| "section_id": "6.2", |
| "parent_section_id": "6", |
| "section_name": "Combination With Sequence-Level KD", |
| "text": "According to Kim and Rush (2016), word-level KD can be well combined with sequence-level KD and yields better performance.\nAs a word-level KD approach, our TIE-KD can also in principle be combined with sequence-level KD.\nWe verify this on the WMT\u201914 En-De task and list the results in Tab.8.\nLike Word-KD, our TIE-KD also achieves better performance when combined with Seq-KD and is also better than \u201cWord-KD + Seq-KD\u201d, indicating the superiority of our method and its high compatibility with sequence-level KD." |
| }, |
| { |
| "section_id": "6.3", |
| "parent_section_id": "6", |
| "section_name": "Can a Stronger Teacher Teach a Better Student in NMT?", |
| "text": "Among the prior literature on KD Cho and Hariharan (2019); Jin et al. (2019); Mirzadeh et al. (2020); Guo et al. (2020); Jafari et al. (2021); Qiu et al. (2022), a general consensus is that a large teacher-student capacity gap may harm the quality of KD.\nWe also examine this problem in NMT by using teachers of three model sizes.\nBesides the default configuration (i.e., Transformer-big) in our experiments above, we add a Transformer-base setting as the weaker teacher and a Transformer setting with 18 encoder layers and 6 decoder layers as the stronger teacher (to stably train this deeper Transformer, we use Admin Liu et al. (2020) for layer normalization).\nWe compare our method with word- and sequence-level KD under these teachers in Fig.3 and draw several conclusions:\n(1) The stronger teacher brings improvement to sequence-level KD but fails to help word-level KD, where the reason may be the smaller amount of additional knowledge from the stronger teacher due to its higher top-1 accuracy (from 68% to 70%).\n(2) As a word-level KD method, our TIE-KD instead brings conspicuous improvement with the stronger teacher, indicating that our method can exploit more knowledge from the teacher.\n(3) Under the weaker teacher, the student from our method even significantly surpasses the teacher, while other methods are largely limited by the performance of the teacher, demonstrating the high generalizability of our TIE-KD to different teacher-student capacity gaps." |
| }, |
| { |
| "section_id": "6.4", |
| "parent_section_id": "6", |
| "section_name": "Why is the Top-1 Information Important in KD?", |
| "text": "The decoding process of language generation models can be regarded as a sequential decision-making process Yu et al. (2017); Arora et al. (2022).\nAs mentioned in Sec.3.5, during decoding, beam search tends to pick the top-1 predictions of the NMT model on each beam and finally selects the most probable beam.\nThus, the top-1 information (including both the top-1 word index and its corresponding probability) of the teacher model largely represents its decision at each decoding step, which is exactly what we expect the student model to learn from the teacher through KD in NMT.\nTherefore, the top-1 information can be seen as the embodiment of the knowledge of the teacher model in NMT tasks and should be emphatically learned by the student models." |
| }, |
| { |
| "section_id": "7", |
| "parent_section_id": null, |
| "section_name": "Related Work", |
| "text": "Kim and Rush (2016) first introduce word-level KD for NMT and further propose sequence-level KD for better performance.\nAfterward, Wang et al. (2021) investigate the effectiveness of different types of tokens in KD and propose selective KD strategies.\nMoreover, Wu et al. (2020) distill the internal hidden states of the teacher models into the students and also obtain promising results.\nIn the field of non-autoregressive machine translation (NAT), KD from autoregressive models has become a de facto standard to improve the performance of NAT models Gu et al. (2017); Zhou et al. (2019); Gu et al. (2019).\nAlso, KD has been used to enhance the performance of multilingual NMT Tan et al. (2019); Sun et al. (2020).\nBesides, similar ideas can be found when introducing external information to NMT models.\nFor example, Baziotis et al. (2020) use language models as teachers for low-resource NMT models.\nChen et al. (2020) distill the knowledge from fine-tuned BERT into NMT models.\nFeng et al. (2021) and Zhou et al. (2022) leverage KD to introduce future information into the teacher-forcing training of NMT models.\nDifferently, in this work, 1) we explore where the knowledge hides in KD, unveil that it comes from the top-1 information of the teacher, and further improve KD from this perspective; and 2) we build a connection between the two kinds of KD techniques in NMT and reveal their common essence, providing new directions for future work." |
| }, |
| { |
| "section_id": "8", |
| "parent_section_id": null, |
| "section_name": "Conclusion", |
| "text": "In this paper, we explore where the knowledge hides in KD for NMT and unveil that it comes from the top-1 information of the teacher.\nThis finding reflects the connection between word- and sequence-level KD and reveals the common essence of both KD techniques in NMT.\nFrom this perspective, we further propose a top-1 information enhanced knowledge distillation (TIE-KD) to address the two issues in vanilla word-level KD.\nExperiments on three WMT tasks prove the effectiveness of our method.\nBesides, we investigate the performance of the existing KD techniques in NMT and our method under different teacher-student capacity gaps and show the stronger generalizability of our method on various gaps." |
| } |
| ], |
| "appendix": [ |
| { |
| "section_id": "Appendix 1", |
| "parent_section_id": null, |
| "section_name": "Appendix A A Theoretical Analysis on the Connection Between Word- and Sequence-level KD", |
| "text": "We can directly consider the KL divergence loss of word-level KD in Eq.(2 ###reference_###) as its training objective and convert it into the equivalent form of the cross-entropy loss. For simplicity, we omit the in and in in following formulas:\nwhere denotes the whole target-side vocabulary.\nThen we can further separate the cross-entropy loss into the loss on the top-1 prediction and the losses on other candidates in the vocabulary:\nwhere represents the cross-entropy loss on the remaining candidates except for the top-1 prediction and can be regarded as a regularization term for the former one.\nAs empirically verified in Sec.3 ###reference_###, we can do the following approximation by omitting in Eq.(A ###reference_###):\nThus, we obtain the approximate form of the training objective of word-level KD.\nNow we consider the training objective of sequence-level KD in Eq.(3.5 ###reference_###). According to the results in Sec.3.5 ###reference_###, we can also assume that optimizing using all targets is approximately equal to optimizing using top-1 targets:\nwhere is an indicator function.\nLastly, if we replace the different weight functions before the function in Eq.(A ###reference_4###) and Eq.(A ###reference_6###) with one function :\nthen we can derive a unified form of the objective for these two kinds of KD techniques:\nwhere is the golden context in word-level KD and the model-generated context in sequence-level KD.\nIn this unified form, the only two variables are the weight function and the target-side previous context in the condition of the probability .\nFrom this expression, it is clear that student models are encouraged to learn the top-1 predictions of the teacher to obtain teachers\u2019 knowledge at each time step in both KD techniques.\nTherefore, we claim that the working mechanisms behind the two kinds of KD techniques are the same to some extent, although they look quite distinct on the surface.\nNotably, we also conjecture that the 
context difference may explain why sequence-level KD generally outperforms word-level KD.\nAutoregressive models trained with teacher-forcing suffer from exposure bias due to the gap between the golden context in training and the model-generated context in inference Bengio et al. (2015 ###reference_b4###); Zhang et al. (2019 ###reference_b46###).\nAccording to the above analysis, the same thing also happens in word-level KD.\nHowever, sequence-level KD circumvents this problem by conditioning on model-generated contexts during distillation, thus leaving no gap between training and inference.\nThis conjecture can also be verified by the performance of sequence-level KD on WMT\u201916 En-Ro, where the teacher\u2019s translations achieve considerably high similarities (BLEU score 62) with the original target sentences, and the improvement brought by sequence-level KD is much less than the one on other datasets since the model-generated context is too close to the golden context." |
| }, |
| { |
| "section_id": "Appendix 2", |
| "parent_section_id": null, |
| "section_name": "Appendix B Why Not Re-normalize the Soft Target in \u201cw/o correlation info\u201d?", |
| "text": "We would like to explain this from the perspective of the loss function. As we analyzed in Eq.(A ###reference_###), the loss of vanilla word-level KD is:\nBased on this, we remove all other probabilities in the soft target of the teacher except for the top-1 one to remove the \u201ccorrelation information\", i.e., the second term of the loss in Eq.(B ###reference_8###) is discarded:\nIn this objective, the effect of KD is fully dominated by the top-1 information of the teacher.\nIf we try to re-normalize the soft target with an additional uniform distribution, the result of KD will be affected by the regularization term of this uniform distribution:\nwhere .\nAnother way to re-normalize the distribution is to directly let as 1, but the original top-1 probability information from the teacher will be lost.\nTherefore, we keep the modified soft target in \u201cw/o correlation info\" unnormalized." |
| }, |
| { |
| "section_id": "Appendix 3", |
| "parent_section_id": null, |
| "section_name": "Appendix C Experimental Details", |
| "text": "For the En-De task, the training data contains nearly 4.5M sentence pairs.\nWe choose newstest2013 and newstest2014 as the validation set and the test set, respectively.\nFor the En-Fr task, there totally remains 35.8M sentence pairs after the cleaning procedure.\nThen we choose newstest2013 and newstest2014 as the validation set and the test set, respectively.\nFor the En-Ro task, we directly use the pre-processed data from Mehta et al. (2020 ###reference_b29###) and there are about 608K sentence pairs in the training data.\nThen newsdev2016 is selected as the validation set and newstest2016 is the test set.\nThe overall statistics of the datasets are listed in Table 9 ###reference_###." |
| }, |
| { |
| "section_id": "Appendix 4", |
| "parent_section_id": null, |
| "section_name": "Appendix D Hyperparameter Selection", |
| "text": "In this section, we investigate the effect of in hierarchical ranking loss on our method.\nWe search in [3, 5, 10, 20] and compare their performance on the validation set of the WMT\u201914 En-De task.\nAs shown in Fig.4 ###reference_###, our method performs best when is set to 5.\nThus, we keep to 5 for all three tasks in our experiments.\n###figure_7### Since our method includes several iterations of KD, we further investigate the effects of the iteration times on the performance of our method.\nIntuitively, with more iteration times, more knowledge will be exploited from the teacher, while the computational cost will also increase.\nTo check this, we try each iteration time in [1, 2, 3, 4] and record the corresponding performance and training time in Fig.5 ###reference_###.\nIt is obvious that the performance of our method gradually improves with increasing, while the training time per step also linearly increases.\nBalancing the cost and the performance, we choose 3 as the final iteration time.\n###figure_8###" |
| } |
| ], |
| "tables": { |
| "1": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S1.T1.1\" style=\"width:433.6pt;height:98.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(98.4pt,-22.4pt) scale(1.83111995113701,1.83111995113701) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.1.1\">\n<tr class=\"ltx_tr\" id=\"S1.T1.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S1.T1.1.1.2.1\">Datasets</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S1.T1.1.1.2.2\">En-De</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S1.T1.1.1.2.3\">En-Fr</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.1.1.2.4\">En-Ro</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.3.1\">Top-1 Overlap Rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.3.2\">68%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.3.3\">78%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.1.3.4\">94%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.1.1\">\n from Word-level KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.1.2\">+0.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.1.3\">+0.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S1.T1.1.1.1.4\">+0.18</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The overlap rates between the top-1 predictions of teachers and ground-truth tokens on WMT\u201914 English-German 
(En-De), WMT\u201914 English-French (En-Fr) and WMT\u201916 English-Romanian (En-Ro) and the corresponding improvement () of BLEU scores brought by word-level KD on the test set of these tasks<span class=\"ltx_note ltx_role_footnote\" id=\"footnote2\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_tag ltx_tag_note\">2</span>We random sample 3000 target sentences in the training set of each task to calculate the approximate overlap rates.</span></span></span>.</figcaption>\n</figure>", |
| "capture": "Table 1: The overlap rates between the top-1 predictions of teachers and ground-truth tokens on WMT\u201914 English-German (En-De), WMT\u201914 English-French (En-Fr) and WMT\u201916 English-Romanian (En-Ro) and the corresponding improvement () of BLEU scores brought by word-level KD on the test set of these tasks222We random sample 3000 target sentences in the training set of each task to calculate the approximate overlap rates.." |
| }, |
| "2": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.1\" style=\"width:433.6pt;height:415.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(94.7pt,-90.8pt) scale(1.77611726864498,1.77611726864498) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.1.1\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S3.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.1\">Task</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S3.T2.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.2.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T2.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.3.1\">TA</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.4.1\">BLEU</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.2.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S3.T2.1.1.2.1.1\">En-De</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.2.2\">(a) vanilla word-level KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.2.3\" style=\"background-color:#BFBFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.2.3.1\" style=\"background-color:#BFBFFF;\">88.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.2.4\" style=\"background-color:#BFFFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.2.4.1\" style=\"background-color:#BFFFFF;\">26.66</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.1.3.1\">(b) <span class=\"ltx_text 
ltx_font_italic\" id=\"S3.T2.1.1.3.1.1\">w/o</span> correlation info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.3.2\" style=\"background-color:#CCCCFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.3.2.1\" style=\"background-color:#CCCCFF;\">88.69</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.3.3\" style=\"background-color:#A6FFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.3.3.1\" style=\"background-color:#A6FFFF;\">26.76</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.1.4.1\">(c) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.1.1.4.1.1\">w/o</span> top-1 info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.4.2\" style=\"background-color:#E6E6FF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.4.2.1\" style=\"background-color:#E6E6FF;\">87.49</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.3\" style=\"background-color:#E6FFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.4.3.1\" style=\"background-color:#E6FFFF;\">26.43</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.1.5.1\">(d) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.1.1.5.1.1\">w/o</span> KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.5.2\" style=\"background-color:#F2F2FF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.5.2.1\" style=\"background-color:#F2F2FF;\">87.22</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3\" style=\"background-color:#F2FFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.5.3.1\" style=\"background-color:#F2FFFF;\">26.37</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.6.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S3.T2.1.1.6.1.1\">En-Fr</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" 
id=\"S3.T2.1.1.6.2\">(a) vanilla word-level KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.6.3\" style=\"background-color:#BFBFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.6.3.1\" style=\"background-color:#BFBFFF;\">89.31</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.6.4\" style=\"background-color:#BFFFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.6.4.1\" style=\"background-color:#BFFFFF;\">34.94</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.1.7.1\">(b) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.1.1.7.1.1\">w/o</span> correlation info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.7.2\" style=\"background-color:#CCCCFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.7.2.1\" style=\"background-color:#CCCCFF;\">89.19</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.3\" style=\"background-color:#A6FFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.7.3.1\" style=\"background-color:#A6FFFF;\">35.09</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.1.8.1\">(c) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.1.1.8.1.1\">w/o</span> top-1 info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.8.2\" style=\"background-color:#E6E6FF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.8.2.1\" style=\"background-color:#E6E6FF;\">88.34</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.8.3\" style=\"background-color:#E6FFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.8.3.1\" style=\"background-color:#E6FFFF;\">34.33</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.1.9.1\">(d) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.1.1.9.1.1\">w/o</span> KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S3.T2.1.1.9.2\" style=\"background-color:#F2F2FF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.9.2.1\" style=\"background-color:#F2F2FF;\">88.33</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.9.3\" style=\"background-color:#F2FFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.9.3.1\" style=\"background-color:#F2FFFF;\">34.69</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.10.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S3.T2.1.1.10.1.1\">En-Ro</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.10.2\">(a) vanilla word-level KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.10.3\" style=\"background-color:#CCCCFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.10.3.1\" style=\"background-color:#CCCCFF;\">83.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.10.4\" style=\"background-color:#BFFFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.10.4.1\" style=\"background-color:#BFFFFF;\">34.29</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.1.11.1\">(b) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.1.1.11.1.1\">w/o</span> correlation info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.11.2\" style=\"background-color:#BFBFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.11.2.1\" style=\"background-color:#BFBFFF;\">84.27</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.11.3\" style=\"background-color:#A6FFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.11.3.1\" style=\"background-color:#A6FFFF;\">34.30</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.1.12.1\">(c) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.1.1.12.1.1\">w/o</span> top-1 
info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.12.2\" style=\"background-color:#E6E6FF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.12.2.1\" style=\"background-color:#E6E6FF;\">83.73</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.12.3\" style=\"background-color:#E6FFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.12.3.1\" style=\"background-color:#E6FFFF;\">34.02</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S3.T2.1.1.13.1\">(d) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.1.1.13.1.1\">w/o</span> KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T2.1.1.13.2\" style=\"background-color:#F2F2FF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.13.2.1\" style=\"background-color:#F2F2FF;\">83.34</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.1.13.3\" style=\"background-color:#F2FFFF;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.13.3.1\" style=\"background-color:#F2FFFF;\">34.04</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Top-1 Agreement rates (%) and BLEU scores (%) of different soft targets during KD on the validation sets of the three tasks. Deeper colors represent better performance on the corresponding metrics.</figcaption>\n</figure>", |
| "capture": "Table 2: Top-1 Agreement rates (%) and BLEU scores (%) of different soft targets during KD on the validation sets of the three tasks. Deeper colors represent better performance on the corresponding metrics." |
| }, |
| "3": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T3.2\" style=\"width:433.6pt;height:370.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(80.0pt,-68.5pt) scale(1.58521607721034,1.58521607721034) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.2.2\">\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S3.T3.2.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.3.1\">Task</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S3.T3.2.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.4.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T3.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T3.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.5.1\">BLEU</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.3.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S3.T3.2.2.3.1.1\">En-De</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.3.2\">(a) vanilla Word-KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.3.3\" style=\"background-color:#FFD9D9;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.3.3.1\" style=\"background-color:#FFD9D9;\">2.506</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.3.4\" style=\"background-color:#FFD9D9;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.3.4.1\" style=\"background-color:#FFD9D9;\">1.571</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.2.3.5\" style=\"background-color:#CCFFFF;\"><span 
class=\"ltx_text\" id=\"S3.T3.2.2.3.5.1\" style=\"background-color:#CCFFFF;\">26.66</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T3.2.2.4.1\">(b) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.4.1.1\">w/o</span> correlation info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.4.2\" style=\"background-color:#FFF2F2;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.4.2.1\" style=\"background-color:#FFF2F2;\">2.697</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.4.3\" style=\"background-color:#FFF2F2;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.4.3.1\" style=\"background-color:#FFF2F2;\">1.791</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.4.4\" style=\"background-color:#99FFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.4.4.1\" style=\"background-color:#99FFFF;\">26.76</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T3.2.2.5.1\">(c) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.5.1.1\">w/o</span> top-1 info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.5.2\" style=\"background-color:#FFE6E6;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.5.2.1\" style=\"background-color:#FFE6E6;\">2.601</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.5.3\" style=\"background-color:#FFE6E6;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.5.3.1\" style=\"background-color:#FFE6E6;\">1.656</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.5.4\" style=\"background-color:#E6FFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.5.4.1\" style=\"background-color:#E6FFFF;\">26.43</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T3.2.2.6.1\">(d) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.6.1.1\">w/o</span> KD</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.6.2\" style=\"background-color:#FFFCFC;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.6.2.1\" style=\"background-color:#FFFCFC;\">2.739</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.6.3\" style=\"background-color:#FFFCFC;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.6.3.1\" style=\"background-color:#FFFCFC;\">1.820</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.6.4\" style=\"background-color:#F2FFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.6.4.1\" style=\"background-color:#F2FFFF;\">26.37</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.7.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S3.T3.2.2.7.1.1\">En-Fr</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.7.2\">(a) vanilla Word-KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.7.3\" style=\"background-color:#FFE6E6;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.7.3.1\" style=\"background-color:#FFE6E6;\">2.515</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.7.4\" style=\"background-color:#FFE6E6;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.7.4.1\" style=\"background-color:#FFE6E6;\">1.588</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.2.7.5\" style=\"background-color:#CCFFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.7.5.1\" style=\"background-color:#CCFFFF;\">34.94</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T3.2.2.8.1\">(b) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.8.1.1\">w/o</span> correlation info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.8.2\" style=\"background-color:#FFFCFC;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.8.2.1\" 
style=\"background-color:#FFFCFC;\">2.616</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.8.3\" style=\"background-color:#FFFCFC;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.8.3.1\" style=\"background-color:#FFFCFC;\">1.696</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.8.4\" style=\"background-color:#99FFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.8.4.1\" style=\"background-color:#99FFFF;\">35.09</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T3.2.2.9.1\">(c) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.9.1.1\">w/o</span> top-1 info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.9.2\" style=\"background-color:#FFD9D9;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.9.2.1\" style=\"background-color:#FFD9D9;\">2.495</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.9.3\" style=\"background-color:#FFD9D9;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.9.3.1\" style=\"background-color:#FFD9D9;\">1.563</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.9.4\" style=\"background-color:#F2FFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.9.4.1\" style=\"background-color:#F2FFFF;\">34.33</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T3.2.2.10.1\">(d) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.10.1.1\">w/o</span> KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.10.2\" style=\"background-color:#FFF2F2;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.10.2.1\" style=\"background-color:#FFF2F2;\">2.587</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.10.3\" style=\"background-color:#FFF2F2;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.10.3.1\" style=\"background-color:#FFF2F2;\">1.657</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.10.4\" 
style=\"background-color:#E6FFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.10.4.1\" style=\"background-color:#E6FFFF;\">34.69</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.11.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S3.T3.2.2.11.1.1\">En-Ro</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.11.2\">(a) vanilla Word-KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.11.3\" style=\"background-color:#FFE6E6;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.11.3.1\" style=\"background-color:#FFE6E6;\">2.915</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.11.4\" style=\"background-color:#FFE6E6;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.11.4.1\" style=\"background-color:#FFE6E6;\">2.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.2.11.5\" style=\"background-color:#CCFFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.11.5.1\" style=\"background-color:#CCFFFF;\">34.29</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T3.2.2.12.1\">(b) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.12.1.1\">w/o</span> correlation info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.12.2\" style=\"background-color:#FFFCFC;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.12.2.1\" style=\"background-color:#FFFCFC;\">3.025</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.12.3\" style=\"background-color:#FFFCFC;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.12.3.1\" style=\"background-color:#FFFCFC;\">2.138</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.12.4\" style=\"background-color:#99FFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.12.4.1\" 
style=\"background-color:#99FFFF;\">34.30</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T3.2.2.13.1\">(c) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.13.1.1\">w/o</span> top-1 info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.13.2\" style=\"background-color:#FFD9D9;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.13.2.1\" style=\"background-color:#FFD9D9;\">2.893</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.13.3\" style=\"background-color:#FFD9D9;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.13.3.1\" style=\"background-color:#FFD9D9;\">1.998</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.13.4\" style=\"background-color:#F2FFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.13.4.1\" style=\"background-color:#F2FFFF;\">34.02</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S3.T3.2.2.14.1\">(d) <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.14.1.1\">w/o</span> KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T3.2.2.14.2\" style=\"background-color:#FFF2F2;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.14.2.1\" style=\"background-color:#FFF2F2;\">2.967</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T3.2.2.14.3\" style=\"background-color:#FFF2F2;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.14.3.1\" style=\"background-color:#FFF2F2;\">2.083</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.2.2.14.4\" style=\"background-color:#E6FFFF;\"><span class=\"ltx_text\" id=\"S3.T3.2.2.14.4.1\" style=\"background-color:#E6FFFF;\">34.04</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Ranking similarities between the students and the teachers and the 
corresponding BLEU scores (%)<span class=\"ltx_note ltx_role_footnote\" id=\"footnote7\"><sup class=\"ltx_note_mark\">7</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">7</sup><span class=\"ltx_tag ltx_tag_note\">7</span>Here we set to 5 for both and since different does not change the conclusion in our experiments.</span></span></span>.</figcaption>\n</figure>", |
| "capture": "Table 3: Ranking similarities between the students and the teachers and the corresponding BLEU scores (%)777Here we set to 5 for both and since different does not change the conclusion in our experiments.." |
| }, |
| "4": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T4.2\" style=\"width:433.6pt;height:120pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(86.7pt,-24.0pt) scale(1.66616419753217,1.66616419753217) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.2.2\">\n<tr class=\"ltx_tr\" id=\"S3.T4.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T4.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.2.2.2.3\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.2.2.2.4\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.2.2.2.5\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.2.2.2.6\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.2.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.3.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S3.T4.2.2.3.1.1\">BLEU</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.3.2\">En-De</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.3.3\">26.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.3.4\">26.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.3.5\">26.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.3.6\">26.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.3.7\">26.66</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.2.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.4.1\">En-Fr</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.4.2\">35.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S3.T4.2.2.4.3\">34.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.4.4\">34.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.4.5\">34.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.4.6\">34.94</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.2.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.5.1\">En-Ro</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.2.2.5.2\">34.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.2.2.5.3\">34.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.2.2.5.4\">34.28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.2.2.5.5\">34.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T4.2.2.5.6\">34.29</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>BLEU scores (%) of word-level KD with top- information on the validation set of the three tasks. is the vocabulary size.</figcaption>\n</figure>", |
| "capture": "Table 4: BLEU scores (%) of word-level KD with top- information on the validation set of the three tasks. is the vocabulary size." |
| }, |
| "5": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S3.T5\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T5.2\" style=\"width:433.6pt;height:167.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(77.2pt,-29.9pt) scale(1.55310857870583,1.55310857870583) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T5.2.2\">\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T5.2.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.2.2.2.3.1\">ID</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T5.1.1.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.1.1.1.1.1\">top-1</span> (70%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T5.2.2.2.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.2.2.2.2.1\">non-top-1</span> (30%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T5.2.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.2.2.2.4.1\">BLEU</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T5.2.2.3.1\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T5.2.2.3.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T5.2.2.3.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T5.2.2.3.4\">26.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T5.2.2.4.1\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T5.2.2.4.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T5.2.2.4.3\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.2.2.4.4\">26.83</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.5\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_r\" id=\"S3.T5.2.2.5.1\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T5.2.2.5.2\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T5.2.2.5.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.2.2.5.4\">2.36</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T5.2.2.6.1\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T5.2.2.6.2\">\u2713(use fixed 30%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T5.2.2.6.3\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.2.2.6.4\">26.06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T5.2.2.7.1\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T5.2.2.7.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T5.2.2.7.3\">\u2713+ word-level top-1 info</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T5.2.2.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.2.2.7.4.1\">26.96</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>BLEU scores (%) of sequence-level KD on the validation set of the WMT\u201914 En-De task when we separately use the top-1 and the non-top-1 targets of the teacher in the teacher\u2019s translations during KD.</figcaption>\n</figure>", |
| "capture": "Table 5: BLEU scores (%) of sequence-level KD on the validation set of the WMT\u201914 En-De task when we separately use the top-1 and the non-top-1 targets of the teacher in the teacher\u2019s translations during KD." |
| }, |
| "6": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T6.46\" style=\"width:433.6pt;height:142.1pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-114.4pt,37.3pt) scale(0.654503110060465,0.654503110060465) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T6.46.46\">\n<tr class=\"ltx_tr\" id=\"S4.T6.46.46.47\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T6.46.46.47.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.46.46.47.1.1\">Methods</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T6.46.46.47.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.46.46.47.2.1\">WMT\u201914 En-De</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T6.46.46.47.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.46.46.47.3.1\">WMT\u201914 En-Fr</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T6.46.46.47.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.46.46.47.4.1\">WMT\u201916 En-Ro</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.46.46.48\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.48.1\">BLEU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.48.2\">COMET</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.48.3\">BLEU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.48.4\">COMET</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.48.5\">BLEU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.46.46.48.6\">COMET</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.7.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" 
id=\"S4.T6.1.1.1.1\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.1.1.1.1.2\">Student</span> (<span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.1.1.1.1.1\">Transformer<sub class=\"ltx_sub\" id=\"S4.T6.1.1.1.1.1.1\">base</sub></span>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T6.2.2.2.2\">27.42<sub class=\"ltx_sub\" id=\"S4.T6.2.2.2.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.2.2.2.2.1.1\">\u00b10.01</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T6.3.3.3.3\">48.11<sub class=\"ltx_sub\" id=\"S4.T6.3.3.3.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.3.3.3.3.1.1\">\u00b11.04</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T6.4.4.4.4\">40.97<sub class=\"ltx_sub\" id=\"S4.T6.4.4.4.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.4.4.4.4.1.1\">\u00b10.14</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T6.5.5.5.5\">62.19<sub class=\"ltx_sub\" id=\"S4.T6.5.5.5.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.5.5.5.5.1.1\">\u00b10.11</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T6.6.6.6.6\">33.59<sub class=\"ltx_sub\" id=\"S4.T6.6.6.6.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.6.6.6.6.1.1\">\u00b10.15</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.7.7.7.7\">50.96<sub class=\"ltx_sub\" id=\"S4.T6.7.7.7.7.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.7.7.7.7.1.1\">\u00b10.43</span></sub>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.13.13.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T6.13.13.13.7\">\u00a0\u00a0\u2003+ Word-KD <cite class=\"ltx_cite ltx_citemacro_cite\">Kim and Rush (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#bib.bib17\" title=\"\">2016</a>)</cite>\n</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S4.T6.8.8.8.1\">28.03<sub class=\"ltx_sub\" id=\"S4.T6.8.8.8.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.8.8.8.1.1.1\">\u00b10.10</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.9.9.9.2\">51.59<sub class=\"ltx_sub\" id=\"S4.T6.9.9.9.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.9.9.9.2.1.1\">\u00b10.23</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.10.10.10.3\">41.10<sub class=\"ltx_sub\" id=\"S4.T6.10.10.10.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.10.10.10.3.1.1\">\u00b10.11</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.11.11.11.4\">63.81<sub class=\"ltx_sub\" id=\"S4.T6.11.11.11.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.11.11.11.4.1.1\">\u00b10.14</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.12.12.12.5\">33.77<sub class=\"ltx_sub\" id=\"S4.T6.12.12.12.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.12.12.12.5.1.1\">\u00b10.01</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.13.13.13.6\">53.15<sub class=\"ltx_sub\" id=\"S4.T6.13.13.13.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.13.13.13.6.1.1\">\u00b10.26</span></sub>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.19.19.19\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T6.19.19.19.7\">\u00a0\u00a0\u2003+ Seq-KD <cite class=\"ltx_cite ltx_citemacro_cite\">Kim and Rush (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#bib.bib17\" title=\"\">2016</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.14.14.14.1\">28.22<sub class=\"ltx_sub\" id=\"S4.T6.14.14.14.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.14.14.14.1.1.1\">\u00b10.02</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.15.15.15.2\">51.23<sub class=\"ltx_sub\" 
id=\"S4.T6.15.15.15.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.15.15.15.2.1.1\">\u00b10.15</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.16.16.16.3\">41.44<sub class=\"ltx_sub\" id=\"S4.T6.16.16.16.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.16.16.16.3.1.1\">\u00b10.02</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.17.17.17.4\">63.12<sub class=\"ltx_sub\" id=\"S4.T6.17.17.17.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.17.17.17.4.1.1\">\u00b10.14</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.18.18.18.5\">33.69<sub class=\"ltx_sub\" id=\"S4.T6.18.18.18.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.18.18.18.5.1.1\">\u00b10.02</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.19.19.19.6\">50.63<sub class=\"ltx_sub\" id=\"S4.T6.19.19.19.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.19.19.19.6.1.1\">\u00b10.11</span></sub>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.20.20.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T6.20.20.20.1\">\u00a0\u00a0\u2003+ BERT-KD <cite class=\"ltx_cite ltx_citemacro_cite\">Chen et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#bib.bib5\" title=\"\">2020</a>)</cite><sup class=\"ltx_sup\" id=\"S4.T6.20.20.20.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.20.20.20.1.1.1\">\u2020</span></sup>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.20.20.20.2\">27.53</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.20.20.20.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.20.20.20.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.20.20.20.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.20.20.20.6\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.20.20.20.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.26.26.26\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T6.26.26.26.7\">\u00a0\u00a0\u2003+ Seer Forcing <cite class=\"ltx_cite ltx_citemacro_cite\">Feng et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#bib.bib8\" title=\"\">2021</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.21.21.21.1\">27.56<sub class=\"ltx_sub\" id=\"S4.T6.21.21.21.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.21.21.21.1.1.1\">\u00b10.10</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.22.22.22.2\">50.60<sub class=\"ltx_sub\" id=\"S4.T6.22.22.22.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.22.22.22.2.1.1\">\u00b10.12</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.23.23.23.3\">40.97<sub class=\"ltx_sub\" id=\"S4.T6.23.23.23.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.23.23.23.3.1.1\">\u00b10.01</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.24.24.24.4\">62.95<sub class=\"ltx_sub\" id=\"S4.T6.24.24.24.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.24.24.24.4.1.1\">\u00b10.39</span></sub>\n</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.25.25.25.5\">33.77<sub class=\"ltx_sub\" id=\"S4.T6.25.25.25.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.25.25.25.5.1.1\">\u00b10.09</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.26.26.26.6\">51.41<sub class=\"ltx_sub\" id=\"S4.T6.26.26.26.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.26.26.26.6.1.1\">\u00b10.60</span></sub>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.27.27.27\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T6.27.27.27.1\">\u00a0\u00a0\u2003+ CBBGCA <cite class=\"ltx_cite ltx_citemacro_cite\">Zhou et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#bib.bib47\" title=\"\">2022</a>)</cite><sup class=\"ltx_sup\" id=\"S4.T6.27.27.27.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.27.27.27.1.1.1\">\u2020</span></sup>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.27.27.27.2\">28.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.27.27.27.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.27.27.27.4\">41.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.27.27.27.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.27.27.27.6\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.27.27.27.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.33.33.33\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T6.33.33.33.7\">\u00a0\u00a0\u2003+ Annealing KD <cite class=\"ltx_cite ltx_citemacro_cite\">Jafari et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#bib.bib15\" title=\"\">2021</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.28.28.28.1\">27.91<sub class=\"ltx_sub\" id=\"S4.T6.28.28.28.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.28.28.28.1.1.1\">\u00b10.10</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.29.29.29.2\">51.58<sub class=\"ltx_sub\" id=\"S4.T6.29.29.29.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.29.29.29.2.1.1\">\u00b10.03</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.30.30.30.3\">41.20<sub class=\"ltx_sub\" id=\"S4.T6.30.30.30.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.30.30.30.3.1.1\">\u00b10.13</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.31.31.31.4\">63.59<sub class=\"ltx_sub\" id=\"S4.T6.31.31.31.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.31.31.31.4.1.1\">\u00b10.09</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.32.32.32.5\">33.67<sub class=\"ltx_sub\" id=\"S4.T6.32.32.32.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.32.32.32.5.1.1\">\u00b10.09</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.33.33.33.6\">52.22<sub class=\"ltx_sub\" id=\"S4.T6.33.33.33.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.33.33.33.6.1.1\">\u00b11.02</span></sub>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.39.39.39\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T6.39.39.39.7\">\u00a0\u00a0\u2003+ Selective-KD <cite class=\"ltx_cite ltx_citemacro_cite\">Wang et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#bib.bib40\" title=\"\">2021</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.34.34.34.1\">28.24<sub class=\"ltx_sub\" id=\"S4.T6.34.34.34.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.34.34.34.1.1.1\">\u00b10.21</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.35.35.35.2\">52.15<sub class=\"ltx_sub\" id=\"S4.T6.35.35.35.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.35.35.35.2.1.1\">\u00b10.42</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.36.36.36.3\">41.25<sub class=\"ltx_sub\" id=\"S4.T6.36.36.36.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.36.36.36.3.1.1\">\u00b10.04</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.37.37.37.4\">64.24<sub class=\"ltx_sub\" id=\"S4.T6.37.37.37.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.37.37.37.4.1.1\">\u00b10.01</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.38.38.38.5\">33.74<sub class=\"ltx_sub\" id=\"S4.T6.38.38.38.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.38.38.38.5.1.1\">\u00b10.02</span></sub>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.39.39.39.6\">53.05<sub class=\"ltx_sub\" id=\"S4.T6.39.39.39.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.39.39.39.6.1.1\">\u00b10.28</span></sub>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.45.45.45\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T6.45.45.45.7\">\u00a0\u00a0\u2003+ TIE-KD (ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.40.40.40.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.40.40.40.1.1\">28.46</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.41.41.41.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.41.41.41.2.1\">52.63</span></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r\" id=\"S4.T6.42.42.42.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.42.42.42.3.1\">41.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.43.43.43.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.43.43.43.4.1\">65.06</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T6.44.44.44.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.44.44.44.5.1\">34.70</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.45.45.45.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.45.45.45.6.1\">55.76</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.46.46.46\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.46.1\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.46.46.46.1.2\">Teacher</span> (<span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.46.46.46.1.1\">Transformer<sub class=\"ltx_sub\" id=\"S4.T6.46.46.46.1.1.1\">big</sub></span>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.46.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.46.46.46.2.1\">28.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.46.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.46.46.46.3.1\">53.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.46.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.46.46.46.4.1\">42.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.46.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.46.46.46.5.1\">69.58</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T6.46.46.46.6\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.46.46.46.6.1\">34.70</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" 
id=\"S4.T6.46.46.46.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.46.46.46.7.1\">57.04</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>BLEU scores (%) and COMET <cite class=\"ltx_cite ltx_citemacro_cite\">Rei et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#bib.bib34\" title=\"\">2020</a>)</cite> scores (%) on three translation tasks. Results with <sup class=\"ltx_sup\" id=\"S4.T6.54.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T6.54.1.1\">\u2020</span></sup> are taken from the original papers. Others are our re-implementation results using the released code with the same setting in Sec.<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#S5.SS2\" title=\"5.2 Implementation Details \u2023 5 Experiments \u2023 Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation\"><span class=\"ltx_text ltx_ref_tag\">5.2</span></a> for a fair comparison. We report average results over 3 runs with random initialization. Results with are statistically <cite class=\"ltx_cite ltx_citemacro_cite\">Koehn (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.08096v2#bib.bib19\" title=\"\">2004</a>)</cite> better than the vanilla Word-KD with .</figcaption>\n</figure>", |
| "capture": "Table 6: BLEU scores (%) and COMET Rei et\u00a0al. (2020) scores (%) on three translation tasks. Results with \u2020 are taken from the original papers. Others are our re-implementation results using the released code with the same setting in Sec.5.2 for a fair comparison. We report average results over 3 runs with random initialization. Results with are statistically Koehn (2004) better than the vanilla Word-KD with ." |
| }, |
| "7": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S6.T7\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T7.1\" style=\"width:433.6pt;height:182.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(88.6pt,-37.3pt) scale(1.69125592195364,1.69125592195364) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T7.1.1\">\n<tr class=\"ltx_tr\" id=\"S6.T7.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S6.T7.1.1.2.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T7.1.1.2.1.1\">Methods</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S6.T7.1.1.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T7.1.1.2.2.1\">Validation Set</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S6.T7.1.1.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T7.1.1.2.3.1\">Test Set</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T7.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T7.1.1.3.1\">BLEU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T7.1.1.3.2\">TA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T7.1.1.3.3\">BLEU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T7.1.1.3.4\">TA</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T7.1.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S6.T7.1.1.4.1\">vanilla Word-KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T7.1.1.4.2\">26.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T7.1.1.4.3\">88.98</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T7.1.1.4.4\">28.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T7.1.1.4.5\">88.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T7.1.1.1\">\n<td class=\"ltx_td 
ltx_align_left ltx_border_r\" id=\"S6.T7.1.1.1.1\">\u00a0\u00a0\u2003+ \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T7.1.1.1.2\">26.96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T7.1.1.1.3\">89.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T7.1.1.1.4\">28.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T7.1.1.1.5\">88.93</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T7.1.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S6.T7.1.1.5.1\">\u00a0\u00a0\u2003+ iterative KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T7.1.1.5.2\">27.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T7.1.1.5.3\">89.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T7.1.1.5.4\">28.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T7.1.1.5.5\">88.74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T7.1.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S6.T7.1.1.6.1\">\u00a0\u00a0\u2003+ both (TIE-KD)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S6.T7.1.1.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T7.1.1.6.2.1\">27.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S6.T7.1.1.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T7.1.1.6.3.1\">89.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S6.T7.1.1.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T7.1.1.6.4.1\">28.46</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T7.1.1.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T7.1.1.6.5.1\">89.11</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>Ablation study on the WMT\u201914 En-De task.</figcaption>\n</figure>", |
| "capture": "Table 7: Ablation study on the WMT\u201914 En-De task." |
| }, |
| "8": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S6.T8\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T8.3\">\n<tr class=\"ltx_tr\" id=\"S6.T8.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S6.T8.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T8.1.1.2.1\">Methods</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T8.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T8.1.1.3.1\">BLEU</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T8.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T8.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S6.T8.2.2.1\">Student (Transformer<sub class=\"ltx_sub\" id=\"S6.T8.2.2.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S6.T8.2.2.1.1.1\">base</span></sub>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T8.2.2.2\">27.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T8.2.2.3\">ref.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T8.3.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S6.T8.3.4.1\">Word-KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T8.3.4.2\">28.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T8.3.4.3\">+0.61</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T8.3.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S6.T8.3.5.1\">Seq-KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T8.3.5.2\">28.22</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T8.3.5.3\">+0.80</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T8.3.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S6.T8.3.6.1\">TIE-KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T8.3.6.2\">28.46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T8.3.6.3\">+1.04</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T8.3.7\">\n<td class=\"ltx_td ltx_align_left 
ltx_border_r ltx_border_t\" id=\"S6.T8.3.7.1\">Word-KD + Seq-KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T8.3.7.2\">28.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T8.3.7.3\">+1.06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T8.3.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S6.T8.3.8.1\">TIE-KD + Seq-KD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T8.3.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T8.3.8.2.1\">28.66</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T8.3.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T8.3.8.3.1\">+1.24</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T8.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"S6.T8.3.3.1\">Teacher (Transformer<sub class=\"ltx_sub\" id=\"S6.T8.3.3.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S6.T8.3.3.1.1.1\">big</span></sub>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S6.T8.3.3.2\">28.81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T8.3.3.3\">+1.39</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 8: </span>Combination with sequence-level KD and word-level KD methods on the WMT\u201914 En-DE task.</figcaption>\n</figure>", |
| "capture": "Table 8: Combination with sequence-level KD and word-level KD methods on the WMT\u201914 En-DE task." |
| }, |
| "9": { |
| "table_html": "<figure class=\"ltx_table\" id=\"A3.T9\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A3.T9.1\" style=\"width:433.6pt;height:122.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(89.5pt,-25.3pt) scale(1.70362302523938,1.70362302523938) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A3.T9.1.1\">\n<tr class=\"ltx_tr\" id=\"A3.T9.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"A3.T9.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T9.1.1.1.1.1\">Dataset</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A3.T9.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T9.1.1.1.2.1\">#Train</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A3.T9.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T9.1.1.1.3.1\">#Valid</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A3.T9.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T9.1.1.1.4.1\">#Test</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A3.T9.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T9.1.1.1.5.1\">Vocab</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T9.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"A3.T9.1.1.2.1\">WMT\u201914 En-De</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T9.1.1.2.2\">4.5M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T9.1.1.2.3\">3000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T9.1.1.2.4\">3003</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T9.1.1.2.5\">37184</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T9.1.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T9.1.1.3.1\">WMT\u201914 En-Fr</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"A3.T9.1.1.3.2\">35.8M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T9.1.1.3.3\">3000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T9.1.1.3.4\">3003</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T9.1.1.3.5\">36528</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T9.1.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"A3.T9.1.1.4.1\">WMT\u201916 En-Ro</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T9.1.1.4.2\">608K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T9.1.1.4.3\">1999</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T9.1.1.4.4\">1999</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T9.1.1.4.5\">34976</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 9: </span>Statistics of the datasets for three WMT tasks.</figcaption>\n</figure>", |
| "capture": "Table 9: Statistics of the datasets for three WMT tasks." |
| }, |
| "10": { |
| "table_html": "<figure class=\"ltx_table\" id=\"A3.T10\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A3.T10.1\" style=\"width:433.6pt;height:276.9pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-23.6pt,15.0pt) scale(0.902016944106743,0.902016944106743) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A3.T10.1.1\">\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"A3.T10.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T10.1.1.1.1.1\">Hyperparameters</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"A3.T10.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T10.1.1.1.2.1\">WMT\u201914 En-De</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"A3.T10.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T10.1.1.1.3.1\">WMT\u201914 En-Fr</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"A3.T10.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T10.1.1.1.4.1\">WMT\u201916 En-Ro</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.2.1\">Student</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.2.2\">Teacher</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.2.3\">Student</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.2.4\">Teacher</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.2.5\">Student</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T10.1.1.2.6\">Teacher</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.3.1\">Embedding 
Dim</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.3.2\">512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.3.3\">1024</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.3.4\">512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.3.5\">1024</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T10.1.1.3.6\">512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T10.1.1.3.7\">1024</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.4.1\">FFN Dim</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.4.2\">2048</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.4.3\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.4.4\">2048</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.4.5\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.4.6\">2048</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.4.7\">4096</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.5.1\">Encoder Layers</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.5.2\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.5.3\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.5.4\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.5.5\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.5.6\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.5.7\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.6.1\">Decoder Layers</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r\" id=\"A3.T10.1.1.6.2\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.6.3\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.6.4\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.6.5\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.6.6\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.6.7\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.7.1\">Attention Heads</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.7.2\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.7.3\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.7.4\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.7.5\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.7.6\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.7.7\">16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.8.1\">Residual Dropout</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.8.2\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.8.3\">0.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.8.4\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.8.5\">0.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.8.6\">0.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.8.7\">0.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.9.1\">Attention Dropout</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.9.2\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.9.3\">0.1</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.9.4\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.9.5\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.9.6\">0.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.9.7\">0.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.10.1\">Activation Dropout</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.10.2\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.10.3\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.10.4\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.10.5\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.10.6\">0.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.10.7\">0.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.11.1\">Label Smoothing</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.11.2\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.11.3\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.11.4\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.11.5\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.11.6\">0.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.11.7\">0.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.12.1\">Learning Rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.12.2\">7e-4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.12.3\">5e-4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.12.4\">7e-4</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.12.5\">5e-4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.12.6\">7e-4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.12.7\">5e-4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.13.1\">Learning Rate Decay</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.13.2\">inverse sqrt</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.13.3\">inverse sqrt</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.13.4\">inverse sqrt</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.13.5\">inverse sqrt</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.13.6\">inverse sqrt</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.13.7\">inverse sqrt</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.14.1\">Warmup Steps</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.14.2\">4000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.14.3\">4000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.14.4\">4000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.14.5\">4000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.14.6\">4000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.14.7\">4000</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.15\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.15.1\">Layer Normalization</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.15.2\">PostNorm</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.15.3\">PostNorm</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.15.4\">PostNorm</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.15.5\">PostNorm</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.15.6\">PostNorm</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.15.7\">PostNorm</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"A3.T10.1.1.16.1\">Model Parameters</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.16.2\">63.2M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.16.3\">214.4M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.16.4\">62.8M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.16.5\">213.8M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T10.1.1.16.6\">62.0M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T10.1.1.16.7\">212.2M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T10.1.1.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"A3.T10.1.1.17.1\">Training Steps</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T10.1.1.17.2\">200K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T10.1.1.17.3\">300K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T10.1.1.17.4\">200K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T10.1.1.17.5\">300K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T10.1.1.17.6\">20 epochs</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T10.1.1.17.7\">30 epochs</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 10: </span>Training hyperparameters and model configurations of our experiments.</figcaption>\n</figure>", |
| "capture": "Table 10: Training hyperparameters and model configurations of our experiments." |
| } |
| }, |
| "image_paths": { |
| "1(a)": { |
| "figure_path": "2305.08096v2_figure_1(a).png", |
| "caption": "(a) vanilla word-level KD\nFigure 1: Removing different information from the original soft targets provided by the teacher during word-level KD. Note that the soft target in \u201cw/o KD\u201d is equivalent to the soft target of label smoothing.", |
| "url": "http://arxiv.org/html/2305.08096v2/extracted/5737145/probe_analysis_v3_a.png" |
| }, |
| "1(b)": { |
| "figure_path": "2305.08096v2_figure_1(b).png", |
| "caption": "(b) w/o correlation info\nFigure 1: Removing different information from the original soft targets provided by the teacher during word-level KD. Note that the soft target in \u201cw/o KD\u201d is equivalent to the soft target of label smoothing.", |
| "url": "http://arxiv.org/html/2305.08096v2/extracted/5737145/probe_analysis_v3_c.png" |
| }, |
| "1(c)": { |
| "figure_path": "2305.08096v2_figure_1(c).png", |
| "caption": "(c) w/o top-1 info\nFigure 1: Removing different information from the original soft targets provided by the teacher during word-level KD. Note that the soft target in \u201cw/o KD\u201d is equivalent to the soft target of label smoothing.", |
| "url": "http://arxiv.org/html/2305.08096v2/extracted/5737145/probe_analysis_v3_b.png" |
| }, |
| "1(d)": { |
| "figure_path": "2305.08096v2_figure_1(d).png", |
| "caption": "(d) w/o KD\nFigure 1: Removing different information from the original soft targets provided by the teacher during word-level KD. Note that the soft target in \u201cw/o KD\u201d is equivalent to the soft target of label smoothing.", |
| "url": "http://arxiv.org/html/2305.08096v2/extracted/5737145/probe_analysis_v3_d.png" |
| }, |
| "2": { |
| "figure_path": "2305.08096v2_figure_2.png", |
| "caption": "Figure 2: BLEU scores (%) of KD with different information in three intervals of soft targets on the validation set of the WMT\u201914 En-De task.", |
| "url": "http://arxiv.org/html/2305.08096v2/extracted/5737145/interval_probe_v5.png" |
| }, |
| "3": { |
| "figure_path": "2305.08096v2_figure_3.png", |
| "caption": "Figure 3: Performance of KD techniques with different teacher models on the test set of the WMT\u201914 En-De task.", |
| "url": "http://arxiv.org/html/2305.08096v2/extracted/5737145/diff_teacher.png" |
| }, |
| "4": { |
| "figure_path": "2305.08096v2_figure_4.png", |
| "caption": "Figure 4: BLEU scores of our method with different k on the validation set of the WMT\u201914 En-De task.", |
| "url": "http://arxiv.org/html/2305.08096v2/extracted/5737145/hyper_k.png" |
| }, |
| "5": { |
| "figure_path": "2305.08096v2_figure_5.png", |
| "caption": "Figure 5: BLEU scores of our method with different iteration times N on the validation set of the WMT\u201914 En-De task and the corresponding training costs.", |
| "url": "http://arxiv.org/html/2305.08096v2/extracted/5737145/hyper_N.png" |
| } |
| }, |
| "validation": true, |
| "references": [ |
| { |
| "1": { |
| "title": "Why\nexposure bias matters: An imitation learning perspective of error\naccumulation in language generation.", |
| "author": "Kushal Arora, Layla El Asri, Hareesh Bahuleyan, and Jackie Cheung. 2022.", |
| "venue": "In Findings of the Association for Computational Linguistics:\nACL 2022, pages 700\u2013710, Dublin, Ireland. Association for Computational\nLinguistics.", |
| "url": "https://doi.org/10.18653/v1/2022.findings-acl.58" |
| } |
| }, |
| { |
| "2": { |
| "title": "Neural machine translation by jointly learning to align and\ntranslate.", |
| "author": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014.", |
| "venue": "arXiv preprint arXiv:1409.0473.", |
| "url": null |
| } |
| }, |
| { |
| "3": { |
| "title": "Language\nmodel prior for low-resource neural machine translation.", |
| "author": "Christos Baziotis, Barry Haddow, and Alexandra Birch. 2020.", |
| "venue": "In Proceedings of the 2020 Conference on Empirical Methods in\nNatural Language Processing (EMNLP), pages 7622\u20137634, Online. Association\nfor Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2020.emnlp-main.615" |
| } |
| }, |
| { |
| "4": { |
| "title": "Scheduled sampling for sequence prediction with recurrent neural\nnetworks.", |
| "author": "Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015.", |
| "venue": "Advances in neural information processing systems, 28.", |
| "url": null |
| } |
| }, |
| { |
| "5": { |
| "title": "Distilling\nknowledge learned in BERT for text generation.", |
| "author": "Yen-Chun Chen, Zhe Gan, Yu Cheng, Jingzhou Liu, and Jingjing Liu. 2020.", |
| "venue": "In Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics, pages 7893\u20137905, Online. Association for\nComputational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2020.acl-main.705" |
| } |
| }, |
| { |
| "6": { |
| "title": "On the efficacy of knowledge distillation.", |
| "author": "Jang Hyun Cho and Bharath Hariharan. 2019.", |
| "venue": "In Proceedings of the IEEE/CVF international conference on\ncomputer vision, pages 4794\u20134802.", |
| "url": null |
| } |
| }, |
| { |
| "7": { |
| "title": "Bert: Pre-training of deep bidirectional transformers for language\nunderstanding.", |
| "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018.", |
| "venue": "arXiv preprint arXiv:1810.04805.", |
| "url": null |
| } |
| }, |
| { |
| "8": { |
| "title": "Guiding\nteacher forcing with seer forcing for neural machine translation.", |
| "author": "Yang Feng, Shuhao Gu, Dengji Guo, Zhengxin Yang, and Chenze Shao. 2021.", |
| "venue": "In Proceedings of the 59th Annual Meeting of the Association\nfor Computational Linguistics and the 11th International Joint Conference on\nNatural Language Processing (Volume 1: Long Papers), pages 2862\u20132872,\nOnline. Association for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2021.acl-long.223" |
| } |
| }, |
| { |
| "9": { |
| "title": "Convolutional sequence to sequence learning.", |
| "author": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin.\n2017.", |
| "venue": "In Proceedings of the 34th International Conference on Machine\nLearning - Volume 70, ICML\u201917, page 1243\u20131252. JMLR.org.", |
| "url": null |
| } |
| }, |
| { |
| "10": { |
| "title": "Teaforn: Teacher-forcing with n-grams.", |
| "author": "Sebastian Goodman, Nan Ding, and Radu Soricut. 2020.", |
| "venue": "arXiv preprint arXiv:2010.03494.", |
| "url": null |
| } |
| }, |
| { |
| "11": { |
| "title": "Non-autoregressive neural machine translation.", |
| "author": "Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher.\n2017.", |
| "venue": "arXiv preprint arXiv:1711.02281.", |
| "url": null |
| } |
| }, |
| { |
| "12": { |
| "title": "Levenshtein transformer.", |
| "author": "Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019.", |
| "venue": "Advances in Neural Information Processing Systems, 32.", |
| "url": null |
| } |
| }, |
| { |
| "13": { |
| "title": "Reducing the teacher-student gap via spherical knowledge\ndisitllation.", |
| "author": "Jia Guo, Minghao Chen, Yao Hu, Chen Zhu, Xiaofei He, and Deng Cai. 2020.", |
| "venue": "arXiv preprint arXiv:2010.07485.", |
| "url": null |
| } |
| }, |
| { |
| "14": { |
| "title": "Distilling the knowledge in a neural network.", |
| "author": "Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. 2015.", |
| "venue": "arXiv preprint arXiv:1503.02531, 2(7).", |
| "url": null |
| } |
| }, |
| { |
| "15": { |
| "title": "Annealing\nknowledge distillation.", |
| "author": "Aref Jafari, Mehdi Rezagholizadeh, Pranav Sharma, and Ali Ghodsi. 2021.", |
| "venue": "In Proceedings of the 16th Conference of the European Chapter\nof the Association for Computational Linguistics: Main Volume, pages\n2493\u20132504, Online. Association for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2021.eacl-main.212" |
| } |
| }, |
| { |
| "16": { |
| "title": "Knowledge distillation via route constrained optimization.", |
| "author": "Xiao Jin, Baoyun Peng, Yichao Wu, Yu Liu, Jiaheng Liu, Ding Liang, Junjie Yan,\nand Xiaolin Hu. 2019.", |
| "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision, pages 1345\u20131354.", |
| "url": null |
| } |
| }, |
| { |
| "17": { |
| "title": "Sequence-level knowledge distillation.", |
| "author": "Yoon Kim and Alexander M Rush. 2016.", |
| "venue": "arXiv preprint arXiv:1606.07947.", |
| "url": null |
| } |
| }, |
| { |
| "18": { |
| "title": "Adam: A method for stochastic optimization.", |
| "author": "Diederik Kingma and Jimmy Ba. 2014.", |
| "venue": "International Conference on Learning Representations.", |
| "url": null |
| } |
| }, |
| { |
| "19": { |
| "title": "Statistical significance\ntests for machine translation evaluation.", |
| "author": "Philipp Koehn. 2004.", |
| "venue": "In Proceedings of the 2004 Conference on Empirical Methods in\nNatural Language Processing, pages 388\u2013395, Barcelona, Spain. Association\nfor Computational Linguistics.", |
| "url": "https://aclanthology.org/W04-3250" |
| } |
| }, |
| { |
| "20": { |
| "title": "On information and sufficiency.", |
| "author": "Solomon Kullback and Richard A Leibler. 1951.", |
| "venue": "The annals of mathematical statistics, 22(1):79\u201386.", |
| "url": null |
| } |
| }, |
| { |
| "21": { |
| "title": "Shallow-to-deep training for neural machine translation.", |
| "author": "Bei Li, Ziyang Wang, Hui Liu, Yufan Jiang, Quan Du, Tong Xiao, Huizhen Wang,\nand Jingbo Zhu. 2020.", |
| "venue": "In Proceedings of the 2020 Conference on Empirical Methods in\nNatural Language Processing (EMNLP), pages 995\u20131005, Online. Association\nfor Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2020.emnlp-main.72" |
| } |
| }, |
| { |
| "22": { |
| "title": "Structure compilation: trading structure for features.", |
| "author": "Percy Liang, Hal Daum\u00e9 III, and Dan Klein. 2008.", |
| "venue": "In Proceedings of the 25th international conference on Machine\nlearning, pages 592\u2013599.", |
| "url": null |
| } |
| }, |
| { |
| "23": { |
| "title": "Modeling\nbilingual conversational characteristics for neural chat translation.", |
| "author": "Yunlong Liang, Fandong Meng, Yufeng Chen, Jinan Xu, and Jie Zhou.\n2021a.", |
| "venue": "In Proceedings of the 59th Annual Meeting of the Association\nfor Computational Linguistics and the 11th International Joint Conference on\nNatural Language Processing (Volume 1: Long Papers), pages 5711\u20135724,\nOnline. Association for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2021.acl-long.444" |
| } |
| }, |
| { |
| "24": { |
| "title": "Scheduled\nmulti-task learning for neural chat translation.", |
| "author": "Yunlong Liang, Fandong Meng, Jinan Xu, Yufeng Chen, and Jie Zhou. 2022.", |
| "venue": "In Proceedings of the 60th Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Papers), pages 4375\u20134388,\nDublin, Ireland. Association for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2022.acl-long.300" |
| } |
| }, |
| { |
| "25": { |
| "title": "Towards making\nthe most of dialogue characteristics for neural chat translation.", |
| "author": "Yunlong Liang, Chulun Zhou, Fandong Meng, Jinan Xu, Yufeng Chen, Jinsong Su,\nand Jie Zhou. 2021b.", |
| "venue": "In Proceedings of the 2021 Conference on Empirical Methods in\nNatural Language Processing, pages 67\u201379, Online and Punta Cana, Dominican\nRepublic. Association for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2021.emnlp-main.6" |
| } |
| }, |
| { |
| "26": { |
| "title": "Understanding the difficulty of training transformers.", |
| "author": "Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, and Jiawei Han. 2020.", |
| "venue": "In Proceedings of the 2020 Conference on Empirical Methods in\nNatural Language Processing (EMNLP), pages 5747\u20135763, Online. Association\nfor Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2020.emnlp-main.463" |
| } |
| }, |
| { |
| "27": { |
| "title": "Confidence-aware scheduled sampling for neural machine translation.", |
| "author": "Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, and Jie Zhou.\n2021a.", |
| "venue": "In Findings of the Association for Computational Linguistics:\nACL-IJCNLP 2021, pages 2327\u20132337, Online. Association for Computational\nLinguistics.", |
| "url": "https://doi.org/10.18653/v1/2021.findings-acl.205" |
| } |
| }, |
| { |
| "28": { |
| "title": "Scheduled\nsampling based on decoding steps for neural machine translation.", |
| "author": "Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, and Jie Zhou.\n2021b.", |
| "venue": "In Proceedings of the 2021 Conference on Empirical Methods in\nNatural Language Processing, pages 3285\u20133296, Online and Punta Cana,\nDominican Republic. Association for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2021.emnlp-main.264" |
| } |
| }, |
| { |
| "29": { |
| "title": "Delight: Deep and light-weight transformer.", |
| "author": "Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, and\nHannaneh Hajishirzi. 2020.", |
| "venue": "arXiv preprint arXiv:2008.00623.", |
| "url": null |
| } |
| }, |
| { |
| "30": { |
| "title": "Improved knowledge distillation via teacher assistant.", |
| "author": "Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa,\nand Hassan Ghasemzadeh. 2020.", |
| "venue": "In Proceedings of the AAAI conference on artificial\nintelligence, volume 34, pages 5191\u20135198.", |
| "url": null |
| } |
| }, |
| { |
| "31": { |
| "title": "fairseq: A fast, extensible toolkit for sequence modeling.", |
| "author": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng,\nDavid Grangier, and Michael Auli. 2019.", |
| "venue": "In Proceedings of NAACL-HLT 2019: Demonstrations.", |
| "url": null |
| } |
| }, |
| { |
| "32": { |
| "title": "Scaling neural machine\ntranslation.", |
| "author": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018.", |
| "venue": "In Proceedings of the Third Conference on Machine Translation:\nResearch Papers, pages 1\u20139, Brussels, Belgium. Association for\nComputational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/W18-6301" |
| } |
| }, |
| { |
| "33": { |
| "title": "Better teacher better student: Dynamic prior knowledge for knowledge\ndistillation.", |
| "author": "Zengyu Qiu, Xinzhu Ma, Kunlin Yang, Chunya Liu, Jun Hou, Shuai Yi, and Wanli\nOuyang. 2022.", |
| "venue": "arXiv preprint arXiv:2206.06067.", |
| "url": null |
| } |
| }, |
| { |
| "34": { |
| "title": "COMET: A\nneural framework for MT evaluation.", |
| "author": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020.", |
| "venue": "In Proceedings of the 2020 Conference on Empirical Methods in\nNatural Language Processing (EMNLP), pages 2685\u20132702, Online. Association\nfor Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2020.emnlp-main.213" |
| } |
| }, |
| { |
| "35": { |
| "title": "Neural machine\ntranslation of rare words with subword units.", |
| "author": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016.", |
| "venue": "In Proceedings of the 54th Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Papers), pages 1715\u20131725,\nBerlin, Germany. Association for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/P16-1162" |
| } |
| }, |
| { |
| "36": { |
| "title": "Knowledge\ndistillation for multilingual unsupervised neural machine translation.", |
| "author": "Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, and Tiejun\nZhao. 2020.", |
| "venue": "In Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics, pages 3525\u20133535, Online. Association for\nComputational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2020.acl-main.324" |
| } |
| }, |
| { |
| "37": { |
| "title": "Multilingual neural machine translation with knowledge distillation.", |
| "author": "Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2019.", |
| "venue": "arXiv preprint arXiv:1902.10461.", |
| "url": null |
| } |
| }, |
| { |
| "38": { |
| "title": "Understanding and improving knowledge distillation.", |
| "author": "Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H Chi, and\nSagar Jain. 2020.", |
| "venue": "arXiv preprint arXiv:2002.03532.", |
| "url": null |
| } |
| }, |
| { |
| "39": { |
| "title": "Attention is all you need.", |
| "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,\nLukasz Kaiser, and Illia Polosukhin. 2017.", |
| "venue": "Advances in Neural Information Processing Systems, 30.", |
| "url": null |
| } |
| }, |
| { |
| "40": { |
| "title": "Selective\nknowledge distillation for neural machine translation.", |
| "author": "Fusheng Wang, Jianhao Yan, Fandong Meng, and Jie Zhou. 2021.", |
| "venue": "In Proceedings of the 59th Annual Meeting of the Association\nfor Computational Linguistics and the 11th International Joint Conference on\nNatural Language Processing (Volume 1: Long Papers), pages 6456\u20136466,\nOnline. Association for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2021.acl-long.504" |
| } |
| }, |
| { |
| "41": { |
| "title": "DeepNet: Scaling transformers to 1,000 layers.", |
| "author": "Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei.\n2022.", |
| "venue": "arXiv preprint arXiv:2203.00555.", |
| "url": null |
| } |
| }, |
| { |
| "42": { |
| "title": "Multiscale\ncollaborative deep models for neural machine translation.", |
| "author": "Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, and Weihua Luo.\n2020.", |
| "venue": "In Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics, pages 414\u2013426, Online. Association for\nComputational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2020.acl-main.40" |
| } |
| }, |
| { |
| "43": { |
| "title": "Why skip if you can combine: A simple knowledge distillation\ntechnique for intermediate layers.", |
| "author": "Yimeng Wu, Peyman Passban, Mehdi Rezagholizadeh, and Qun Liu. 2020.", |
| "venue": "arXiv preprint arXiv:2010.03034.", |
| "url": null |
| } |
| }, |
| { |
| "44": { |
| "title": "Target-side input augmentation for sequence to sequence generation.", |
| "author": "Shufang Xie, Ang Lv, Yingce Xia, Lijun Wu, Tao Qin, Tie-Yan Liu, and Rui Yan.\n2021.", |
| "venue": "In International Conference on Learning Representations.", |
| "url": null |
| } |
| }, |
| { |
| "45": { |
| "title": "SeqGAN: Sequence generative\nadversarial nets with policy gradient.", |
| "author": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017.", |
| "venue": "arXiv preprint arXiv:1609.05473.", |
| "url": "http://arxiv.org/abs/1609.05473" |
| } |
| }, |
| { |
| "46": { |
| "title": "Bridging the gap\nbetween training and inference for neural machine translation.", |
| "author": "Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019.", |
| "venue": "In Proceedings of the 57th Annual Meeting of the Association\nfor Computational Linguistics, pages 4334\u20134343, Florence, Italy.\nAssociation for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/P19-1426" |
| } |
| }, |
| { |
| "47": { |
| "title": "Confidence\nbased bidirectional global context aware training framework for neural\nmachine translation.", |
| "author": "Chulun Zhou, Fandong Meng, Jie Zhou, Min Zhang, Hongji Wang, and Jinsong Su.\n2022.", |
| "venue": "In Proceedings of the 60th Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Papers), pages 2878\u20132889,\nDublin, Ireland. Association for Computational Linguistics.", |
| "url": "https://doi.org/10.18653/v1/2022.acl-long.206" |
| } |
| }, |
| { |
| "48": { |
| "title": "Understanding knowledge distillation in non-autoregressive machine\ntranslation.", |
| "author": "Chunting Zhou, Graham Neubig, and Jiatao Gu. 2019.", |
| "venue": "arXiv preprint arXiv:1911.02727.", |
| "url": null |
| } |
| } |
| ], |
| "url": "http://arxiv.org/html/2305.08096v2" |
| } |