Context: The Premier of Victoria is the leader of the political party or coalition with the most seats in the Legislative Assembly. The Premier is the public face of government and, with cabinet, sets the legislative and political agenda. Cabinet consists of representatives elected to either house of parliament. It is responsible for managing areas of government that are not exclusively the Commonwealth's, by the Australian Constitution, such as education, health and law enforcement. The current Premier of Victoria is Daniel Andrews.
+
+# Single-best summaries
+
+PEGASUS: How many seats does the Premier of Victoria have in the Legislative Assembly?
+
+FROST: [CONTENT] Premier | Victoria | Legislative Assembly [SUMMARY] What does the Premier of Victoria have in the Legislative Assembly?
+
+# Nucleus Sampling: PEGASUS
+
+$\mathbf{s_1} \rightarrow$ The Premier of Victoria would have how many seats in the Legislative Assembly?
+
+$\mathbf{s}_2 \rightarrow$ What is the politician MP expect to have in Legislative Assembly?
+
+$\mathbf{s}_3 \rightarrow$ Aside from being the leader of a political party or coalition, how is the Premier of Victoria Geometry of the Legislative Assembly?
+
+$\mathbf{s}_4 \rightarrow$ How many Legislative Assembly seats is the Premier of Victoria?
+
+$\mathbf{s}_5 \rightarrow$ What are the Legislative Assembly seats?
+
+# Nucleus Sampling: FROST
+
+$\mathbf{c_1};\mathbf{s_1}\rightarrow [\mathrm{CONTENT}]$ criteria | Premier | Victoria | Coalition [SUMMARY] What is a Varied criteria for a Premier of Victoria to possess in a Coalition?
+
+$\mathbf{c_2};\mathbf{s_2}\rightarrow [\mathrm{CONTENTS}]$ Premier | Victoria | leader | party | coalition | Legislative Assembly [SUMMARY] The Premier of Victoria isThe leader of the political party or coalition with to what in the Legislative Assembly?
+
+$\mathbf{c_3};\mathbf{s_3}\rightarrow [\mathrm{CONTENT}]$ number | Legislative Assembly | seats | Premier [SUMMARY] What is the number of Legislative Assembly seats that the Premier holds?
+
+$\mathbf{c_4};\mathbf{s_4}\rightarrow [\mathrm{CONTENT}]$ piece | legislature | leader | party | mixture | members [SUMMARY] What piece of the legislature does the leader of the party have a mixture of members?
+
+$\mathbf{c}_5; \mathbf{s}_5 \rightarrow [\text{CONTENT}]$ Premier | Victoria | Legislative Assembly [SUMMARY] What does the Premier of Victoria have in the Legislative Assembly
+
+# Composition Sampling: FROST
+
+$\mathbf{c}_1; \mathbf{s}_1 \rightarrow [\text{CONTENT}]$ Premier | Victoria | Legislative Assembly [SUMMARY] What does the Premier of Victoria have in the Legislative Assembly?
+
+$\mathbf{c_2};\mathbf{s_2}\rightarrow [CONTENT]$ Premier | party | coalition | Legislative Assembly [SUMMARY] The Premier of the political party or coalition has what in the Legislative Assembly?
+
+$\mathbf{c}_3; \mathbf{s}_3 \rightarrow [\text{CONTENT}]$ Premier | Victoria | leader | party | Legislative Assembly [SUMMARY] The Premier of Victoria is the leader of the political party with what in the Legislative Assembly?
+
+$\mathbf{c}_4; \mathbf{s}_4 \rightarrow [\text{CONTENT}]$ Premier | Victoria | party | coalition [SUMMARY] What does the Premier of Victoria have in his political party or coalition?
+
+$\mathbf{c}_5; \mathbf{s}_5 \rightarrow [\text{CONTENT}]$ Premier | Victoria | leader | party | coalition | Legislative Assembly [SUMMARY] The Premier of Victoria is the leader of the political party or coalition with what in the Legislative Assembly?
+
+Figure 11: Example input passage with answer in boldface, human written question, and model predictions including diverse questions for the SQuAD Question Generation dataset. We highlight spans in orange that are not accurate with respect to the input context. We use $c*$ and $s*$ to denote different compositions and their corresponding questions.
\ No newline at end of file
diff --git a/awellcomposedtextishalfdonecompositionsamplingfordiverseconditionalgeneration/images.zip b/awellcomposedtextishalfdonecompositionsamplingfordiverseconditionalgeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2d395811ccb2b070081ab653f532c7bfac3f7f59
--- /dev/null
+++ b/awellcomposedtextishalfdonecompositionsamplingfordiverseconditionalgeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e95e9582aae0d7ac94632ff71fb3664042d9fe25d3cbfb6bd5f823a0e0092bc
+size 921495
diff --git a/awellcomposedtextishalfdonecompositionsamplingfordiverseconditionalgeneration/layout.json b/awellcomposedtextishalfdonecompositionsamplingfordiverseconditionalgeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0fd939fa46961f651a19d502c2580736b28e7d4b
--- /dev/null
+++ b/awellcomposedtextishalfdonecompositionsamplingfordiverseconditionalgeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:264ca0e70520a15590566a29c148d0013877587f4de2d1009e5f76e42f74cd28
+size 719099
diff --git a/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/a77a4ac0-32d4-45f8-957b-39785af43b28_content_list.json b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/a77a4ac0-32d4-45f8-957b-39785af43b28_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..83e6bb3535891923d8f4f66d493a8a9664a3db1a
--- /dev/null
+++ b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/a77a4ac0-32d4-45f8-957b-39785af43b28_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:696501a2df40f7fbf2526d5fe96c711e22d92c676b7d90d31fba64d3bec3bae3
+size 45323
diff --git a/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/a77a4ac0-32d4-45f8-957b-39785af43b28_model.json b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/a77a4ac0-32d4-45f8-957b-39785af43b28_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e11e410eb0ab0de779320a5f9a72e79c6d2168a1
--- /dev/null
+++ b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/a77a4ac0-32d4-45f8-957b-39785af43b28_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd34147319e367b640bf6ab6046843f75af6578549e7bf5bfbffae36cde45c8c
+size 54825
diff --git a/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/a77a4ac0-32d4-45f8-957b-39785af43b28_origin.pdf b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/a77a4ac0-32d4-45f8-957b-39785af43b28_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e35dc111cc171a4c2ef11b7a0f164cf96a7ce315
--- /dev/null
+++ b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/a77a4ac0-32d4-45f8-957b-39785af43b28_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20fd7fc0975167052c90009debf57a29594a58913e9fd3fbbb9f9daf78c8070d
+size 446210
diff --git a/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/full.md b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..aa88eeddcd50dc55cf1c6fe8c4237970c4ba29f4
--- /dev/null
+++ b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/full.md
@@ -0,0 +1,149 @@
+# "Is Whole Word Masking Always Better for Chinese BERT?": Probing on Chinese Grammatical Error Correction
+
+Yong Dai $^{1*}$ , Linyang Li $^{2*}$ , Cong Zhou $^{1*}$ , Zhangyin Feng $^{1}$ , Enbo Zhao $^{1}$ , Xipeng Qiu $^{2}$ , Piji Li $^{1}$ , Duyu Tang $^{1\dagger}$
+
+$^{1}$ Tencent AI Lab, China
+
+$^{2}$ Fudan University
+
+{yongdai,brannzhou,enbozhao,aifeng,duyutang}@tencent.com,
+
+{linyangli19,xpqiu}@fudan.edu.cn
+
+# Abstract
+
+Whole word masking (WWM), which masks all subwords corresponding to a word at once, makes a better English BERT model (Sennrich et al., 2016). For the Chinese language, however, there is no subword because each token is an atomic character. The meaning of a word in Chinese is different in that a word is a compositional unit consisting of multiple characters. Such difference motivates us to investigate whether WWM leads to better context understanding ability for Chinese BERT. To achieve this, we introduce two probing tasks related to grammatical error correction and ask pretrained models to revise or insert tokens in a masked language modeling manner. We construct a dataset including labels for 19,075 tokens in 10,448 sentences. We train three Chinese BERT models with standard character-level masking (CLM), WWM, and a combination of CLM and WWM, respectively. Our major findings are as follows: First, when one character needs to be inserted or replaced, the model trained with CLM performs the best. Second, when more than one character needs to be handled, WWM is the key to better performance. Finally, when being fine-tuned on sentence-level downstream tasks, models trained with different masking strategies perform comparably.
+
+# 1 Introduction
+
+BERT (Devlin et al., 2018) is a Transformer-based pretrained model, whose prosperity starts from the English language and gradually spreads to many other languages. The original BERT model is trained with character-level masking (CLM). A certain percentage (e.g. $15\%$) of tokens in the input sequence is masked and the model is trained to predict the masked tokens.
+
+It is helpful to note that a word in the input sequence of BERT can be broken into multiple wordpiece tokens (Wu et al., 2016). For example, the input sentence "She is undeniably brilliant" is converted to a wordpiece sequence "She is un ##deni ##ably brilliant", where "##" is a special prefix added to indicate that the token should be attached to the previous one. In this case, the word "undeniably" is broken into three wordpieces {"un", "##deni", "##ably"}. In standard masked language modeling, CLM may mask any one of them. If the token "##ably" is masked, it is easier for the model to complete the prediction task because "un" and "##deni" are informative prompts. To address this, whole word masking (WWM) masks all three subtokens (i.e., {"un", "##deni", "##ably"}) within a word at once. For Chinese, however, each token is an atomic character that cannot be broken into smaller pieces. Many Chinese words are compounds consisting of multiple characters (Wood and Connelly, 2009). For example, "手机" (cellphone) is a word consisting of two characters "手" (hand) and "机" (machine). Here, learning with WWM would lose the association among characters corresponding to a word.
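
The difference between the two masking schemes can be made concrete with a small sketch. The snippet below is illustrative only (not the authors' pretraining code): it assumes a Chinese sentence already segmented into words by an external tokenizer and applies CLM and WWM at the stated 15% rate.

```python
import random

# Toy example: a Chinese sentence segmented into words, each word a tuple of characters.
# Word segmentation is assumed to come from an external tokenizer (the paper uses Texsmart).
words = [("我",), ("的",), ("手", "机"), ("坏", "了")]
chars = [c for w in words for c in w]
MASK, RATE = "[MASK]", 0.15

def clm_mask(chars, rate=RATE):
    """Character-level masking: each character is masked independently."""
    return [MASK if random.random() < rate else c for c in chars]

def wwm_mask(words, rate=RATE):
    """Whole word masking: if a word is selected, all of its characters are masked at once."""
    out = []
    for w in words:
        if random.random() < rate:
            out.extend([MASK] * len(w))
        else:
            out.extend(w)
    return out

print(clm_mask(chars))   # e.g. ['我', '的', '手', '[MASK]', '坏', '了']
print(wwm_mask(words))   # e.g. ['我', '的', '[MASK]', '[MASK]', '坏', '了']
```

Under WWM the two characters of "手机" are always masked together, so the model never has to predict one of them from the other.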
+
+In this work, we introduce two probing tasks to study Chinese BERT model's ability on character-level understanding. The first probing task is character replacement. Given a sentence and a position where the corresponding character is erroneous, the task is to replace the erroneous character with the correct one. The second probing task is character insertion. Given a sentence and the positions where a given number of characters should be inserted, the task is to insert the correct characters. We leverage the benchmark dataset on grammatical error correction (Rao et al., 2020a) and create a dataset including labels for 19,075 tokens in 10,448 sentences.
+
+We train three baseline models based on the same text corpus of 80B characters using CLM, WWM, and both CLM and WWM, respectively. We have the following major findings. (1) When one character needs to be inserted or replaced, the model trained with CLM performs the best. Moreover, the model initialized from RoBERTa (Cui et al., 2019) and trained with WWM gradually gets worse with more training steps. (2) When more than one character needs to be handled, WWM is the key to better performance. (3) When evaluating sentence-level downstream tasks, the impact of these masking strategies is minimal and the models trained with them perform comparably.
+
+# 2 Our Probing Tasks
+
+In this work, we present two probing tasks with the goal of diagnosing the language understanding ability of Chinese BERT models. We present the tasks and dataset in this section.
+
+The first probing task is character replacement, which is a subtask of grammatical error correction. Given a sentence $s = \{x_{1}, x_{2}, \ldots, x_{i}, \ldots, x_{n}\}$ of $n$ characters and an erroneous span $es = [i, i + 1, \ldots, i + k - 1]$ of $k$ characters, the task is to replace $es$ with a new span of $k$ characters.
+
+The second probing task is character insertion, which is also a subtask of grammatical error correction. Given a sentence $s = \{x_{1}, x_{2}, \ldots, x_{i}, \ldots, x_{n}\}$ of $n$ characters, a position $i$ , and a fixed number $k$ , the task is to insert a span of $k$ characters between the index $i$ and $i + 1$ .
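
Both probes can be run directly with a masked language model: the erroneous (or missing) span is replaced by $k$ [MASK] tokens and the model's top candidates are read off for each position. The sketch below assumes a Hugging Face masked-LM checkpoint (bert-base-chinese is used purely for illustration; the paper probes its own CLM/WWM-pretrained models).

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative checkpoint only; swap in the model under investigation.
tok = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese").eval()

def probe_replacement(sentence, start, k, topk=10):
    """Replace k characters starting at 0-based index `start` with [MASK] and
    return the top-k candidate tokens for each masked position."""
    chars = list(sentence)
    chars[start:start + k] = [tok.mask_token] * k
    enc = tok("".join(chars), return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    return [tok.convert_ids_to_tokens(logits[p].topk(topk).indices.tolist())
            for p in mask_pos]

# Replacement example from Figure 2: the erroneous "害" in "破害" (index 6);
# Figure 2 reports "坏" as the top prediction for this position.
print(probe_replacement("我没有权利破害别人的生活", start=6, k=1))
```

The insertion probe works the same way after first splicing $k$ [MASK] tokens into the sentence at the given position.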
+
+We provide two examples of these two probing tasks with $k = 1$ in Figure 1. For the character replacement task, the original meaning of the sentence is "these are all my ideas". Due to the misuse of a character at the 7th position, its meaning changed significantly to "these are all my attention". Our character replacement task is to replace the misused character "注" with "主". For the character insertion task, what the writer wants to express is "Human is the most important factor". However, due to the lack of one character between the 5th and 6th positions, its meaning changed to "Human is the heaviest factor". The task is to insert "要" after the 5th position. Both tasks are also extended to multiple characters (i.e., $k \geq 2$); examples can be found in Section 3.2.
+
+Figure 1: Illustrative examples of two probing tasks. For character replacement (upper box), the highlighted character at the 7th position should be replaced with another one. For character insertion (bottom box), one character should be inserted after the 5th position. Translations in English are given in parentheses.
+
+We build a dataset based on the benchmark of Chinese Grammatical Error Diagnosis (CGED) from the years 2016, 2017, 2018 and 2020 (Lee et al., 2016; Rao et al., 2017, 2018, 2020b). The task of CGED seeks to identify grammatical errors in sentences written by non-native learners of Chinese (Yu et al., 2014). It includes four kinds of errors: insertion, replacement, redundant, and ordering. The CGED dataset consists of sentence pairs, each of which includes an erroneous sentence and an error-free sentence corrected by annotators. However, these sentence pairs do not provide information about erroneous positions, which is indispensable for the character replacement and character insertion tasks. To obtain such position information, we implement a modified character alignment algorithm (Bryant et al., 2017) tailored for the Chinese language. Through this algorithm, we obtain a dataset for insertion and replacement, both of which are suitable for examining the language learning ability of the pretrained model. We leave the redundant and ordering types to future work. The statistics of our dataset are detailed in Appendix A.
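
The alignment step can be approximated with a standard edit script over characters. The sketch below is a simplification using Python's difflib rather than the modified Bryant et al. (2017) algorithm used in the paper, but it shows how replacement and insertion spans with their positions fall out of an (erroneous, corrected) sentence pair.

```python
from difflib import SequenceMatcher

def extract_edits(erroneous, corrected):
    """Return (type, position, correct_span) tuples for replacement and insertion edits.
    Positions are 0-based character indices into the erroneous sentence."""
    edits = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, erroneous, corrected).get_opcodes():
        if tag == "replace":
            edits.append(("replacement", i1, corrected[j1:j2]))
        elif tag == "insert":
            edits.append(("insertion", i1, corrected[j1:j2]))
        # 'delete' corresponds to the redundant error type, which is left to future work.
    return edits

# A simplified insertion example: the missing "要" of "重要".
print(extract_edits("人是最重的因素", "人是最重要的因素"))
# -> [('insertion', 4, '要')]  (insert before 0-based index 4)
```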
+
+# 3 Experiments
+
+In this section, we first describe the BERT-style models that we examined, and then report numbers.
+
+# 3.1 Chinese BERT Models
+
+We describe the publicly available BERT models as well as the models we trained.
+
| Insertion | Length = 1 p@1 | p@10 | Length = 2 p@1 | p@10 | Length ≥ 3 p@1 | p@10 | Average p@1 | p@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-base | 76.0 | 97.0 | 37.2 | 76.0 | 14.4 | 50.1 | 42.5 | 74.4 |
| Ours-clm | 77.2 | 97.3 | 36.7 | 74.4 | 13.3 | 49.3 | 42.4 | 73.7 |
| Ours-wwm | 56.6 | 80.1 | 42.9 | 79.1 | 19.3 | 54.0 | 39.6 | 71.1 |
| Ours-clm-wwm | 71.3 | 95.1 | 42.6 | 80.9 | 20.6 | 53.0 | 44.8 | 76.3 |
| Replacement | p@1 | p@10 | p@1 | p@10 | p@1 | p@10 | p@1 | p@10 |
| BERT-base | 66.0 | 95.1 | 21.0 | 58.2 | 10.1 | 46.1 | 32.4 | 66.5 |
| Ours-clm | 67.4 | 96.6 | 20.4 | 58.3 | 7.4 | 36.9 | 31.7 | 63.9 |
| Ours-wwm | 34.8 | 68.2 | 25.7 | 65.3 | 7.4 | 35.2 | 22.6 | 56.2 |
| Ours-clm-wwm | 59.2 | 93.7 | 26.5 | 66.4 | 12.4 | 41.6 | 32.7 | 67.2 |
+
+Table 1: Probing results on character replacement and insertion.
+
| Input | Label | Prediction |
| --- | --- | --- |
| Character Replacement | | |
| 我没有权利破害别人的生活 (En: I have no right to destroy other people's lives.) | 坏 | 坏 (99.97%) |
| 代沟问题越来越深刻。 (En: The problem of generation gap is getting worse.) | 严重 | 严 (79.94%) 重 (91.85%) |
| Character Insertion | | |
| 吸烟不但对自己的健康好，而且对非吸烟者带来不好的影响。 (En: Smoking is not only bad for your health, but also bad to non-smokers.) | 不 | 不 (99.98%) |
| 我下次去北京的时候，一定要吃北京烤鸭，我们在北京吃过的 是越南料理等外国的 (En: Next time I go to Beijing, I can not miss the Peking Duck. What we have eaten in Beijing are Vietnamese cuisine and other foreign dishes.) | 饭菜 | 美 (40.66%) 食 (33.55%) |
+
+Figure 2: Top predictions of Ours-clm-wwm for the replacement and insertion types. For each position, the probability of the top prediction is given in parentheses. The model makes the correct prediction for the top three examples. For the bottom example, the prediction also makes sense, although it is different from the ground truth.
+
+As mentioned earlier, BERT-base (Devlin et al., 2018) $^4$ is trained with the standard MLM objective. $^5$ To make a fair comparison of CLM and WWM, we train three simple Chinese BERT baselines from scratch $^6$ : (1) Ours-clm: we train this model using CLM. (2) Ours-wwm: this model only differs in that it is trained with WWM. (3) Ours-clm-wwm: this model is trained with both CLM and WWM objectives. We train these three models on a text corpus of 80B characters consisting of news, wiki, and novel texts. For the WWM task, we use a public word segmentation tool, Texsmart (Zhang et al., 2020), to tokenize the raw data first. The mask rate is $15\%$, which is commonly used in existing works. We use a max sequence length of 512 and the Adam optimizer (Kingma and Ba, 2014) with a batch size of 8,192. We set the learning rate to 1e-4 with a linear schedule, 5k warmup steps, and 100k training steps in total. Models are trained on 64 Tesla V100 GPUs for about 7 days.
+
+# 3.2 Probing Results
+
+We present the results on the two probing tasks here. Models are evaluated by Prediction@k (p@k), denoting whether the ground truth for each position is covered in the top-k predictions. From Table 1, we can make the following conclusions. First, Ours-clm consistently performs better than Ours-wwm on probing tasks where one character needs to be replaced or inserted. We suppose this is because WWM would lose the association between characters corresponding to a word. Second, WWM is crucial for better performance when there is more than one character that needs to be corrected. This phenomenon can be observed from the results of Ours-wwm and Ours-clm-wwm, which both adopt WWM and perform better than Ours-clm. Third, pretrained with a mixture of CLM and WWM, Ours-clm-wwm performs better than Ours-wwm in the one-character setting and does better than Ours-clm when more than one character needs to be handled. For each probing task, two examples with predictions produced by Ours-clm-wwm are given in Figure 2.
+
+Figure 3: Model performance at different training steps on the probing task of character insertion. The top and bottom figures give the results evaluated on spans with one and two characters, respectively.
+
+# 3.3 Analysis
+
+To further analyze how CLM and WWM affect the performance on probing tasks, we initialized our model from RoBERTa (Cui et al., 2019) and further trained the baseline models. We show the performance of these models at different training steps on the insertion task. From Figure 3 (top), we can observe that as the number of training steps increases, the performance of Ours-wwm decreases.
+
+In addition, we also evaluate the performance of trained BERT models on downstream tasks with model parameters fine-tuned. The performance of Ours-clm-wwm is comparable with Ours-wwm and Ours-clm. More information can be found in Appendix C.
+
+# 4 Related Work
+
+We describe related studies on Chinese BERT model and probing of BERT, respectively.
+
+The authors of BERT (Devlin et al., 2018) provided the first Chinese BERT model which was trained on Chinese Wikipedia data. On top of that, Cui et al. (2019) trained RoBERTa-wwm-ext with WWM on extended data. Cui et al. (2020) further trained a Chinese ELECTRA model and MacBERT, both of which did not have [MASK] tokens. ELECTRA was trained with a token-level binary classification task, which determined whether a token was the original one or artificially replaced. In MacBERT, [MASK] tokens were replaced with synonyms and the model was trained with WWM and ngram masking. ERNIE (Sun et al., 2019) was trained with entity masking, similar to WWM yet tokens corresponding to an entity were masked at once. Language features are considered in more recent works. For example, AMBERT (Zhang and Li, 2020) and Lattice-BERT (Lai et al., 2021) both take word information into consideration. Chinese-BERT (Sun et al., 2021) utilizes pinyin and glyph of characters.
+
+Probing aims to examine the language understanding ability of pretrained models like BERT when model parameters are clamped, i.e., without being fine-tuned on downstream tasks. Petroni et al. (2019) study how well pretrained models learn factual knowledge. The idea is to design a natural language template with a [MASK] token, such as "the wife of Barack Obama is [MASK]". If the model predicts the correct answer "Michelle Obama", it shows that pretrained models learn factual knowledge to some extent. Similarly, Davison et al. (2019) study how pretrained models learn commonsense knowledge and Talmor et al. (2020) examine tasks that require symbolic understanding. Wang and Hu (2020) propose to probe Chinese BERT models in terms of linguistic and world knowledge.
+
+# 5 Conclusion
+
+In this work, we present two Chinese probing tasks, character insertion and character replacement. We provide three simple pretrained models dubbed Ours-clm, Ours-wwm, and Ours-clm-wwm, which are pretrained with CLM, WWM, and a combination of CLM and WWM, respectively. Ours-wwm is prone to losing the association between characters within a word, resulting in poor performance on probing tasks when one character needs to be inserted or replaced. Moreover, WWM plays a key role when two or more characters need to be corrected.
+
+# References
+
+Christopher Bryant, Mariano Felice, and Edward Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. Association for Computational Linguistics.
+Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 657-668, Online. Association for Computational Linguistics.
+Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for Chinese BERT. arXiv preprint arXiv:1906.08101.
+Joe Davison, Joshua Feldman, and Alexander M Rush. 2019. Commonsense knowledge mining from pretrained models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1173-1178.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2021. Lattice-bert: Leveraging multi-granularity representations in Chinese pre-trained language models. arXiv preprint arXiv:2104.07204.
+Lung-Hao Lee, Gaoqi Rao, Liang-Chih Yu, Endong Xun, Baolin Zhang, and Li-Ping Chang. 2016. Overview of NLP-TEA 2016 shared task for Chinese grammatical error diagnosis. In Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016), pages 40–48, Osaka, Japan. The COLING 2016 Organizing Committee.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066.
+Gaoqi Rao, Qi Gong, Baolin Zhang, and Endong Xun. 2018. Overview of NLPTEA-2018 share task Chinese grammatical error diagnosis. In Proceedings
+
+of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 42-51, Melbourne, Australia. Association for Computational Linguistics.
+Gaoqi Rao, Erhong Yang, and Baolin Zhang. 2020a. Overview of nlptea-2020 shared task for chinese grammatical error diagnosis. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 25-35.
+Gaoqi Rao, Erhong Yang, and Baolin Zhang. 2020b. Overview of NLPTEA-2020 shared task for Chinese grammatical error diagnosis. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 25-35, Suzhou, China. Association for Computational Linguistics.
+Gaoqi Rao, Baolin Zhang, Endong Xun, and Lung-Hao Lee. 2017. IJCNLP-2017 task 1: Chinese grammatical error diagnosis. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 1-8, Taipei, Taiwan. Asian Federation of Natural Language Processing.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.
+Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu, and Jiwei Li. 2021. Chinesebert: Chinese pretraining enhanced by glyph and pinyin information. arXiv preprint arXiv:2106.16038.
+Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics - on what language model pre-training captures. Transactions of the Association for Computational Linguistics, 8:743-758.
+Zhiruo Wang and Renfen Hu. 2020. Intrinsic knowledge evaluation on chinese language models. arXiv preprint arXiv:2011.14277.
+C. Wood and V. Connelly. 2009. Contemporary perspectives on reading and spelling.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
+
+Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. 2020a. CLUE: A Chinese language understanding evaluation benchmark. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4762-4772, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, et al. 2020b. Clue: A chinese language understanding evaluation benchmark. arXiv preprint arXiv:2004.05986.
+Liang-Chih Yu, Lung-Hao Lee, and Liping Chang. 2014. Overview of grammatical error diagnosis for learning chinese as a foreign language. In Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'14), pages 42-47.
+Haisong Zhang, Lemao Liu, Haiyun Jiang, Yangming Li, Enbo Zhao, Kun Xu, Linfeng Song, Suncong Zheng, Botong Zhou, Jianchen Zhu, Xiao Feng, Tao Chen, Tao Yang, Dong Yu, Feng Zhang, Zhanhui Kang, and Shuming Shi. 2020. Texsmart: A text understanding system for fine-grained ner and enhanced semantic analysis. arXiv preprint arXiv:2012.15639.
+Xinsong Zhang and Hang Li. 2020. Ambert: A pretrained language model with multi-grained tokenization. arXiv preprint arXiv:2008.11869.
+
+# A Statistics of our dataset
+
| | Replacement | Insertion | Total |
| --- | --- | --- | --- |
| Length = 1 | 5,522 | 4,555 | 10,077 |
| Length = 2 | 2,004 | 1,337 | 3,341 |
| Length ≥ 3 | 305 | 383 | 688 |
| No. sentences | 5,727 | 4,721 | 10,448 |
| No. spans | 7,831 | 6,275 | 14,106 |
| No. chars | 10,542 | 8,533 | 19,075 |
+
+Table 2: Statistics of our dataset.
+
+# B Probing results from models with different initialization
+
+We also verify the performance of models initialized from BERT (Devlin et al., 2018) and RoBERTa (Cui et al., 2019) on probing tasks. The results are detailed in Table 3, from which we can obtain consistent conclusions with the previous section.
+
+# C The evaluation on downstream tasks
+
+We test the performance of BERT-style models on tasks including text classification (TNEWS, IFLYTEK), sentence-pair semantic similarity (AFQMC), coreference resolution (WSC), keyword recognition (CSL), and natural language inference (OCNLI) (Xu et al., 2020a). We follow the standard fine-tuning hyper-parameters used in Devlin et al. (2018); Xu et al. (2020b); Lai et al. (2021) and report results on the development sets. The detailed results are shown in Table 4.
+
| Insertion | Initialization | Length = 1 p@1 | p@10 | Length = 2 p@1 | p@10 | Length ≥ 3 p@1 | p@10 | Average p@1 | p@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-base | | 76.0 | 97.0 | 37.2 | 76.0 | 14.4 | 50.1 | 42.5 | 74.4 |
| Ours-clm | from scratch | 77.2 | 97.3 | 36.7 | 74.4 | 13.3 | 49.3 | 42.4 | 73.7 |
| Ours-wwm | | 56.6 | 80.1 | 42.9 | 79.1 | 19.3 | 54.0 | 39.6 | 71.1 |
| Ours-clm-wwm | | 71.3 | 95.1 | 42.6 | 80.9 | 20.6 | 53.0 | 44.8 | 76.3 |
| Ours-clm | from BERT | 79.2 | 97.7 | 40.0 | 77.6 | 16.2 | 53.5 | 45.1 | 76.3 |
| Ours-wwm | | 61.2 | 87.7 | 43.4 | 79.4 | 20.1 | 56.4 | 41.6 | 74.5 |
| Ours-clm-wwm | | 73.1 | 96.1 | 41.8 | 80.6 | 20.6 | 56.7 | 45.2 | 77.8 |
| Ours-clm | from RoBERTa | 79.4 | 97.9 | 42.0 | 80.4 | 20.6 | 52.3 | 47.3 | 76.9 |
| Ours-wwm | | 61.4 | 87.9 | 44.3 | 79.9 | 20.1 | 59.3 | 41.9 | 75.7 |
| Ours-clm-wwm | | 77.3 | 97.5 | 46.8 | 83.3 | 22.5 | 58.7 | 48.9 | 79.8 |
| Replacement | | p@1 | p@10 | p@1 | p@10 | p@1 | p@10 | p@1 | p@10 |
| BERT-base | | 66.0 | 95.1 | 21.0 | 58.2 | 10.1 | 46.1 | 32.4 | 66.5 |
| Ours-clm | from scratch | 67.4 | 96.6 | 20.4 | 58.3 | 7.4 | 36.9 | 31.7 | 63.9 |
| Ours-wwm | | 34.8 | 68.2 | 25.7 | 65.3 | 7.4 | 35.2 | 22.6 | 56.2 |
| Ours-clm-wwm | | 59.2 | 93.7 | 26.5 | 66.4 | 12.4 | 41.6 | 32.7 | 67.2 |
| Ours-clm | from BERT | 69.0 | 96.9 | 24.5 | 64.7 | 8.4 | 47.3 | 34.0 | 69.6 |
| Ours-wwm | | 40.6 | 81.6 | 27.2 | 67.9 | 8.4 | 39.4 | 25.4 | 63.0 |
| Ours-clm-wwm | | 61.6 | 94.9 | 27.6 | 67.8 | 10.4 | 47.0 | 33.2 | 69.9 |
| Ours-clm | from RoBERTa | 69.7 | 96.8 | 26.7 | 68 | 12.1 | 51.7 | 36.2 | 72.2 |
| Ours-wwm | | 41.7 | 80.9 | 28.2 | 68.2 | 12.4 | 47.2 | 27.4 | 65.4 |
| Ours-clm-wwm | | 67.3 | 96.7 | 28.4 | 69.7 | 15.7 | 54.2 | 37.1 | 73.5 |
+
+Table 3: Probing results from models with different initialization.
+
| Model | Initialization | TNEWS | IFLYTEK | AFQMC | OCNLI | WSC | CSL | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-base | | 57.1 | 61.4 | 74.2 | 75.2 | 78.6 | 81.8 | 71.4 |
| Ours-clm | | 57.3 | 60.3 | 72.8 | 73.9 | 79.3 | 68.7 | 68.7 |
| Ours-wwm | from scratch | 57.6 | 60.9 | 73.8 | 75.4 | 81.9 | 75.4 | 70.8 |
| Ours-clm-wwm | | 57.3 | 60.3 | 72.3 | 75.6 | 79.0 | 79.5 | 70.7 |
| Ours-clm | | 57.6 | 60.6 | 72.8 | 75.5 | 79.3 | 80.1 | 71.0 |
| Ours-wwm | from BERT | 58.3 | 60.8 | 71.73 | 76.1 | 79.9 | 80.7 | 71.3 |
| Ours-clm-wwm | | 58.1 | 60.8 | 72.3 | 75.8 | 80.3 | 79.9 | 71.2 |
| Ours-clm | | 57.9 | 60.8 | 74.7 | 75.7 | 83.1 | 82.1 | 72.4 |
| Ours-wwm | from RoBERTa | 58.1 | 61.1 | 73.9 | 76.0 | 82.6 | 81.7 | 72.2 |
| Ours-clm-wwm | | 58.1 | 61.0 | 74.0 | 75.9 | 84.0 | 81.8 | 72.5 |
+
+Table 4: Evaluation results on the dev set of each downstream task. Model parameters are fine-tuned.
\ No newline at end of file
diff --git a/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/images.zip b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4d69c7acba2bb158de8c3c3749a26954741da844
--- /dev/null
+++ b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d69e943e78bb6f43c19f6adb724375de22b6ca31e458d11ed722e05ba756169
+size 460438
diff --git a/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/layout.json b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c09e403c14f4042857f3dcdc73b98bccfb07cc11
--- /dev/null
+++ b/iswholewordmaskingalwaysbetterforchinesebertprobingonchinesegrammaticalerrorcorrection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9f4c05eea80baea8f758bad829d46b7d3b27f36868e75f2d59de9406341f493
+size 192875
diff --git a/translationerrordetectionasrationaleextraction/3554ad1e-a5ce-42b3-b353-99800f422f87_content_list.json b/translationerrordetectionasrationaleextraction/3554ad1e-a5ce-42b3-b353-99800f422f87_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..06b709f549698a8c3124d3e44f102e252560eb57
--- /dev/null
+++ b/translationerrordetectionasrationaleextraction/3554ad1e-a5ce-42b3-b353-99800f422f87_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b1c3ca6d1c23df0550d574819b283f9733e378a56b1a894fb9b2c66138eeafd
+size 74529
diff --git a/translationerrordetectionasrationaleextraction/3554ad1e-a5ce-42b3-b353-99800f422f87_model.json b/translationerrordetectionasrationaleextraction/3554ad1e-a5ce-42b3-b353-99800f422f87_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5c76bd8a79651e14b5898840f1f6fefd006332a2
--- /dev/null
+++ b/translationerrordetectionasrationaleextraction/3554ad1e-a5ce-42b3-b353-99800f422f87_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18af5c97172c5581e038ea0b3136c47e969febf307bae05edecc699efb7720a8
+size 90770
diff --git a/translationerrordetectionasrationaleextraction/3554ad1e-a5ce-42b3-b353-99800f422f87_origin.pdf b/translationerrordetectionasrationaleextraction/3554ad1e-a5ce-42b3-b353-99800f422f87_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5650af0448e2b98369ba6ae421923700852501d3
--- /dev/null
+++ b/translationerrordetectionasrationaleextraction/3554ad1e-a5ce-42b3-b353-99800f422f87_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2927709e88c789090e9b899d9f7dfc638d16ed308e48315a3c87b4247bcc77e4
+size 449340
diff --git a/translationerrordetectionasrationaleextraction/full.md b/translationerrordetectionasrationaleextraction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ae5f62cb41f9fdbb250d7b8f7fa5979c76d37960
--- /dev/null
+++ b/translationerrordetectionasrationaleextraction/full.md
@@ -0,0 +1,284 @@
+# Translation Error Detection as Rationale Extraction
+
+Marina Fomicheva
+
+University of Sheffield
+
+m.fomicheva@sheffield.ac.uk
+
+Lucia Specia
+
+Imperial College London
+
+l.specia@imperial.ac.uk
+
+Nikolaos Aletras
+
+University of Sheffield
+
+n.aletras@sheffield.ac.uk
+
+# Abstract
+
+Recent Quality Estimation (QE) models based on multilingual pre-trained representations have achieved very competitive results in predicting the overall quality of translated sentences. However, detecting specifically which translated words are incorrect is a more challenging task, especially when dealing with limited amounts of training data. We hypothesize that, not unlike humans, successful QE models rely on translation errors to predict overall sentence quality. By exploring a set of feature attribution methods that assign relevance scores to the inputs to explain model predictions, we study the behaviour of state-of-the-art sentence-level QE models and show that explanations (i.e. rationales) extracted from these models can indeed be used to detect translation errors. We therefore (i) introduce a novel semi-supervised method for word-level QE; and (ii) propose to use the QE task as a new benchmark for evaluating the plausibility of feature attribution, i.e. how interpretable model explanations are to humans.
+
+# 1 Introduction
+
+Quality Estimation (QE) is the task of predicting Machine Translation (MT) quality at inference time, when no gold standard human translation is available (Blatz et al., 2004; Specia et al., 2009). QE can be framed as a word-level or a sentence-level task. Both tasks have numerous practical applications, such as deciding whether a given MT output can be published without editing, or highlighting potential critical errors. Current QE approaches fine-tune powerful representations from pre-trained multilingual encoders such as BERT (Devlin et al., 2018) or XLM-R (Conneau et al., 2019). In the recent Shared Task on QE at WMT2020 (Specia et al., 2020) these approaches have achieved very high performance at predicting sentence-level translation quality (up to 0.9 Pearson correlation with human judgements for some language pairs). However, as evidenced by these results, the accuracy of word-level prediction still leaves room for improvement. This is partly due to the limited amount of training data. Word-level error annotation is especially time-consuming and expensive, as it requires work from bilingual experts. In this work we introduce a new semi-supervised approach to word-level QE that removes the need for training data at word level. To achieve this, we propose addressing QE as a rationale extraction task (Lei et al., 2016).
+
+Explainability is a broad area aimed at explaining predictions of machine learning models (Lipton, 2016). Rationale extraction methods achieve this by selecting a portion of the input that justifies model output for a given data point. In translation, human perception of quality is guided by the presence of translation errors (Freitag et al., 2021). We hypothesize that sentence-level QE models also rely on translation errors to make predictions. If that is the case, explanations for sentence-level predictions can be used to detect translation errors, thus removing the need for word-level labeled training data. To extract model explanations, we use post hoc rationale extraction methods (Sundararajan et al., 2017) which try to explain the predictions of a given model (as opposed to modifying its architecture or introducing constraints during training), since one of our goals is to study to what extent existing QE models rely on the same information as humans to make predictions.
+
+At the same time, by using word-level errors as explanations for sentence-level QE scores, we introduce a new benchmark for evaluating explainability methods. Recent work has introduced various datasets for measuring the agreement between rationales extracted from NLP models and those provided by humans (DeYoung et al., 2019). QE is different from these datasets in various important aspects. First, it is a regression task, as opposed to the binary or multiclass text classification mainly explored in previous work. Second, it is a multilingual task where the output score captures the relationship between source and target sentences. Finally, manual annotation of translation errors is a practical task with a long tradition in MT research and translation studies (Lommel et al., 2014), and thus offers an interesting alternative to human explanations collected specifically for evaluating rationale extraction methods.
+
+Our main contributions are:
+
+- We introduce a novel semi-supervised approach for word-level QE. We provide practical recipes on how feature attribution methods can be used to derive information on translation errors from sentence-level models.
+- We provide insights into the behaviour of state-of-the-art (SOTA) QE models by analysing attributions to different parts of the input sequence (source vs. target sentence, correct words vs. errors) at different hidden layers.
+- We propose to use the QE task as a new benchmark for evaluating the plausibility of feature attribution explanations, i.e. how interpretable model explanations are to humans (Jacovi and Goldberg, 2020).
+
+# 2 Background and Related Work
+
+Quality Estimation Current SOTA models in sentence-level QE, which is typically framed as a regression task, mainly use multilingual representations from pre-trained transformers (Devlin et al., 2018), notably XLM-R (Conneau et al., 2019). The input to a sentence-level QE model is a concatenation of the source and translated sentences, separated by the [SEP] token. The sequence is encoded by the pre-trained Transformer model, and the [CLS] token is passed through a multilayer perceptron (MLP) layer to obtain a sentence-level score. During fine-tuning both the parameters of the pre-trained model and the parameters corresponding to the MLP layer are updated.
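
A minimal sketch of this architecture, assuming the Hugging Face transformers API and an XLM-R checkpoint; the regression head and pooling below are illustrative choices rather than the exact TransQuest configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentenceQE(nn.Module):
    """Sentence-level QE: encode the (source, translation) pair and regress a quality score."""
    def __init__(self, name="xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, **enc):
        h = self.encoder(**enc).last_hidden_state   # (batch, seq_len, hidden)
        return self.head(h[:, 0]).squeeze(-1)       # score from the first ([CLS]/<s>) token

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = SentenceQE()
# The tokenizer joins the two sentences with the model's separator tokens.
enc = tok("Acesta este un exemplu.", "This is an example.", return_tensors="pt")
print(model(**enc))  # untrained score; fine-tuning updates encoder and head jointly
```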
+
+Word-level QE is typically addressed as a binary classification task, where the QE model needs to predict a binary label indicating whether a word is correct or wrong for each word in the MT output (Lee, 2020). As illustrated in Figure 1 (left), some supervised approaches use both sentence-level and word-level objectives in a multi-task setting, which results in superior performance (Kim et al., 2017; Lee, 2020). Methods that do not require word-level training data either need access to the MT model (Rikters and Fishel, 2017; Fomicheva et al., 2020b), or still treat the problem as a supervised task but use synthetically generated data for supervision (Tuan et al., 2021).
+
+Rationale Extraction for NLP SOTA NLP models based on deep neural networks achieve high performance in a variety of tasks, often at the cost of interpretability (Lipton, 2016). Recent work aims to address this issue by focusing on two different goals. On the one hand, the aim is to produce justifications for model predictions that are plausible to the users, in order to increase users' trust (Ribeiro et al., 2016). On the other hand, the aim is to reveal the inner workings of the model and faithfully explain its predictions, so the explanation can be useful to model developers (Jacovi and Goldberg, 2020).
+
+Typically, explainability methods operate by selecting a portion of the input that justifies model prediction for a single data point. This can be done either by modifying the model architecture, or by trying to explain the predictions of a given model. The first type of approaches (a.k.a. rationalization by construction) involves imposing restrictions on the generated rationales to satisfy certain constraints, e.g. compactness (Yu et al., 2019; Chalkidis et al., 2021). Note that such restrictions often result in lower performance and indeed are not guaranteed to explain the behaviour of an unconstrained model (Jain et al., 2020). The second type of approaches (the so called post hoc) usually rely on feature attribution methods, which assign an importance value to each input feature of a network (Sundararajan et al., 2017; Schulz et al., 2020). These methods do not allow for introducing useful biases during training, but focus on faithfully explaining model behaviour.
+
+Feature attribution has a long tradition in image recognition tasks (Simonyan et al., 2013) and only recently have been applied to some NLP tasks, most commonly text classification (DeYoung et al., 2019). QE is fundamentally different from text classification where clues are typically separate words or phrases (Zaidan et al., 2007) which often can be considered independently of the rest of the text. This independence assumption does not hold for the task of evaluating translation quality where a word cannot be identified as a clue (e.g. translation error) without considering the surrounding context.
+
+
+Figure 1: Fully supervised word-level QE (left) and semi-supervised word-level QE as rationale extraction (right). Dashed and solid lines represent training and test time, respectively.
+
+
+
+Furthermore, SOTA NLP models based on contextualized representations for input words make rationale extraction especially challenging, as the representation for a given word can encode not only the word identity but also its interactions with other words in the text. Recent work has revealed various interesting properties that characterize the information flow through hidden layers in deep transformer models (Voita et al., 2019; De Cao et al., 2020; Yun et al., 2021). We provide additional insights on this topic in Section 5.2.
+
+# 3 Translation Error Prediction as Rationale Extraction
+
+We propose framing semi-supervised word-level QE as rationale extraction from sentence-level QE models. Instead of training a dedicated supervised model for word-level prediction, we propose deriving word-level scores from a strong sentence-level QE model by extracting explanations for model predictions (see Figure 1 (right)). Given a trained sentence-level QE model and the test data, rationale extraction methods detect the parts of the input that are relevant for model predictions on a sample-by-sample basis. We hypothesize that words with the highest relevance scores should correspond to actual translation errors on word-level.
+
+# 3.1 Approach
+
+More formally, given the source sequence $\mathbf{x}^S = x_1^S,\dots,x_{|S|}^S$ , the target sequence $\mathbf{x}^T = x_1^T,\dots,x_{|T|}^T$ and the QE model $M(\mathbf{x}^S,\mathbf{x}^T) = \hat{y}$ that predicts sentence MT quality, a feature attribution method produces a vector of attribution scores $\mathbf{a} = a_1,\dots,a_{|S + T|}$ , which represent the contribution of each source and target word to the prediction $\hat{y}$ .
+
+Crucially, no word-level labels are required for training. For evaluation, the attribution scores are compared against binary gold labels $\mathbf{w} = w_{1},\dots,w_{|T|}\in \{0,1\}$ indicating whether each given word in the target sequence is an error or correct.
+
+The predictive models for QE explored in our experiments are built by fine-tuning multilingual representations from pre-trained transformers. The Transformer model starts from context-agnostic representations consisting of positional and token embeddings. These representations are passed through a set of hidden layers where at each layer the representations are iteratively updated via multi-head attention. This allows the hidden representation for each token to encode information on other words in the sentence.
+
+We note that attribution to the input tokens or to the embedding layer can hardly succeed in detecting translation errors, as those cannot be identified independently from the context given by the source and target sentence. In this work, we perform feature attribution to hidden states at different layers and analyse which layer results in attribution scores that best correspond to translation errors.
+
+# 3.2 Feature Attribution Methods
+
+Feature attribution methods can be divided into those providing explanations by simplification, such as LIME (Ribeiro et al., 2016); gradient-based explanations (Sundararajan et al., 2017); and perturbation-based explanations (Schulz et al., 2020).
+
+We select three popular methods for rationale extraction, which (i) do not require modifying the model architecture or re-training the model and (ii) allow attribution to hidden states. For comparison, we also use LIME, which operates directly on the input text. We note that this set is not exhaustive of SOTA rationale extraction methods. Our main goal is not to conduct a comparative study of feature attribution methods but rather to test whether it is possible to address word-level QE as a rationale extraction task without any word-level supervision.
+
+LIME (Ribeiro et al., 2016) is a simplification-based explanation technique, which fits a sparse linear model in the vicinity of each test instance, to approximate the decision boundary of the complex model. The data for fitting the linear model is produced by perturbing the given instance and computing model predictions. Linear model coefficients are then used as attribution scores for each input feature. For NLP tasks features correspond to input tokens and perturbation is achieved by randomly removing words from the sequence.
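
A rough sketch of this idea adapted to a regression QE model (not the reference LIME implementation): target words are randomly dropped, the sentence-level model is queried on each perturbation, and a linear model is fitted to the resulting scores. The `qe_score` callable is an assumption here; it is taken to wrap a trained sentence-level model and to return a score where higher means better quality (as with DA).

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_attributions(qe_score, src, tgt_words, n_samples=500, seed=0):
    """Perturbation-based word scores: coefficients of a local linear model."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, len(tgt_words)))  # 1 = keep the word
    preds = np.array([
        qe_score(src, " ".join(w for w, keep in zip(tgt_words, row) if keep))
        for row in masks
    ])
    lin = Ridge(alpha=1.0).fit(masks, preds)
    # Keeping an erroneous word should lower the predicted quality, giving it a
    # negative coefficient; negate so likely errors receive the highest scores.
    return -lin.coef_
```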
+
+Information Bottleneck is a perturbation-based method originally proposed by Schulz et al. (2020) for the task of image recognition. The method applies the idea of information bottleneck (Tishby and Zaslavsky, 2015) for feature attribution. Specifically, it injects noise into an intermediate layer representation. The amount of noise injected at the position corresponding to each input feature is optimized to minimize the loss of the main task while at the same time maximizing the overall amount of injected noise.
+
+Integrated Gradients (Sundararajan et al., 2017) is a gradient-based method similar to the traditional saliency and input*gradients approaches (Simonyan et al., 2013). The latter takes the signed partial derivatives of the output with respect to the input and multiplies them by the input itself. Intuitively, this is analogous to inspecting the products of model coefficients and feature values in linear models (Sundararajan et al., 2017). Integrated gradients improves on that by defining a baseline input and computing the average gradient while the input varies along a linear path from the baseline input to the actual input. The baseline is defined by the user depending on the task. For image recognition, a black image is used as the baseline. It is not clear what such a baseline representation should be in the case of language tasks. Here, we select a zero baseline for simplicity. Better results can be achieved with a more informed choice of baseline and we leave this to future work.
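
A simplified integrated-gradients sketch with the zero baseline described above: it attributes a scalar QE score to an arbitrary tensor of hidden states, assuming a hypothetical wrapper `forward_from_hidden` that runs the remainder of the QE model from that layer and returns the predicted sentence score.

```python
import torch

def integrated_gradients(forward_from_hidden, hidden, steps=50):
    """Integrated gradients of a scalar QE score w.r.t. a (seq_len, dim) tensor of
    hidden states, using a zero baseline."""
    hidden = hidden.detach()
    baseline = torch.zeros_like(hidden)
    total_grad = torch.zeros_like(hidden)
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        point = (baseline + alpha * (hidden - baseline)).detach().requires_grad_(True)
        score = forward_from_hidden(point)           # scalar prediction
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    # Average gradient along the path, scaled by the input difference, summed over dims.
    return ((hidden - baseline) * total_grad / steps).sum(dim=-1)
```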
+
+Attention Finally, we test attention as an attribution method. Self-attention mechanisms have been widely studied in the context of explainability (Jain and Wallace, 2019; Serrano and Smith, 2019; Bujel et al., 2021). To compute a single attention score for a transformer-based model with multi-head attention, we average the weights across the different attention heads.
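
A small sketch of this attention-based scoring, assuming a transformers encoder called with `output_attentions=True`; reducing over query positions by summing the attention each token receives is an illustrative choice made here, not one specified above.

```python
import torch

def attention_attributions(encoder, enc, layer=10):
    """Average multi-head attention weights at one layer and score each token by the
    total attention it receives from all query positions."""
    with torch.no_grad():
        out = encoder(**enc, output_attentions=True)
    attn = out.attentions[layer][0]      # (num_heads, seq_len, seq_len) for one sentence
    return attn.mean(dim=0).sum(dim=0)   # head average, then column sum per key token
```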
+
+# 4 Experimental Setup
+
+# 4.1 Evaluation Metrics
+
+Given a test set with both sentence-level and word-level gold labels, we want to measure to what extent the words with the highest attributions according to the QE model correspond to human annotations for MT errors. Note that we cannot use the evaluation metrics traditionally employed for assessing the performance of word-level QE, such as F1 score and Matthews correlation coefficient (Specia et al., 2020), as they require binary predictions while feature attribution methods return continuous scores. Instead, we rely on metrics based on class probabilities (Atanasova et al., 2020). Since attribution methods proceed on an instance-by-instance basis and the scores produced for different instances are not necessarily comparable, we compute the evaluation metrics for each instance separately and average the results across all instances in the test set.
+
+AUC score For each instance, we compute the area under the receiver operating characteristic curve (AUC score) to compare the continuous attribution scores $\mathbf{a}$ against binary gold labels $\mathbf{w}$. For a test set with $N$ instances:
+
+$$
+\mathrm{AUC} = \frac{1}{N} \sum_{n} \mathrm{AUC}_{n}\left(\mathbf{w}_{n}, \mathbf{a}_{n}^{\mathbf{x}^{T}}\right) \tag{1}
+$$
+
+Average Precision AUC score can be overly optimistic for imbalanced data. Therefore, we also use Average Precision (AP).
+
+Recall at Top-K In addition, we report the Recall-at-Top-K commonly used in information retrieval. Applied to our setting, this metric computes the proportion of words with the highest attribution that correspond to translation errors against the total number of errors in the MT output. Thus, for a given instance (we omit the instance index $n$ here for simplicity):
+
+$$
+\mathrm{Rec@TopK} = \frac{1}{k} \sum_{j \in \mathbf{e}_{1:k}} \mathbf{w}_{j} \tag{2}
+$$
+
| | Ro-En | Et-En | Ne-En |
| --- | --- | --- | --- |
| Pearson r | 0.84 | 0.66 | 0.66 |
| Average DA | 68.9 | 55.2 | 36.6 |
| Num. sentences (all data) | 1,000 | 1,000 | 1,000 |
| Num. sentences (DA < 70) | 438 | 640 | 935 |
| Error rate (all data) | 0.21 | 0.28 | 0.65 |
| Error rate (DA < 70) | 0.35 | 0.36 | 0.66 |
+
+Table 1: General statistics for MLQE-PE test sets: performance of sentence-level QE models (Pearson r), average DA score, total number of sentences in the test set, number of sentences with DA $< 70$ , as well as error rate in the full test set and in the subset of selected sentences.
+
+where $\mathbf{e} = \text{argsort}(\mathbf{a}^{\mathbf{x}^T})$ is a sequence of indices corresponding to target words sorted by attribution score from highest to lowest, and $k$ is the number of errors in the sentence. We then average the result across all instances in the test set.
+
+Accuracy at Top-1 Finally, we report the proportion of sentences where the word with the highest attribution in the target corresponds to a translation error.
+
+$$
+\mathrm{Acc@Top1} = \frac{1}{N} \sum_{n} I\left[\mathbf{w}_{\mathbf{e}_{1}} = 1\right] \tag{3}
+$$
+
+We note that the above metrics are not defined for sentences where all words are labelled as errors or correct. We exclude such sentences from evaluation.
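
The per-instance metrics above can be computed with a few lines of scikit-learn and NumPy; the sketch below takes the binary gold labels w (1 = error) and attribution scores a for the target words of one sentence and skips the undefined cases as described.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def instance_metrics(w, a):
    """Metrics for a single sentence; return None if all labels are 0 or all are 1."""
    w, a = np.asarray(w), np.asarray(a)
    if w.min() == w.max():
        return None
    order = np.argsort(-a)        # target indices sorted by attribution, descending
    k = int(w.sum())              # number of errors in the sentence
    return {
        "AUC": roc_auc_score(w, a),
        "AP": average_precision_score(w, a),
        "Rec@TopK": w[order[:k]].mean(),
        "Acc@Top1": float(w[order[0]] == 1),
    }

# Corpus-level scores are the average of each metric over all valid test instances.
```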
+
+# 4.2 Sentence-level QE
+
+For sentence-level QE, we rely on TransQuest (Ranasinghe et al., 2020b), which was one of the top submissions to the WMT20 QE Shared Task (Specia et al., 2020). To facilitate the use of the feature attribution methods described above, we use our own implementation of the approach proposed by Ranasinghe et al. (2020b,a). It achieves comparable results to the ones reported by the authors. Due to limited computational resources we use XLM-R-base as the underlying pre-trained Transformer model. We expect that using a more powerful sentence-level model would result in higher performance.
+
+# 4.3 Data
+
+We use the MLQE-PE (Multilingual Quality Estimation and Post-Editing) dataset described in Fomicheva et al. (2020a).$^3$ MLQE-PE provides various types of manual MT evaluation for multiple language pairs. The MT outputs were assigned a sentence-level score inspired by the Direct Assessment (DA) annotation (Graham et al., 2015; Guzmán et al., 2019) on a continuous [0, 100] scale capturing overall translation quality. In addition, the MT outputs were independently post-edited by professional translators. MT outputs and their corresponding post-edited versions were automatically aligned in order to derive word-level binary labels ("BAD" if the word was corrected, and "OK" otherwise), as well as their HTER score that corresponds to the average number of "BAD" labels in a sentence (Snover et al., 2006). We use these labels to evaluate the performance of different feature attribution approaches. We treat "BAD" labels as the positive class and "OK" labels as the negative class in all of our experiments.$^4$ We do not evaluate attribution to source words.
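
Because the model's subword tokenization differs from the tokenization of the gold labels (see the footnotes), subword attributions have to be pooled to word level. Following the max-pooling choice mentioned there, a minimal sketch (the word-to-subword alignment is assumed to be available):

```python
def pool_to_words(subword_scores, word_to_subwords):
    """Max-pool subword attribution scores to word level.
    word_to_subwords: list of subword index lists, one per annotated target word."""
    return [max(subword_scores[j] for j in idxs) for idxs in word_to_subwords]
```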
+
+It is worth noting that word-level labels derived from post-editing do not capture error severity and do not always correspond to translation errors. However, due to the costs of collecting detailed error annotations for the substantially large amounts of data required to train SOTA models, this is a standard way of approximating error annotation in QE (Specia et al., 2020).
+
+To circumvent the above limitation, we leverage both types of sentence-level annotation (DA and HTER scores) in our experiments. We train sentence-level QE models with (i) DA scores and (ii) HTER scores. We evaluate both types of models using the word-labels derived from post-editing as described above. We then conduct evaluation as follows:
+
+1. We first evaluate explanations for DA-based models on the sentences with a sentence-level DA score lower than 70. $^6$
+
+3https://github.com/sheffieldnlp/mlqe-pe
+4The tokenization used internally by XLM-R model is different from the tokenization used for producing word-level error labels. To map the attribution scores to the word labels we take their maximum value.
+5Despite these limitations, we have chosen this dataset because it provides (i) a sufficient amount of word-level training data, which allows us to compare our approach to a SOTA supervised approach; and (ii) access to the neural MT models that were used to produce the translations, thus enabling a comparison to an unsupervised glass-box approach.
+6This threshold is selected based on the annotation guidelines described in Fomicheva et al. (2020a), as sentences assigned a score lower than 70 are guaranteed to have translation errors.
+
+| Method | Romanian-English | | | | Estonian-English | | | | Nepalese-English | | | |
| | AUC | AP | A@1 | R@K | AUC | AP | A@1 | R@K | AUC | AP | A@1 | R@K |
| Gradients | 0.75 | 0.72 | 0.84 | 0.62 | 0.66 | 0.63 | 0.72 | 0.52 | 0.66 | 0.81 | 0.91 | 0.72 |
| Info. Bottleneck | 0.65 | 0.62 | 0.71 | 0.50 | 0.58 | 0.55 | 0.56 | 0.46 | 0.64 | 0.78 | 0.80 | 0.71 |
| Attention | 0.79 | 0.73 | 0.80 | 0.63 | 0.65 | 0.57 | 0.52 | 0.49 | 0.69 | 0.82 | 0.88 | 0.74 |
| LIME | 0.54 | 0.48 | 0.40 | 0.39 | 0.56 | 0.56 | 0.65 | 0.46 | 0.52 | 0.75 | 0.76 | 0.68 |
| Random | 0.50 | 0.43 | 0.36 | 0.33 | 0.50 | 0.47 | 0.38 | 0.37 | 0.50 | 0.70 | 0.62 | 0.65 |
| Glass-box | 0.74 | 0.66 | 0.66 | 0.55 | 0.69 | 0.63 | 0.65 | 0.54 | 0.64 | 0.79 | 0.78 | 0.73 |
| MicroTransQuest | 0.88 | 0.81 | 0.88 | 0.70 | 0.84 | 0.80 | 0.89 | 0.70 | 0.82 | 0.89 | 0.96 | 0.82 |
+
+Table 2: AUC/AP scores, as well as accuracy at top-1 (A@1) and recall at top-K (R@K) for different rationale extraction methods on the test partition of MLQE-PE dataset. Best rationale extraction results are highlighted in bold. Attributions are computed with respect to the hidden states at layer 10.
+
+2. We also evaluate explanations for DA-based sentence-level models on the full subset of sentences that contain at least one word-level error.
+3. Finally, we evaluate explanations for HTER-based sentence-level models on the full subset of sentences that contain at least one word-level error.
+
+Interestingly, despite the discrepancy between the DA training objective and the word labels derived from post-editing, explanations for DA-based models achieve better accuracy. We report the results for (1) in the main body of the paper, while (2) and (3) are reported in Appendix B.
+
+We select three language pairs for our experiments: Estonian-English (Et-En), Romanian-English (Ro-En) and Nepali-English (Ne-En), which achieved the best sentence-level performance at the WMT2020 Shared Task. Table 1 shows statistics for the respective test sets. These three language pairs present very different conditions for the task. The sentence-level model for Ro-En has much stronger performance in terms of Pearson correlation with human judgements. Ne-En has substantially lower translation quality, with "BAD" words representing the majority class.
+
+# 4.4 QE Benchmarks
+
+We consider two benchmarks for word-level QE. On the one hand, we report results for a strong supervised model based on pre-trained representations from XLM-R, adapted to predict word-level binary labels derived from post-editing. To report the metrics presented in Section 4.1, we use the probability of the positive class as the attribution score. On the other hand, we consider a fully unsupervised approach, which, however, requires access to the neural MT model that was used to generate the translations.
+
+Black-box Supervised QE We use the word-level architecture available as part of the TransQuest toolkit (Ranasinghe et al., 2020b). Similarly to the sentence-level TransQuest model, it relies on the XLM-R-base pre-trained model fine-tuned for a token classification task. We use XLM-R-base to be consistent with the sentence-level setting.
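+
+To make this benchmark concrete, the following is a minimal sketch of a token-classification word-level QE model in this spirit, written with the HuggingFace transformers API; the example sentences and the exact input formatting are our assumptions, and the classification head shown here would still need to be fine-tuned on the word-level labels before its probabilities are meaningful.
+
+```python
+import torch
+from transformers import AutoModelForTokenClassification, AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
+model = AutoModelForTokenClassification.from_pretrained(
+    "xlm-roberta-base", num_labels=2)  # label 0 = "OK", label 1 = "BAD"
+
+# Source sentence and MT output are encoded as a single sequence pair.
+enc = tokenizer("Acesta este un exemplu .", "This is an example .",
+                return_tensors="pt", truncation=True)
+with torch.no_grad():
+    logits = model(**enc).logits             # (1, seq_len, 2)
+bad_probs = logits.softmax(dim=-1)[0, :, 1]  # per-subword probability of the "BAD" class,
+                                             # used as the attribution score in Section 4.1
+```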
+
+Glass-box Unsupervised QE Fomicheva et al. (2020b) propose to extract information from the MT system to predict translation quality in a fully unsupervised way. Following their work, we use log-probabilities from the neural MT model as attribution scores. The lower the log-probability corresponding to each word, the higher the chance that this word constitutes an error.
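+
+A minimal sketch of this scoring scheme, assuming an encoder-decoder NMT model with a HuggingFace-style interface that permits forced decoding of the MT output (the concrete model interface is an assumption):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def glassbox_attributions(mt_model, src_ids, tgt_ids):
+    """Force-decode the MT output and use negative token log-probabilities as
+    attribution scores: the less probable a target token is under the model,
+    the more likely it is to be a translation error."""
+    with torch.no_grad():
+        logits = mt_model(input_ids=src_ids,
+                          decoder_input_ids=tgt_ids[:, :-1]).logits   # (1, T-1, vocab)
+    log_probs = F.log_softmax(logits, dim=-1)
+    token_logp = log_probs.gather(-1, tgt_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
+    return -token_logp  # higher score = more likely error
+```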
+
+# 5 Results
+
+# 5.1 QE as Rationale Extraction
+
+Table 2 shows the performance of our approach with different rationale extraction methods, as well as SOTA word-level QE methods, on the MLQE-PE dataset. For the first three methods, we compute the attributions to the hidden states at each layer, select the best layer on the dev set, and report the results for this layer on the test set. First, our semi-supervised approach with all explanation methods substantially outperforms the random baseline.8
+
+
+Figure 2: Average attribution at each hidden layer on the toy task (left) and MLQE-PE Et-En dataset (right). Attributions are computed with the information bottleneck attribution method (Schulz et al., 2020).
+
+
+
+
+Figure 3: AUC score at each hidden layer for integrated gradients method.
+
+
+Figure 4: Example of Estonian-English translation with attributions to the source (left) and target (right) sentences computed using integrated gradients method for each hidden layer. The correct post-edited version of this translation is: Evald cannot believe that Pille is so attached to her.
+
+Among the different explanation methods, attention and integrated gradients achieve the best results. Second, the performance is comparable to or better than the glass-box QE benchmark, without requiring access to the neural MT model. For example, for Ro-En the AP scores achieved by the attention-based explanations and the glass-box word-level QE are 0.73 and 0.66, respectively. Third, the gap between the best-performing semi-supervised method and the supervised QE benchmark (MicroTransQuest) is the smallest for Ro-En, where the sentence-level QE model from which explanations are extracted is the strongest (see Table 1). Finally, on average, LIME-based explanations are substantially outperformed by the feature attribution methods. This agrees with our intuition that, for the translation task, where context plays a fundamental role, attribution to hidden states achieves much better performance than direct perturbation of input words.
+
+8Note that Ne-En presents a favourable setting for the proposed error detection methods, as most of the words in the data correspond to errors, as shown in Table 1.
+
+# 5.2 Analysis
+
+Feature Attribution per Layer Figure 2 shows attributions to tokens of different types across hidden layers. On the left, we show the results for a toy task, where we artificially introduced easy-to-detect errors in human translations and trained a QE model with near-perfect performance to predict whether a given sentence contains errors (see Appendix A). On the right, we show the results for the MLQE-PE Et-En test set. Similarly to the toy task, we observe that in the later layers the tokens corresponding to translation errors receive higher attribution scores. However, in the toy dataset, the source tokens have very low attributions.
+
+
+Figure 5: Frequency of the tokens with highest attribution in the neural MT training corpus. Y-axis shows the frequency of the source (left) and target (right) tokens with the highest attribution scores in low-quality MT sentences (red) and high-quality MT sentences (blue). X-axis corresponds to the hidden layers.
+
+
+
+Here, in contrast, the model appears to be relying on the source as well as the target. This aligns well with human evaluation, where both source and target sentences need to be considered in order to correctly determine translation quality.
+
+Figure 3 shows performance across layers for the integrated gradients method. As expected, the layers that assign the highest attribution to the bad tokens (layers 9-11) are the ones that achieve the best performance. This finding is consistent across language pairs and attribution methods. Interestingly, it is also consistent with the findings of Voita et al. (2019), who show that models trained with the MLM objective encode context information in the intermediate layers, partially discarding information on the identity of the input tokens, which is then recovered in the last layers.
+
+So far we have studied the behavior of the QE models on sentences that contain errors. We now look at the pattern in the attribution scores for sentences which were assigned high quality by the model. We hypothesize that higher scores will be assigned to the words that are "easy" to translate. To test this, we select low-quality and high-quality sentences (sentences with predicted scores below the 25th percentile and above the 75th percentile, respectively). Figure 5 shows the average frequency with which the words occur in the neural MT training dataset. The red line corresponds to the words with the highest attribution for high-quality MT sentences, and the blue line to the words with the highest attribution for low-quality MT sentences. The first plot corresponds to the source tokens and the second plot to the target tokens. As shown in the plots, when the model predicts high quality, the most frequent words receive the highest attribution as the information progresses through the network. By contrast, when low quality is predicted by the sentence-level model, the least frequent words receive the highest attribution.
+
+Qualitative Analysis Figure 4 shows an example. Attributions are shown for sentencepiece tokens, which is the representation used internally by XLM-R. Interestingly, both translation errors ("You" and "Pilate") and the corresponding words in the source ("Evald" and "Pille") receive higher attribution scores.
+
+# 6 Conclusion
+
+In this work, we propose a new semi-supervised approach to word-level QE based on feature attribution methods. We show that, for well-performing sentence-level models, our results approach the performance of supervised methods. We also propose QE as rationale extraction as a new benchmark for plausibility-based evaluation of explainability methods. We hope this work will encourage further research on improving the efficiency of word-level QE models with lightly supervised methods. This work opens many directions for future research: from improving the achieved results by tuning linear weights to combine attributions to hidden states at different layers, to exploring different underlying architectures and sentence-level training objectives.
+
+# Acknowledgements
+
+This work was supported by funding from the Bergamot project (EU H2020 Grant No. 825303).
+
+# References
+
+Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020. A diagnostic study of explainability techniques for text classification. arXiv preprint arXiv:2009.13295.
+John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In Proceedings of the 20th International Conference on Computational Linguistics, Geneva, Switzerland.
+Kamil Bujel, Helen Yannakoudakis, and Marek Rei. 2021. Zero-shot sequence labeling for transformer-based sentence classifiers. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 195–205, Online. Association for Computational Linguistics.
+Ilias Chalkidis, Manos Fergadiotis, Dimitrios Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos, and Prodromos Malakasiotis. 2021. Paragraph-level rationale extraction through regularization: A case study on european court of human rights cases. arXiv preprint arXiv:2103.13084.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
+Nicola De Cao, Michael Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. arXiv preprint arXiv:2004.14992.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models. arXiv preprint arXiv:1911.03429.
+Marina Fomicheva, Shuo Sun, Erick Fonseca, Frédéric Blain, Vishrav Chaudhary, Francisco Guzmán, Nina Lopatina, Lucia Specia, and André FT Martins. 2020a. Mlqe-pe: A multilingual quality estimation and post-editing dataset. arXiv preprint arXiv:2010.04480.
+Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020b. Unsupervised quality estimation for neural machine translation. arXiv preprint arXiv:2005.10608.
+
+Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. arXiv preprint arXiv:2104.14478.
+Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2015. Can machine translation systems be evaluated by the crowd alone? Natural Language Engineering, pages 1-28.
+Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The flores evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6100-6113.
+Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? CoRR, abs/2004.03685.
+Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. arXiv preprint arXiv:1902.10186.
+Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and Byron C Wallace. 2020. Learning to faithfully rationalize by construction. arXiv preprint arXiv:2005.00115.
+Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Tasks Papers, pages 562-568, Copenhagen, Denmark.
+Dongjun Lee. 2020. Two-phase cross-lingual language model fine-tuning for machine translation quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 1024–1028, Online. Association for Computational Linguistics.
+Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117, Austin, Texas. Association for Computational Linguistics.
+Zachary Chase Lipton. 2016. The mythos of model interpretability. CoRR, abs/1606.03490.
+Arle Lommel, Hans Uszkoreit, and Aljoscha Burchardt. 2014. Multidimensional quality metrics (MQM): A framework for declaring and describing translation quality metrics. Tradumàtica, (12):455-463.
+Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020a. Transquest at wmt2020: Sentence-level direct assessment. arXiv preprint arXiv:2010.05318.
+
+Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020b. Transquest: Translation quality estimation with cross-lingual transformers. arXiv preprint arXiv:2011.01536.
+Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144.
+Matiss Rikters and Mark Fishel. 2017. Confidence through attention. arXiv preprint arXiv:1710.03743.
+Karl Schulz, Leon Sixt, Federico Tombari, and Tim Landgraf. 2020. Restricting the flow: Information bottlenecks for attribution. arXiv preprint arXiv:2001.00396.
+Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? arXiv preprint arXiv:1906.03731.
+Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
+Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine translation in the Americas, volume 200. CiteSeer.
+Lucia Specia, Frédéric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzmán, and André F. T. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In Proceedings of the Fifth Conference on Machine Translation, pages 743-764, Online. Association for Computational Linguistics.
+Lucia Specia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009. Estimating the sentence-level quality of machine translation systems. In Proceedings of the 13th Annual Conference of the European Association for Machine Translation, pages 28-35, Barcelona, Spain.
+Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319-3328. PMLR.
+Naftali Tishby and Noga Zaslavsky. 2015. Deep learning and the information bottleneck principle. In 2015 IEEE Information Theory Workshop (ITW), pages 1-5. IEEE.
+Yi-Lin Tuan, Ahmed El-Kishky, Adithya Renduchintala, Vishrav Chaudhary, Francisco Guzmán, and Lucia Specia. 2021. Quality estimation without human-labeled data. arXiv preprint arXiv:2102.04020.
+
+Elena Voita, Rico Sennrich, and Ivan Titov. 2019. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives. arXiv preprint arXiv:1909.01380.
+Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. arXiv preprint arXiv:1910.13294.
+Zeyu Yun, Yubei Chen, Bruno A Olshausen, and Yann LeCun. 2021. Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors. arXiv preprint arXiv:2103.15949.
+Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using "annotator rationales" to improve machine learning for text categorization. In Human language technologies 2007: The conference of the North American chapter of the association for computational linguistics; proceedings of the main conference, pages 260-267.
+
+# A Toy dataset
+
+We devise a toy task to test feature attribution performance for word-level QE. We artificially introduce easy-to-detect errors in human translations and train a QE model with near-perfect performance to predict the presence or absence of such errors in a sentence. Specifically, we sample 10K/1K/1K sentence pairs from the Es-En News-Commentary dataset (train/dev/test). Next, we artificially inject errors into half of the sentences at a rate of 0.1 using the following operations: inserting, deleting or replacing a random word, or swapping two randomly selected words.
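+
+A minimal sketch of this error injection procedure; the interpretation of the 0.1 rate as roughly one operation per ten tokens and the use of a reference vocabulary for insertions and replacements are our assumptions about the details.
+
+```python
+import random
+
+def inject_errors(tokens, vocab, rate=0.1):
+    """Corrupt a tokenized translation with synthetic errors: insert, delete or
+    replace a random word, or swap two randomly chosen words."""
+    tokens = list(tokens)
+    for _ in range(max(1, int(rate * len(tokens)))):
+        op = random.choice(["insert", "delete", "replace", "swap"])
+        i = random.randrange(len(tokens))
+        if op == "insert":
+            tokens.insert(i, random.choice(vocab))
+        elif op == "delete" and len(tokens) > 1:
+            del tokens[i]
+        elif op == "replace":
+            tokens[i] = random.choice(vocab)
+        elif len(tokens) > 1:  # swap
+            j = random.randrange(len(tokens))
+            tokens[i], tokens[j] = tokens[j], tokens[i]
+    return tokens
+```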
+
+We fine-tune an XLM-R-base model for a sentence-level binary classification task where sentences that contain errors are considered as positive class, and sentences that do not contain errors are considered as negative class. The F1-score of this sentence-level classifier is 0.97. This is expected as the task is very easy.
+
+# B Performance of Rationale Extraction Methods on HTER Data
+
+Tables 4 and 5 show the performance of the proposed methods on the full subset of sentences that contain at least one word-level error for sentence-level QE models trained with HTER and DA ground truth scores. Pearson correlation for both types of models is shown in Table 3. Interestingly, even though for Ro-En and Et-En the performance of sentence-level models is near identical, extracted rationales are more accurate for the model trained with DA judgements.
+
+| | Ro-En | Et-En | Ne-En |
| Pearson r (DA) | 0.84 | 0.66 | 0.66 |
| Pearson r (HTER) | 0.82 | 0.62 | 0.51 |
| Num. sentences (all data) | 1,000 | 1,000 | 1,000 |
| Num. sentences (with errors) | 714 | 889 | 945 |
| Error rate (all data) | 0.21 | 0.28 | 0.65 |
| Error rate (with errors) | 0.28 | 0.31 | 0.65 |
+
+Table 3: Statistics for MLQE-PE test sets: performance of sentence-level QE models (Pearson r), total number of sentences with at least one translation error, and the error rate in the full test set and in the subset of sentences with at least one error.
+
+| Method | Romanian-English | | | | Estonian-English | | | | Nepalese-English | | | |
| | AUC | AP | A@1 | R@K | AUC | AP | A@1 | R@K | AUC | AP | A@1 | R@K |
| Gradients | 0.73 | 0.65 | 0.72 | 0.54 | 0.64 | 0.56 | 0.61 | 0.45 | 0.66 | 0.81 | 0.90 | 0.71 |
| Info. Bottleneck | 0.59 | 0.49 | 0.50 | 0.36 | 0.54 | 0.47 | 0.42 | 0.37 | 0.62 | 0.76 | 0.78 | 0.69 |
| Attention | 0.76 | 0.65 | 0.67 | 0.53 | 0.63 | 0.51 | 0.45 | 0.41 | 0.69 | 0.81 | 0.87 | 0.73 |
| LIME | 0.51 | 0.39 | 0.29 | 0.29 | 0.55 | 0.49 | 0.54 | 0.39 | 0.52 | 0.73 | 0.72 | 0.66 |
| Random | 0.50 | 0.38 | 0.27 | 0.25 | 0.50 | 0.41 | 0.34 | 0.31 | 0.50 | 0.70 | 0.63 | 0.64 |
| Glassbox | 0.73 | 0.59 | 0.55 | 0.48 | 0.70 | 0.58 | 0.59 | 0.48 | 0.64 | 0.78 | 0.77 | 0.72 |
| MicroTransQuest | 0.86 | 0.74 | 0.76 | 0.62 | 0.83 | 0.74 | 0.79 | 0.64 | 0.82 | 0.89 | 0.96 | 0.82 |
+
+Table 4: AUC/AP scores, as well as accuracy at top-1 (A@1) and recall at top-K (R@K) for different rationale extraction methods on the MLQE-PE test set on the subset of sentences that contain at least one error for the sentence-level QE models trained to predict DA judgements.
+
+| Method | Romanian-English | | | | Estonian-English | | | | Nepalese-English | | | |
| | AUC | AP | A@1 | R@K | AUC | AP | A@1 | R@K | AUC | AP | A@1 | R@K |
| Gradients | 0.69 | 0.59 | 0.61 | 0.48 | 0.66 | 0.59 | 0.66 | 0.49 | 0.64 | 0.77 | 0.82 | 0.70 |
| Info. Bottleneck | 0.53 | 0.43 | 0.38 | 0.32 | 0.58 | 0.50 | 0.47 | 0.38 | 0.57 | 0.73 | 0.68 | 0.67 |
| Attention | 0.74 | 0.61 | 0.59 | 0.49 | 0.69 | 0.59 | 0.58 | 0.48 | 0.66 | 0.78 | 0.82 | 0.72 |
| LIME | 0.61 | 0.47 | 0.37 | 0.35 | 0.64 | 0.56 | 0.59 | 0.45 | 0.53 | 0.74 | 0.76 | 0.68 |
| Random | 0.50 | 0.38 | 0.27 | 0.25 | 0.50 | 0.41 | 0.33 | 0.32 | 0.50 | 0.70 | 0.63 | 0.64 |
| Glassbox | 0.73 | 0.59 | 0.55 | 0.48 | 0.70 | 0.58 | 0.59 | 0.48 | 0.64 | 0.78 | 0.77 | 0.72 |
| MicroTransQuest | 0.86 | 0.74 | 0.76 | 0.62 | 0.83 | 0.74 | 0.79 | 0.64 | 0.82 | 0.89 | 0.96 | 0.82 |
+
+Table 5: AUC/AP scores, as well as accuracy at top-1 (A@1) and recall at top-K (R@K) for different rationale extraction methods on the MLQE-PE test set on the subset of sentences that contain at least one error for the sentence-level QE models trained to predict HTER.
\ No newline at end of file
diff --git a/translationerrordetectionasrationaleextraction/images.zip b/translationerrordetectionasrationaleextraction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6ccae12a1ea1175a4a9eee82e79ad9e10287a984
--- /dev/null
+++ b/translationerrordetectionasrationaleextraction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44dcd673a9ed988a0c719c0c8c7719737a237b8dab2c8f99f7c6a01f557c0eaf
+size 429418
diff --git a/translationerrordetectionasrationaleextraction/layout.json b/translationerrordetectionasrationaleextraction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4d8fa19d1d918c88b416cd0ce81568ff854c5de3
--- /dev/null
+++ b/translationerrordetectionasrationaleextraction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c04f9210e9124a6c5f41f230c5dd48a742c02a582c94412a3f01e163faed030
+size 289889
diff --git a/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/f08d7529-1083-4cf2-b571-926579b421d8_content_list.json b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/f08d7529-1083-4cf2-b571-926579b421d8_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f861beefb2945b3f72e1daf3d0261aa79fad63b7
--- /dev/null
+++ b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/f08d7529-1083-4cf2-b571-926579b421d8_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b91dde2e47d214e1c40dc3acf5b762ab611db9dc55ac8c2de20f258964b31a2
+size 42455
diff --git a/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/f08d7529-1083-4cf2-b571-926579b421d8_model.json b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/f08d7529-1083-4cf2-b571-926579b421d8_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..40e886b424e4c1b70c4526b7ee0dc30d8a4ce14c
--- /dev/null
+++ b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/f08d7529-1083-4cf2-b571-926579b421d8_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4da418131f14dcd083656cb718853992f9691e81b354aa2ceaa62b938d38382
+size 52881
diff --git a/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/f08d7529-1083-4cf2-b571-926579b421d8_origin.pdf b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/f08d7529-1083-4cf2-b571-926579b421d8_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7a582f6abc344ff803afdb6d36c214b8e8bd15d9
--- /dev/null
+++ b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/f08d7529-1083-4cf2-b571-926579b421d8_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74c06d315848762710b4896ad1021abcda7b571e3a14a46f6db58fb8bcaf1c4b
+size 675322
diff --git a/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/full.md b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c9c068d4bee7b58913086912659639488db2541b
--- /dev/null
+++ b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/full.md
@@ -0,0 +1,173 @@
+# Two Birds with One Stone: Unified Model Learning for Both Recall and Ranking in News Recommendation
+
+Chuhan Wu$^{\dagger}$, Fangzhao Wu$^{\ddagger *}$, Tao Qi$^{\dagger}$, Yongfeng Huang$^{\dagger}$
+
+$^{\dagger}$ Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
+
+$^{\ddagger}$ Microsoft Research Asia, Beijing 100080, China
+
+{wuchuhan15,wufangzhao,taoqi.qt}@gmail.com
+
+yfhuang@tsinghua.edu.cn
+
+# Abstract
+
+Recall and ranking are two critical steps in personalized news recommendation. Most existing news recommender systems conduct personalized news recall and ranking separately with different models. However, maintaining multiple models leads to high computational cost and poses great challenges to meeting the online latency requirement of news recommender systems. In order to handle this problem, in this paper we propose UniRec, a unified method for recall and ranking in news recommendation. In our method, we first infer user embedding for ranking from the historical news click behaviors of a user using a user encoder model. Then we derive the user embedding for recall from the obtained user embedding for ranking by using it as the attention query to select a set of basis user embeddings which encode different general user interests and synthesize them into a user embedding for recall. The extensive experiments on benchmark dataset demonstrate that our method can improve both efficiency and effectiveness for recall and ranking in news recommendation.
+
+# 1 Introduction
+
+News recommendation techniques are widely used by many online news websites and Apps to provide personalized news services (Wu et al., 2020b). Recall and ranking are two critical steps in personalized news recommender systems (Karimi et al., 2018; Wu et al., 2021a). As shown in Fig. 1, when a user visits a news platform, the recommender system first recalls a set of candidate news from a large-scale news pool, and then ranks candidate news for personalized news display (Wu et al., 2020b). Both news recall and ranking have been widely studied (Elkahky et al., 2015; Liu et al., 2019, 2020; Wu et al., 2020a; Wang et al., 2020; Wu et al., 2021c; Qi et al., 2021a,b,c,d). In online news recommender systems, recall and ranking are
+
+
+Figure 1: A typical pipeline of news recommendation.
+
+usually conducted separately with different models, as shown in Fig. 1. However, maintaining separate models for news recall and ranking in large-scale news recommender systems usually leads to heavy computation and memory cost (Tan et al., 2020), and it may be difficult to meet the latency requirement of online news services.
+
+Learning a unified model for personalized news recall and ranking would be greatly beneficial for alleviating the computation load of news recommender systems. However, it is a non-trivial task because the goals of recall and ranking are not the same (Covington et al., 2016; Malkov and Yashunin, 2018). Ranking usually aims to accurately rank candidates based on their relevance to user interests (Wu et al., 2019b; Ge et al., 2020; Wu et al., 2021b; Wang et al., 2020), while recall mainly aims to form a candidate pool that can comprehensively cover user interests (Liu et al., 2020; Qi et al., 2021d). Thus, the model needs to adapt to the different goals of recall and ranking without hurting their performance.
+
+In this paper, we propose a news recommendation method named UniRec, which can learn a unified user model for personalized news recall and ranking. In our method, we first encode news into embeddings with a news encoder, and learn a user embedding for ranking from the embeddings of historical clicked news. We further derive the user embedding for recall by using the user embedding for ranking as the attention query to select a
+
+
+Figure 2: The framework of UniRec.
+
+set of basis user embeddings that encode different general user interest aspects and synthesize them into a user embedding for recall. In the test phase, we only use the basis user embeddings with top attention weights to compose the user embedding for recall to filter noisy user interests. Extensive experiments on a real-world dataset demonstrate that our method can conduct personalized news recall and ranking with a unified model and meanwhile achieve promising recall and ranking performance.
+
+# 2 Methodology
+
+The overall framework of UniRec is shown in Fig. 2. We first learn a user embedding for ranking from the user's historical clicked news. We then derive a user embedding for recall from the user embedding for ranking and a set of basis user embeddings that encode different general interests. Their details are introduced as follows.
+
+# 2.1 Ranking for News Recommendation
+
+The ranking part aims to rank the candidate news in a small candidate list according to user interests. Following (Wu et al., 2020b), UniRec uses a news encoder that learns news embeddings from news texts and a user encoder that learns a user interest embedding for ranking from the embeddings of clicked news. The candidate news embedding and the user embedding for ranking are used to compute a click score for personalized news ranking. More specifically, we denote the $N$ historical clicked news of a user $u$ as $[D_1, D_2, \dots, D_N]$. These clicked news are encoded into a sequence of news embeddings, denoted as $[\mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_N]$. The user encoder takes this sequence as input and outputs a user embedding $\mathbf{u}_{ra}$ for ranking. For a candidate news $D_i^c$, we use the news encoder to obtain its embedding $\mathbf{r}_i^c$. We follow (Okura et al., 2017) to compute the probability score of the user $u$ clicking on the candidate news $D_{i}^{c}$ via inner product, i.e., $\hat{y}_{ra}^{i} = \mathbf{u}_{ra}\cdot \mathbf{r}_{i}^{c}$. The click scores of the news in a candidate list are used for personalized ranking. Following (Wu et al., 2019c), we use multi-head self-attention networks in both the news and user encoders to capture the contexts of words and click behaviors, respectively. In addition, following (Devlin et al., 2019) we add position embeddings to capture the orders of words and behaviors.
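+
+A minimal sketch of this ranking computation, assuming news_encoder and user_encoder modules that return fixed-size embeddings (their internals, multi-head self-attention with position embeddings, are omitted here):
+
+```python
+import torch
+
+def ranking_scores(news_encoder, user_encoder, clicked_news, candidate_news):
+    """clicked_news: (N, L) token ids of the user's click history;
+    candidate_news: (C, L) token ids of the candidates in one impression."""
+    r = news_encoder(clicked_news)        # (N, d) clicked-news embeddings
+    u_ra = user_encoder(r.unsqueeze(0))   # (1, d) user embedding for ranking
+    r_c = news_encoder(candidate_news)    # (C, d) candidate news embeddings
+    return r_c @ u_ra.squeeze(0)          # (C,) inner-product click scores
+```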
+
+# 2.2 Recall for News Recommendation
+
+The recall part aims to select candidate news from a large news pool based on their relevance to user interests. To efficiently exploit user interest information for personalized news recall, we take the user embedding for ranking as input instead of rebuilding user interest representations from original user click behaviors. However, since the goals of ranking and recall are not the same (Kang and McAuley, 2019), the user embedding for ranking may not be suitable for news recall. Thus, we propose a method to distill a user embedding for recall from the user embedding for ranking. More specifically, we maintain a basis user embedding memory that encodes different general user interest aspects. We denote the $M$ basis user embeddings in the memory as $[\mathbf{v}_1,\mathbf{v}_2,\dots,\mathbf{v}_M]$ . We use the user embedding for ranking as the attention query to select basis user embeddings. We denote the attention weight of the $i$ -th basis user embedding as $\alpha_{i}$ which is computed as:
+
+$$
+\alpha_{i} = \frac{\exp\left(\mathbf{u}_{ra} \cdot \mathbf{w}_{i}\right)}{\sum_{j=1}^{M} \exp\left(\mathbf{u}_{ra} \cdot \mathbf{w}_{j}\right)}, \tag{1}
+$$
+
+where the parameters $\mathbf{w}_i$ serve as the attention keys. Different from additive attention (Yang et al., 2016), where the attention keys and values are equivalent, in our approach the keys (i.e., $\mathbf{w}_i$) are different from the values (i.e., $\mathbf{v}_i$). This is because we expect the basis user embeddings to lie in a different space from the user embeddings for ranking, to better adapt to the recall task. The basis user embeddings are further synthesized into a unified user embedding $\mathbf{u}_{re}$ for recall by $\mathbf{u}_{re} = \sum_{i=1}^{M} \alpha_i \mathbf{v}_i$. We use a news encoder that is shared with the ranking part to obtain the embedding $\mathbf{r}^c$ of each candidate news $D^c$ in the news pool. The final recall relevance score $\hat{y}_{re}$ between user interest and candidate news is computed by $\hat{y}_{re} = \mathbf{u}_{re} \cdot \mathbf{r}^c$.
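+
+A minimal sketch of this derivation as a small module; num_basis=20 mirrors the value used in the experiments below, while the initialization and dimensionality are placeholders rather than details from the paper.
+
+```python
+import torch
+import torch.nn as nn
+
+class RecallHead(nn.Module):
+    """Derive the user embedding for recall from the user embedding for ranking
+    via attention over M basis user embeddings (Eq. 1)."""
+    def __init__(self, dim, num_basis=20):
+        super().__init__()
+        self.keys = nn.Parameter(torch.randn(num_basis, dim))    # attention keys w_i
+        self.values = nn.Parameter(torch.randn(num_basis, dim))  # basis user embeddings v_i
+
+    def forward(self, u_ra):                                 # u_ra: (batch, dim)
+        alpha = torch.softmax(u_ra @ self.keys.t(), dim=-1)  # (batch, M) attention weights
+        return alpha @ self.values                           # u_re: (batch, dim)
+```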
+
+# 2.3 Model Training
+
+Then we introduce the model training details of UniRec. We use a two-stage model training strategy to first learn the ranking part and then learn the recall part. Following prior works (Huang et al., 2013; Wu et al., 2019b,c), we use negative sampling techniques to construct samples for contrastive model learning (Oord et al., 2018). For learning the ranking part, we use clicked news in each impression as positive samples, and we randomly sample $K$ non-clicked news that are displayed in the same impression as negative samples. The loss function is formulated as follows:
+
+$$
+\mathcal{L}_{ra} = -\log\left[\frac{\exp(\hat{y}_{ra}^{+})}{\exp(\hat{y}_{ra}^{+}) + \sum_{i=1}^{K} \exp(\hat{y}_{ra}^{i-})}\right] \tag{2}
+$$
+
+where $\hat{y}_{ra}^{+}$ and $\hat{y}_{ra}^{i-}$ denote the predicted click scores of a positive sample and the corresponding $i$-th negative sample, respectively. By optimizing this loss function, the parameters of the news and user encoders can be tuned. Motivated by (Ying et al., 2018), we fix the news encoder after the ranking model converges. Then, to learn the recall part, we also use the clicked news of each user as positive samples, while we randomly select $T$ non-clicked news from the entire news set as negative samples, which aims to simulate the news recall scenario. The loss function for training the recall part is as follows:
+
+$$
+\mathcal{L}_{re} = -\log\left[\frac{\exp\left(\hat{y}_{re}^{+}\right)}{\exp\left(\hat{y}_{re}^{+}\right) + \sum_{i=1}^{T} \exp\left(\hat{y}_{re}^{i-}\right)}\right] \tag{3}
+$$
+
+where $\hat{y}_{re}^{+}$ and $\hat{y}_{re}^{i - }$ represent the predicted recall relevance scores of a positive sample and the corresponding $i$ -th negative sample, respectively.
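+
+Both objectives are sampled-softmax (contrastive) losses over one positive and several sampled negatives, which can be sketched as follows; the batching convention is an assumption.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def sampled_softmax_loss(pos_scores, neg_scores):
+    """pos_scores: (batch,) scores of clicked news; neg_scores: (batch, K) scores
+    of sampled negatives. Equivalent to Eqs. (2) and (3) averaged over the batch."""
+    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)  # (batch, 1 + K)
+    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
+    return F.cross_entropy(logits, targets)  # the positive sample is always at index 0
+```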
+
+However, not all basis user embeddings are relevant to the interests of a user. Thus, motivated by Principal Component Analysis (PCA), in the test phase we propose to only use the top $P$ basis user embeddings with the highest attention weights to compose the user embedding for recall. We denote these basis user embeddings as $[\mathbf{v}_{t_1},\mathbf{v}_{t_2},\dots,\mathbf{v}_{t_P}]$ . We re-normalize their attention weights as follows:
+
+$$
+\alpha_{t_{i}} = \frac{\exp\left(\alpha_{t_{i}}\right)}{\sum_{j=1}^{P} \exp\left(\alpha_{t_{j}}\right)}. \tag{4}
+$$
+
+The user embedding $\mathbf{u}_{re}$ for recall is built by $\mathbf{u}_{re} = \sum_{i=1}^{P} \alpha_{t_i} \mathbf{v}_{t_i}$ , which can attend more to the major interests of a user and filter noisy basis user embeddings for better news recall.
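+
+A sketch of this test-time composition with the top-$P$ basis user embeddings following Eq. (4); the tensor shapes are assumptions.
+
+```python
+import torch
+
+def recall_embedding_top_p(alpha, basis, p=5):
+    """alpha: (batch, M) attention weights from Eq. (1); basis: (M, d) basis user
+    embeddings. Keep the P largest weights, re-normalize them with a softmax
+    (Eq. 4) and return the weighted sum as the user embedding for recall."""
+    top_alpha, top_idx = alpha.topk(p, dim=-1)                  # (batch, P)
+    weights = torch.softmax(top_alpha, dim=-1)                  # re-normalized weights
+    return torch.einsum("bp,bpd->bd", weights, basis[top_idx])  # (batch, d)
+```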
+
+# 2.4 Complexity Analysis
+
+We provide some discussion of the computational complexity. In existing news recommendation methods that conduct recall and ranking with separate models, the computational complexity of learning the user embeddings for recall and for ranking is at least $O(N)$ in both cases, because they need to encode the entire user behavior sequence. UniRec has the same complexity in learning the user embedding for ranking, but the complexity of deriving the user embedding for recall is reduced to $O(M)$, where $M$ is usually much smaller than $N$. In addition, the attention network used for synthesizing the user embedding for recall may also be lighter-weight than the user encoder. Thus, the total computational complexity can be effectively reduced.
+
+# 3 Experiments
+
+# 3.1 Dataset and Experimental Settings
+
+We conduct experiments on a large-scale public dataset named MIND (Wu et al., 2020b) for news recommendation. It contains the news impression logs of 1 million users on Microsoft News collected over 6 weeks. The logs in the first five weeks are used for training and validation, and the remaining logs are used for test. The detailed statistics of MIND are shown in Table 1.
+
+| # Users | 1,000,000 | # News | 161,013 |
| # Impressions | 15,777,377 | # Click behaviors | 24,155,470 |
| Avg. news title len. | 11.52 | # Categories | 20 |
+
+Table 1: Statistics of the MIND dataset.
+
+In our experiments, following (Wu et al., 2020b) we use news titles to learn news embeddings. The number of basis user embeddings is 20, and they are randomly initialized. The hyperparameter $P$ that controls the number of basis user embeddings for composing the user embedding for recall in the test phase is 5. The number of negative samples associated with each positive one is 4 and 200 for the ranking and recall tasks, respectively. Adam (Kingma and Ba, 2015) is used as the optimizer. The batch size is 32. These hyperparameters are selected on the validation set. Following (Wu et al., 2020b), we use AUC, MRR, nDCG@5 and nDCG@10 to evaluate news ranking performance. In addition, we use the recall rate of the top 100, 200, 500 and 1000 ranked news to evaluate news recall performance. We repeat every experiment 5 times.
+
+| Methods | AUC | MRR | nDCG@5 | nDCG@10 |
| EBNR | 66.22±0.17 | 31.97±0.14 | 34.89±0.17 | 40.49±0.19 |
| DKN | 65.61±0.20 | 31.58±0.17 | 34.32±0.19 | 40.04±0.22 |
| NPA | 67.62±0.14 | 32.69±0.13 | 35.52±0.15 | 41.33±0.17 |
| NAML | 67.45±0.12 | 32.48±0.09 | 35.39±0.10 | 41.19±0.14 |
| NRMS | 68.24±0.09 | 33.38±0.10 | 36.34±0.10 | 42.12±0.13 |
| UniRec | 68.41±0.11 | 33.50±0.10 | 36.47±0.12 | 42.26±0.14 |
+
+Table 2: Ranking performance of different methods.
+
+| Methods | R@100 | R@200 | R@500 | R@1000 |
| YoutubeNet | 1.395±0.034 | 2.284±0.039 | 4.171±0.042 | 6.867±0.037 |
| Pinnersage | 1.431±0.020 | 2.340±0.018 | 4.252±0.017 | 6.927±0.019 |
| Octopus | 1.426±0.026 | 2.392±0.029 | 4.344±0.031 | 7.188±0.029 |
| UniRec(all) | 1.443±0.023 | 2.402±0.027 | 5.022±0.025 | 8.294±0.026 |
| UniRec(top) | 1.516±0.026 | 2.531±0.024 | 5.142±0.027 | 8.485±0.026 |
+
+Table 3: Recall performance of different methods.
+
+# 3.2 Performance Evaluation
+
+We first compare the ranking performance of UniRec with several baseline methods, including: (1) EBNR (Okura et al., 2017), GRU (Cho et al., 2014) network for user interest modeling in news recommendation; (2) DKN (Wang et al., 2018), deep knowledge network for news recommendation; (3) NPA (Wu et al., 2019b), news recommendation with personalized attention; (4) NAML (Wu et al., 2019a), news recommendation with attentive multi-view learning; (5) NRMS (Wu et al., 2019c), news recommendation with multi-head self-attention. The ranking performance of different methods is shown in Table 2. We find that UniRec outperforms several compared baseline methods like NAML and NPA. This may be because self-attention has stronger ability in modeling news and user interests. In addition, UniRec also slightly outperforms its basic model NRMS. This is because UniRec can capture the orders of words and behaviors via position embedding.
+
+In the news recall task, we compare UniRec with top basis user embeddings (denoted as UniRec(top)) with the following baseline methods: (1) YoutubeNet (Covington et al., 2016), using the average of clicked news embeddings for recall; (2) Pinnersage (Pal et al., 2020), an item recall method based on hierarchical clustering; (3) Octopus (Liu et al., 2020), learning an elastic number of user embeddings for item recall; (4) UniRec(all), a variant of UniRec that uses all basis user embeddings to compose the user embedding for recall.
+
+We show the recall performance of different methods in Table 3. We find that YoutubeNet performs worse than the other recall methods. This may be because different user behaviors have different importance in user interest modeling, and simply averaging their embeddings may be suboptimal. In addition, both UniRec(top) and UniRec(all) outperform the other baseline methods. This is because our approach can exploit the user interest information inferred from the ranking module to enhance news recall. In addition, our approach is a unified model for both recall and ranking, which has better efficiency in online systems than the other methods. Besides, UniRec(top) outperforms its variant UniRec(all). This may be because selecting the basis user embeddings with the top attention weights yields more accurate user interest embeddings by attending to major user interests and filtering out noisy ones. The above results validate the effectiveness of our method in both news ranking and recall.
+
+# 3.3 Case Study
+
+We verify the effectiveness of UniRec in news recall via several case studies. Fig. 3 shows the clicked news of a random user and several top news recalled by UniRec. From the user's clicked news, we can infer that this user may be interested in finance, sports and TV shows. We find that the recall results of UniRec cover the interest categories of the user's clicked news while also retaining some diversity. This shows that UniRec can generate accurate and diverse personalized news recall results.
+
+| | Category | Title |
| Clicked News | Finance | Chipotle customers say the chain is charging them hundreds of dollars in fake orders |
| | Sports | Every touchdown from every game in week 9 |
| | TV | fresh off the boat canceled after six seasons |
| UniRec Recall | Sports | The Patriots opened with a grinding 16-play drive in which nearly everything went right |
| | Finance | Dean foods files for bankruptcy |
| | TV | Viral Wheel of Fortune Contestant and His Wife Clarify Hilarious 'Loveless Marriage' Intro |
| | TV | 8 of the best and 8 of the worst tv shows that got canceled this year, so far |
| | Sports | Browns, Steelers brawl at end of cleveland's 21-7 win |
+
+
+Figure 3: The news clicked by a randomly sampled user and the top news recalled by UniRec.
+
+
+Figure 4: Influence of the basis user embedding number.
+Figure 5: Influence of the hyperparameter $P$ .
+
+# 3.4 Hyperparameter Analysis
+
+Finally, we study the influence of two important hyperparameters in our UniRec method: the total number $M$ of basis user embeddings and the number $P$ of basis user embeddings used to compose the user embedding for recall. We first set $P = M$ and tune the value of $M$. The recall performance is shown in Fig. 4. We find the performance is suboptimal when $M$ is too small, which may be because diverse user interests cannot be covered by only a few basis user embeddings. However, the performance also degrades when $M$ is large.
+
+This may be because it is difficult to accurately select informative basis user embeddings for user interest modeling. In addition, the computation and memory costs also increase. Thus, we set $M$ to a medium value (i.e., 20) that yields the best performance. We then tune the value of $P$ under $M = 20$ . The results are shown in Fig. 5. We find the performance is suboptimal when $P$ is very small. This is intuitive because the user interests cannot be fully covered. However, the performance also declines when $P$ is relatively large. This may be because basis user embeddings with relatively low attention weights are redundant or even noisy for user interest modeling. Thus, we choose to use 5 basis user embeddings to compose the user embedding for recall.
+
+# 4 Conclusion
+
+In this paper, we present a unified approach for recall and ranking in news recommendation. In our method, we first infer a user embedding for ranking from historical news click behaviors via a user encoder model. Then we derive a user embedding for recall from the obtained user embedding for ranking by regarding it as attention query to select a set of basis user embeddings that encode different general user interests. Extensive experiments on a benchmark dataset validate the effectiveness of our approach in both news ranking and recall.
+
+# Acknowledgments
+
+This work was supported by the National Natural Science Foundation of China under Grant numbers U1936208, U1936216, and 61862002, and the research initiation project of Zhejiang Lab (No. 2020LC0PI01).
+
+# References
+
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.
+Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP, pages 1724-1734.
+Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for youtube recommendations. In RecSys., pages 191-198. ACM.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*, pages 4171–4186.
+Ali Mamdouh Elkahky, Yang Song, and Xiaodong He. 2015. A multi-view deep learning approach for cross domain user modeling in recommendation systems. In WWW, pages 278-288.
+Suyu Ge, Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2020. Graph enhanced representation learning for news recommendation. In WWW, pages 2863-2869.
+Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM, pages 2333-2338. ACM.
+Wang-Cheng Kang and Julian McAuley. 2019. Candidate generation with binary codes for large-scale top-n recommendation. In CIKM, pages 1523-1532.
+Mozhgan Karimi, Dietmar Jannach, and Michael Jugovac. 2018. News recommender systems-survey and roads ahead. Information Processing & Management, 54(6):1203-1227.
+Zheng Liu, Jianxun Lian, Junhan Yang, Defu Lian, and Xing Xie. 2020. Octopus: Comprehensive and elastic user representation for the generation of recommendation candidates. In SIGIR, pages 289-298.
+Zheng Liu, Yu Xing, Fangzhao Wu, Mingxiao An, and Xing Xie. 2019. Hi-fi ark: Deep user representation via high-fidelity archive network. In *IJCAI*, pages 3059-3065.
+Yu A Malkov and Dmitry A Yashunin. 2018. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. TPAMI, 42(4):824-836.
+Shumpei Okura, Yukihiro Tagami, Shingo Ono, and Akira Tajima. 2017. Embedding-based news recommendation for millions of users. In KDD, pages 1933-1942. ACM.
+
+Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
+Aditya Pal, Chantat Eksombatchai, Yitong Zhou, Bo Zhao, Charles Rosenberg, and Jure Leskovec. 2020. Pinnersage: Multi-modal user embedding framework for recommendations at pinterest. In KDD, pages 2311-2320.
+Tao Qi, Fangzhao Wu, Chuhan Wu, and Yongfeng Huang. 2021a. Personalized news recommendation with knowledge-aware interactive matching. In SI-GIR, pages 61-70.
+Tao Qi, Fangzhao Wu, Chuhan Wu, and Yongfeng Huang. 2021b. Pp-rec: News recommendation with personalized user interest and time-aware news popularity. In ACL, pages 5457-5467.
+Tao Qi, Fangzhao Wu, Chuhan Wu, Yongfeng Huang, and Xing Xie. 2021c. Uni-fedrec: A unified privacy-preserving news recommendation framework for model training and online serving. In EMNLP Findings, pages 1438-1448.
+Tao Qi, Fangzhao Wu, Chuhan Wu, Peiru Yang, Yang Yu, Xing Xie, and Yongfeng Huang. 2021d. Hierec: Hierarchical user interest modeling for personalized news recommendation. In ACL, pages 5446-5456.
+Qiaoyu Tan, Ninghao Liu, Xing Zhao, Hongxia Yang, Jingren Zhou, and Xia Hu. 2020. Learning to hash with graph neural networks for recommender systems. In WWW, pages 1988-1998.
+Heyuan Wang, Fangzhao Wu, Zheng Liu, and Xing Xie. 2020. Fine-grained interest matching for neural news recommendation. In ACL, pages 836-845.
+Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi Guo. 2018. Dkn: Deep knowledge-aware network for news recommendation. In WWW, pages 1835-1844.
+Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019a. Neural news recommendation with attentive multi-view learning. In IJCAI, pages 3863-3869.
+Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019b. Npa: Neural news recommendation with personalized attention. In KDD, pages 2576-2584.
+Chuhan Wu, Fangzhao Wu, Suyu Ge, Tao Qi, Yongfeng Huang, and Xing Xie. 2019c. Neural news recommendation with multi-head self-attention. In EMNLP, pages 6390-6395.
+Chuhan Wu, Fangzhao Wu, Yongfeng Huang, and Xing Xie. 2021a. Personalized news recommendation: A survey. arXiv preprint arXiv:2106.08934.
+
+Chuhan Wu, Fangzhao Wu, Yongfeng Huang, and Xing Xie. 2021b. User-as-graph: User modeling with heterogeneous graph pooling for news recommendation. In *IJCAI*, pages 1624–1630.
+Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2020a. User modeling with click preference and reading satisfaction for news recommendation. In *IJCAI*, pages 3023–3029.
+Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2021c. Feedrec: News feed recommendation with various user feedbacks. arXiv preprint arXiv:2102.04903.
+Fangzhao Wu, Ying Qiao, Jiun-Hung Chen, Chuhan Wu, Tao Qi, Jianxun Lian, Danyang Liu, Xing Xie, Jianfeng Gao, Winnie Wu, et al. 2020b. Mind: A large-scale dataset for news recommendation. In ACL, pages 3597-3606.
+Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In NAACL-HLT, pages 1480-1489.
+Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. 2018. Graph convolutional neural networks for web-scale recommender systems. In KDD, pages 974-983.
\ No newline at end of file
diff --git a/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/images.zip b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..26c7ea8d4003f90d6f7c05570653ad01c8611922
--- /dev/null
+++ b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:917edc6f1fc1eb3b0d41f71a82511f378600b44ea05489a841571d217edadefc
+size 300936
diff --git a/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/layout.json b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5ace6b7d818ac6091f061c15cc1773f9bf18a63e
--- /dev/null
+++ b/twobirdswithonestoneunifiedmodellearningforbothrecallandrankinginnewsrecommendation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8980895ef74f88d89f91ca521f6ccd449f4b1b150be668e7222adb274138e206
+size 219224
diff --git a/twostepquestionretrievalforopendomainqa/ed9f0c05-42ca-4377-bfb9-a14865bf3542_content_list.json b/twostepquestionretrievalforopendomainqa/ed9f0c05-42ca-4377-bfb9-a14865bf3542_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b915bb0253a626c98b9040598a7d6b91de03c0e4
--- /dev/null
+++ b/twostepquestionretrievalforopendomainqa/ed9f0c05-42ca-4377-bfb9-a14865bf3542_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9c8cf12164f619e1a2d3ffb779c4118fe8101ee84c7d75f113cbf83235e8ffc
+size 38487
diff --git a/twostepquestionretrievalforopendomainqa/ed9f0c05-42ca-4377-bfb9-a14865bf3542_model.json b/twostepquestionretrievalforopendomainqa/ed9f0c05-42ca-4377-bfb9-a14865bf3542_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fe4e4f9e8f00e9e1b7866df966c12eed5b086ea5
--- /dev/null
+++ b/twostepquestionretrievalforopendomainqa/ed9f0c05-42ca-4377-bfb9-a14865bf3542_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fcc6a39ef278cc124e61a72a131d882096a88962831c88237ec800581fd87f03
+size 47459
diff --git a/twostepquestionretrievalforopendomainqa/ed9f0c05-42ca-4377-bfb9-a14865bf3542_origin.pdf b/twostepquestionretrievalforopendomainqa/ed9f0c05-42ca-4377-bfb9-a14865bf3542_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..af63ae7db42b54207b1cac261b4d0de12528cea6
--- /dev/null
+++ b/twostepquestionretrievalforopendomainqa/ed9f0c05-42ca-4377-bfb9-a14865bf3542_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:977ed7045cb2a90d329ec645684ae2c2ec59fc89037d58c77cb5d7c199bb0aac
+size 427293
diff --git a/twostepquestionretrievalforopendomainqa/full.md b/twostepquestionretrievalforopendomainqa/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9a0ce15649d33a93ec4827b777c43eb19db2ddc2
--- /dev/null
+++ b/twostepquestionretrievalforopendomainqa/full.md
@@ -0,0 +1,160 @@
+# Two-Step Question Retrieval for Open-Domain QA
+
+Yeon Seonwoo\*, Juhee Son\*, Jiho Jin†, Sang-Woo Lee‡§, Ji-Hoon Kim‡§, Jung-Woo Ha‡§, Alice Oh†
+
+†KAIST, ‡NAVER AI Lab, §NAVER CLOVA
+
+{yeon.seonwoo,sjh5665,jinjh0123}@kaist.ac.kr
+
+{sang.woo lee, genesisik, jungwoo.ha}@navercorp.com
+
+alice.oh@kaist.edu
+
+# Abstract
+
+The retriever-reader pipeline has shown promising performance in open-domain QA but suffers from a very slow inference speed. Recently proposed question retrieval models tackle this problem by indexing question-answer pairs and searching for similar questions. These models have shown a significant increase in inference speed, but at the cost of lower QA performance compared to the retriever-reader models. This paper proposes a two-step question retrieval model, SQuID (Sequential Question-Indexed Dense retrieval) and distant supervision for training. SQuID uses two bi-encoders for question retrieval. The first-step retriever selects top-k similar questions, and the second-step retriever finds the most similar question from the top-k questions. We evaluate the performance and the computational efficiency of SQuID. The results show that SQuID significantly increases the performance of existing question retrieval models with a negligible loss on inference speed. $^{1}$
+
+# 1 Introduction
+
+Retriever-reader models in open-domain QA require a long time for inference (Izacard and Grave, 2021; Lewis et al., 2020b; Sachan et al., 2021; Mao et al., 2021a; Karpukhin et al., 2020). This has been identified as a bottleneck in building real-time QA systems, and question retrieval and phrase-indexed QA have been proposed to resolve this problem (Seo et al., 2018, 2019; Lee et al., 2020, 2021a,b; Lewis et al., 2021a,b). These approaches directly search for the answer to the input question in the corpus without the additional, computationally expensive machine reading step. In phrase-indexed QA, retrievers pre-index all phrases in the corpus and find the most similar phrase to the input question. In question retrieval, synthetic
+
+
+Figure 1: Trade-off relation between the open-domain QA performance and the inference time of existing question retrieval models (blue dots) and SQuID (red dots) on NaturalQuestions (NQ). The x-axis represents the inference speed and the y-axis represents the QA performance.
+
+question-answer pairs are pre-indexed and referenced by retrievers (Du et al., 2017; Duan et al., 2017; Fabbri et al., 2020; Lewis et al., 2020a).
+
+Although recent question retrieval models significantly increase the inference speed, this improvement accompanies QA performance degradation. Several approaches have been applied to question retrieval models to overcome the performance degradation, such as adopting the cross-encoder (Mao et al., 2021b; Xiong et al., 2020) for re-ranking and increasing the model size (Lewis et al., 2021b). However, these approaches cause a significant loss of computational efficiency. Figure 1 shows the trade-off between the open-domain QA performance and the inference speed of question retrieval models.
+
+We propose SQuID (Sequential Question-Indexed Dense retrieval) which significantly improves QA performance without losing computational efficiency. Our work follows previous work on neural re-ranking methods, which use a cross-encoder to re-rank the top-k passages retrieved from the first-step retriever (Lewis et al., 2021b;
+
+Xiong et al., 2020). Re-ranking methods have improved retrieval performance but require huge computation costs due to the cross-encoder architecture. We use an additional bi-encoder retriever in SQuID instead of the cross-encoder to prevent loss on computational efficiency. We also provide distant supervision methods for training the additional retriever in the absence of training data for question retrievers.
+
+We evaluate SQuID on NaturalQuestions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017). We conduct three types of experiments: open-domain QA, computational efficiency evaluation, and analysis on distant supervision methods for training the second-step retriever. Experimental results show that SQuID significantly outperforms the state-of-the-art question retrieval model by $4.0\%$ on NQ and $6.1\%$ on TriviaQA without losing computational efficiency. Our main contribution is in proposing a sequential question retriever model that successfully improves both QA performance and inference speed, thereby making a meaningful step toward developing real-time open-domain QA systems.
+
+# 2 Related Work
+
+The research problem of reducing the computational cost of open-domain QA has received much attention recently. The main bottleneck of a retriever-reader model is the machine reading step, and Seo et al. (2018, 2019); Lee et al. (2021a) propose phrase-indexed QA, which directly retrieves the answer from the corpus without the machine reading step. These models pre-compute the context of phrases in a corpus and conduct lexical and semantic similarity searches between the given question and the context of phrases (Zhao et al., 2021; Yamada et al., 2021). Most related to our work are question retrieval models, which use question-generation models to build question-answer pairs and conduct a similarity search between the input question and the pre-indexed questions (Lewis et al., 2021a,b). These models significantly reduce the computational cost but result in lower performance. Our work provides an efficient question retrieval pipeline with distant supervision methods for training, while previous question retrieval models focus on indexing methods and pay less attention to the retrieval pipeline.
+
+# 3 Method
+
+Our method is constructed based on the question retrieval pipeline proposed by Lewis et al. (2021b), where question retrievers find the most similar question to the input question and return the answer of the selected question. In this study, we note that previous question retrievers are optimized not just for improving the retrieval performance but for maintaining the inference speed needed to cover millions of indexed questions (Lewis et al., 2021b). In this process, the performance of retrievers decreases as they are more optimized for computational efficiency. We propose to use an additional retriever that takes the top-k predictions from the first retriever and selects the most similar question from the top-k results. The second-step retriever has a looser constraint on inference speed than the first retriever since its search space contains only a few samples. This enables us to focus only on the retrieval performance when designing the training method. The overall training and inference procedure of SQuID is illustrated in Figure 2. We describe the details of SQuID below.
+
+# 3.1 Training
+
+Since annotated question-question pairs are unavailable, we distantly supervise SQuID with heuristically selected positive and negative samples. We first select the top-k similar questions with the first-step retriever. Among the top-k questions, we choose the positive samples and the negative samples as follows. For positive samples, we choose questions with the answer most similar to the ground truth answer in terms of F1-score, the evaluation metric used in extractive QA (Rajpurkar et al., 2016). For negative samples, we sample questions with answers that differ from the ground truth answer (Karpukhin et al., 2020; Xiong et al., 2021).
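+
+To make this heuristic concrete, the following minimal sketch (in Python, with hypothetical helper names that are not taken from the released code) selects a positive and a set of negatives from the top-k questions returned by the first-step retriever, using SQuAD-style token-level F1 between answers:
+
+```python
+from collections import Counter
+
+def answer_f1(pred: str, gold: str) -> float:
+    """Token-level F1 between two answer strings (SQuAD-style overlap)."""
+    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
+    common = Counter(pred_toks) & Counter(gold_toks)
+    num_same = sum(common.values())
+    if num_same == 0:
+        return 0.0
+    precision = num_same / len(pred_toks)
+    recall = num_same / len(gold_toks)
+    return 2 * precision * recall / (precision + recall)
+
+def build_training_sample(topk_qas, gold_answer, num_negatives=16):
+    """topk_qas: list of (question, answer) pairs from the first-step retriever.
+    Positive: the question whose answer has the highest F1 with the gold answer.
+    Negatives: questions whose answers differ from the gold answer."""
+    scored = [(answer_f1(a, gold_answer), q, a) for q, a in topk_qas]
+    positive = max(scored, key=lambda x: x[0])[1]
+    negatives = [q for f1, q, a in scored
+                 if a.lower() != gold_answer.lower()][:num_negatives]
+    return positive, negatives
+```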
+
+When the input question is provided with a positive sample $(q^{+})$ and $m$ negative samples $(q_{1}^{-},\dots,q_{m}^{-})$ , our second-step retriever is trained to distinguish the positive and negative samples. The loss function is as follows:
+
+$$
+L(q, q^{+}, q_{1}^{-}, \dots, q_{m}^{-}) = -\log \left( \frac{e^{\operatorname{sim}(q, q^{+})}}{e^{\operatorname{sim}(q, q^{+})} + \sum_{i=1}^{m} e^{\operatorname{sim}(q, q_{i}^{-})}} \right). \tag{1}
+$$
+
+The similarity function is defined as the dot product of two vectors: $\mathrm{sim}(q_1,q_2) = E_Q(q_1)^T E_Q(q_2)$, where $E_{Q}(\cdot)$ is the question encoder of the second-step retriever.
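+
+A minimal PyTorch-style sketch of this objective follows; the encoder interface and batching are illustrative assumptions rather than the released training code:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def second_step_loss(encode, q, q_pos, q_negs):
+    """Contrastive loss of Eq. (1).
+    encode: question encoder E_Q mapping a list of strings to an [n, d] tensor.
+    q: input question; q_pos: positive question; q_negs: list of m negatives."""
+    anchor = encode([q])                    # [1, d]
+    candidates = encode([q_pos] + q_negs)   # [1 + m, d]
+    sims = anchor @ candidates.T            # dot-product similarities, [1, 1 + m]
+    # With the positive at index 0, softmax cross-entropy equals Eq. (1).
+    target = torch.zeros(1, dtype=torch.long)
+    return F.cross_entropy(sims, target)
+```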
+
+
+(a) Training procedure
+
+
+(b) Inference procedure
+Figure 2: Illustrations of training and inference processes of SQuID. SQuID consists of two retrievers. The first-step retriever selects top-k similar questions among the pre-indexed QAs. From the top-k results, (a) the second-step retriever is trained to distinguish the positive sample from the negative samples, and (b) it selects the most similar question at the inference time.
+
+# 3.2 Inference
+
+Given a question $q$ , the two retrievers of SQuID work in two steps. The first-step retriever selects top-k similar questions. The retrieved questions are then mapped to the question vectors pre-computed by the second-step retriever. The second-step retriever selects the most similar question $q'$ from the top-k results with the question vectors. We use Maximum Inner Product Search (MIPS) for the second-step retrieval. Finally, SQuID puts the answer of $q'$ as the answer for $q$ .
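+
+The two-step inference can be sketched as follows; `first_step_search`, `encode_question`, and the pre-computed `second_step_vectors` are placeholders for whichever first-step index and second-step encoder are used:
+
+```python
+import numpy as np
+
+def squid_inference(question, first_step_search, encode_question,
+                    second_step_vectors, answers, k=50):
+    """first_step_search(question, k): indices of the top-k similar indexed questions.
+    second_step_vectors: [num_indexed, d] matrix pre-computed by the second-step encoder.
+    answers: answer strings aligned with the indexed questions."""
+    topk_ids = first_step_search(question, k)            # step 1: coarse retrieval
+    q_vec = encode_question(question)                     # [d]
+    cand_vecs = second_step_vectors[topk_ids]             # [k, d]
+    best = topk_ids[int(np.argmax(cand_vecs @ q_vec))]    # step 2: inner-product search over k candidates
+    return answers[best]                                  # answer of the selected question q'
+```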
+
+# 4 Experimental Setup and Results
+
+We evaluate the performance and computational efficiency of SQuID on two open-domain QA datasets: NaturalQuestions (NQ) and TriviaQA. We also compare various distant supervision methods for training SQuID. We use exact match (EM) (Rajpurkar et al., 2016) for performance evaluation and the number of questions per second (Q/sec) for evaluation of inference speed. The details of our experimental setup are described in Appendix A.2.
+
+Question Retrievers on Open-Domain QA: We evaluate SQuID with two different first-step retrievers: BM25 and RePAQ-base256$^{2}$ (Lewis et al., 2021b). Table 1 shows that SQuID-BM25/DPR and SQuID-RePAQ/DPR achieve the best performance among question retrieval models on TriviaQA and NQ, respectively. Note that SQuID-RePAQ/DPR outperforms RePAQ-base256 significantly with a
+
+negligible loss of inference speed; $4.0\%$ p EM gain on NQ and $6.1\%$ p gain on TriviaQA at $92.0\%$ speed (1266 Q/sec vs. 1376 Q/sec).
+
+Trade-off between QA Performance and Computational Efficiency: Table 1 shows the tradeoff between the open-domain QA performance and the inference speed of the three types of open-domain QA models. Comparing RePAQ-large and RAG-Sequence, we see a large performance gap of $3.3\%$ on NQ and $18.0\%$ on TriviaQA, and we also see a large speed gap of $624~\mathrm{Q / s}$ and $0.8$ Q/s. SQuID bridges this gap, achieving comparable performances to RAG-Sequence on NQ while maintaining the high inference speed. The performance gain on TriviaQA is not as high, and we conjecture that this is because RePAQ uses only questions from NQ in its filtering step. We leave a deeper study of this discrepancy for future research.
+
+Figure 1 illustrates the QA performance and inference speed of various configurations of RePAQ and SQuID. We vary the encoder of the second-step retriever with different pre-trained models: DPR (Karpukhin et al., 2020), BERT-base/large (Devlin et al., 2019), and ALBERT-base/large (Lan et al., 2019). The first and second-step question encoders can be executed concurrently, so we run them in parallel and set the batch size to half to measure the inference speed (SQuID-DPR-parallel). We use the maximum batch size possible on a single V100-16GB GPU. The figure shows that the results of SQuID all lie to the top right of the curve fitted to the RePAQ results, meaning that SQuID succeeds in improving both QA performance and inference
+
+| Model Type | Model | NQ | TriviaQA | Inference speed (Q/sec) |
+| --- | --- | --- | --- | --- |
+| Question retrieval | RePAQ-base256 (Lewis et al., 2021b) | 40.0 | 38.8 | 1376 |
+| | RePAQ-base (Lewis et al., 2021b) | 40.9 | 39.7 | 738 |
+| | RePAQ-large (Lewis et al., 2021b) | 41.2 | 38.8 | 624 |
+| | SQuID-BM25/DPR | 43.1 | 45.6 | 328 |
+| | SQuID-RePAQ/DPR | 44.0 | 44.9 | 1006 (1266†) |
+| Phrase-indexed | DensePhrase (Lee et al., 2021a) | 40.9 | 50.7 | 20.6* |
+| Retriever-reader | RAG-Sequence (Lewis et al., 2020b) | 44.5 | 56.8 | 0.8 |
+| | FiD-large (Izacard and Grave, 2021) | 51.4 | 67.6 | 0.5* |
+
+Table 1: The open-domain QA performance (EM) and inference speeds of SQuID and baselines on the NQ test set and the TriviaQA test set. We use the performance and the inference speed of each baseline as reported in their results. * indicates the inference speed is from the original paper. † indicates that the inference speed is computed in the parallel computing setting.
+
+| Supervision | BM25 | RePAQ |
+| --- | --- | --- |
+| w/o 2nd retriever | 34.4 | 40.0 |
+| + Self | 39.5 | 40.4 |
+| + Similar | 43.1 | 44.0 |
+| + Similar / Self | 43.6 | 44.1 |
+| + Same Answer | 43.4 | 44.4 |
+
+Table 2: The open-domain QA performance (EM) of SQuID with four different distant supervision methods on the NQ test set.
+
+speed. The detailed results are in Appendix A.1.
+
+Analysis on Positive Sampling Methods: We distantly supervise the second-step retriever because annotated question-question pairs are unavailable. We conduct experiments on various positive sampling methods for distant supervision: "Self", "Similar", "Similar/Self", and "Same Answer". Each method uses the following as the positive sample:
+
+1) the input question itself ("Self"), 2) a similar question with a similar answer ("Similar"), 3) a similar question if it has the ground truth answer, or the input question itself ("Similar/Self"), and 4) a random question with the ground truth answer ("Same Answer").
+
+Table 2 shows the performance of SQuID-BM25 and SQuID-RePAQ-base256 on the NQ test set with the four distant supervision methods. The first row (w/o 2nd retriever) indicates the performance based only on the first-step retriever (BM25 or RePAQ-base256). The second-step retriever with "Self" method improves the performance slightly, and the others improve the performance more significantly. The large gap between "Self" and the
+
+other methods shows that using the answer information is essential for distant supervision.
+
+Error Propagation Analysis: The error rate of each stage in a multi-stage model provides a better understanding of the model's performance boundary. In SQuID, the second-step retriever can only predict the correct answer when the top-50 question-answer pairs retrieved by the first-step retriever contain the answer. This indicates that the upper-bound performance of SQuID is determined by the performance of the first-step retriever. We measure the R@50 accuracy of the first-step retrievers on NQ and TriviaQA. The R@50 of BM25 and RePAQ is $64.07\%$ and $64.34\%$ on NQ, and $61.73\%$ and $59.10\%$ on TriviaQA, respectively.
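+
+For reference, R@50 can be computed by checking whether any of the top-50 retrieved answers matches the gold answer; the sketch below assumes, for simplicity, a single gold answer string per question:
+
+```python
+def recall_at_k(retrieved_answers, gold_answers, k=50):
+    """retrieved_answers: per-question lists of answers from the first-step retriever.
+    gold_answers: one gold answer string per question.
+    Returns the fraction of questions whose gold answer appears in the top-k answers."""
+    hits = sum(
+        any(a.lower() == gold.lower() for a in topk[:k])
+        for topk, gold in zip(retrieved_answers, gold_answers)
+    )
+    return hits / len(gold_answers)
+```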
+
+# 5 Conclusion
+
+The trade-off between the performance and the inference speed is an important problem in open-domain QA. Recently proposed question retrieval models have shown significantly improved inference speed. However, this improvement came at the cost of a significantly lower QA performance by the question retrieval models compared to the state-of-the-art open-domain QA models. In this paper, we proposed a two-step question retrieval model, SQuID. We evaluated the open-domain QA performance and the inference speed of SQuID on two datasets: NaturalQuestions and TriviaQA. From the results, we showed that the sequential two-retriever approach in SQuID achieves a significant QA performance improvement over the existing question retrieval models, while retaining the advantage of faster inference speed. This improvement in both
+
+QA performance and inference speed is a meaningful step toward the development of real-time open domain QA systems.
+
+# Acknowledgements
+
+This work was partly supported by Naver Corp. and the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921).
+
+# References
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*.
+Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In ACL.
+Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In EMNLP.
+Alexander Richard Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. Template-based question generation from retrieved sentences for improved unsupervised question answering. In ACL.
+Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In EACL.
+Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In ACL.
+Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In EMNLP.
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: A benchmark for question answering research. TACL.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite bert for self-supervised learning of language representations. In ICLR.
+Jinhyuk Lee, Minjoon Seo, Hannaneh Hajishirzi, and Jaewoo Kang. 2020. Contextualized sparse representations for real-time open-domain question answering. In ACL.
+
+Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021a. Learning dense representations of phrases at scale. In ACL.
+Jinhyuk Lee, Alexander Wettig, and Danqi Chen. 2021b. Phrase retrieval learns passage retrieval, too. arXiv.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL.
+Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS.
+Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021a. Question and answer test-train overlap in open-domain question answering datasets. In EACL.
+Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Kuttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021b. PAQ: 65 million probably-asked questions and what you can do with them. arXiv.
+Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021a. Generation-augmented retrieval for open-domain question answering. In ACL.
+Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021b. Reader-guided passage reranking for open-domain question answering. In ACL-Findings.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
+Devendra Singh Sachan, Siva Reddy, William Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for open-domain question answering. arXiv.
+Minjoon Seo, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Phrase-indexed question answering: A new challenge for scalable document comprehension. In EMNLP.
+Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In ACL.
+Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In ICLR.
+
+| Model | EM | Q/sec |
+| --- | --- | --- |
+| SQuID-RePAQ/DPR-parallel | 44.0 | 1266 |
+| SQuID-RePAQ/DPR | 44.0 | 1006 |
+| SQuID-RePAQ/BERT-large | 43.1 | 814 |
+| SQuID-RePAQ/BERT-base | 43.1 | 1006 |
+| SQuID-RePAQ/ALBERT-large | 42.2 | 677 |
+| SQuID-RePAQ/ALBERT-base | 41.8 | 920 |
+| RePAQ-base256 | 40.0 | 1376 |
+| RePAQ-large | 41.2 | 624 |
+| RePAQ-xlarge | 41.5 | 467 |
+| RePAQ-base + Reranker-base | 45.7 | 41 |
+| RePAQ-large + Reranker-xlarge | 46.2 | 7 |
+
+Table 3: EM score and inference speed on NQ for various configurations of SQuID and RePAQ
+
+Wenhan Xiong, Xiang Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Scott Yih, Sebastian Riedel, Douwe Kiela, et al. 2020. Answering complex open-domain questions with multi-hop dense retrieval. In ICLR.
+
+Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. In ACL.
+
+Tiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. 2021. SPARTA: Efficient open-domain question answering via sparse transformer matching retrieval. In NAACL-HLT.
+
+# A Appendix
+
+# A.1 Detailed results of Figure 1
+
+Table 3 shows the detailed results of Figure 1.
+
+# A.2 Experimental Setup
+
+Training Details: We set the batch size to 2 per GPU and the number of negative samples to 16. We used validation EM score for early stopping. SQuID was trained on a machine with four V100-16GB GPUs. We report the result of a single trial.
+
+Computational Environment for Measuring the Inference Speed: The inference speed of baseline models and SQuID is measured with a V100-16GB GPU and 32 CPUs (Intel Xeon E5-2686v4). We report the mean of three separate trials.
+
+# A.3 License or Terms of Artifacts
+
+We use BERT, whose license is the Apache License 2.0, which permits modification and distribution. We also use RePAQ, whose license is CC BY-NC 4.0, which permits modification and distribution. All models we used are publicly available.
\ No newline at end of file
diff --git a/twostepquestionretrievalforopendomainqa/images.zip b/twostepquestionretrievalforopendomainqa/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2d419c450cdbe37b483c00b8ca8957c9a9b8d1f6
--- /dev/null
+++ b/twostepquestionretrievalforopendomainqa/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50a01ac76b10ccd3607ce9aa801c0bdd40a423c47084e225a99c0e89a7166a12
+size 243945
diff --git a/twostepquestionretrievalforopendomainqa/layout.json b/twostepquestionretrievalforopendomainqa/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ef01a67169c21fb56a2411a7c89713ff5a45a0d7
--- /dev/null
+++ b/twostepquestionretrievalforopendomainqa/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dce052c075bfe6b39d66d4d5b88c0950483e105db13ac11d95749c636618949c
+size 173920
diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_content_list.json b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a21c47991a8a3c0f3bcc1fca1d3482633cdd075b
--- /dev/null
+++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d43b5424b5172d728c0dd6d1358c998baf01e9c1b4e2881687b063093a36934
+size 83815
diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_model.json b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6cbe2805e0647d420d4baeffe3698a7727ca235d
--- /dev/null
+++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c37f1fc21a4c10eda9ac51444f31509a08848f79edfa30a76aa4213be700913
+size 99208
diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_origin.pdf b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d566aa0f1f10c7f467be7c02d42ec5b4e18c7e09
--- /dev/null
+++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/a36cb823-5d2b-46ba-86b5-3311e93ba1e5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ff3c24042d786d18961be3c7a13e9e9c3b0f53c0bcb041d27a2e75280e75f81
+size 453482
diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/full.md b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a06d749fab6a5cab53da5c7ebf5e0166fa822a36
--- /dev/null
+++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/full.md
@@ -0,0 +1,293 @@
+# Type-Driven Multi-Turn Corrections for Grammatical Error Correction
+
+Shaopeng Lai $^{1*}$ , Qingyu Zhou $^{2}$ , Jiali Zeng $^{2}$ , Zhongli Li $^{2}$ , Chao Li $^{2}$ , Yunbo Cao $^{2}$ , Jinsong Su $^{1,3\dagger}$
+
+$^{1}$ School of Informatics, Xiamen University, China
+
+$^{2}$ Tencent Cloud Xiaowei, China
+
+$^{3}$Key Laboratory of Digital Protection and Intelligent Processing of Intangible Cultural Heritage of Fujian and Taiwan, Ministry of Culture and Tourism, China
+
+splai@stu.xmu.edu.cn, {qingyuzhou, lemonzeng, neutrali, diegoli, yunbocao}@tencent.com, jssu@xmu.edu.cn
+
+# Abstract
+
+Grammatical Error Correction (GEC) aims to automatically detect and correct grammatical errors. In this aspect, dominant models are trained by one-iteration learning while performing multiple iterations of corrections during inference. Previous studies mainly focus on the data augmentation approach to combat the exposure bias, which suffers from two drawbacks. First, they simply mix additionally-constructed training instances and original ones to train models, which fails to help models be explicitly aware of the procedure of gradual corrections. Second, they ignore the interdependence between different types of corrections. In this paper, we propose a Type-Driven Multi-Turn Corrections approach for GEC. Using this approach, from each training instance, we additionally construct multiple training instances, each of which involves the correction of a specific type of errors. Then, we use these additionally-constructed training instances and the original one to train the model in turn. By doing so, our model is trained to not only correct errors progressively, but also exploit the interdependence between different types of errors for better performance. Experimental results and in-depth analysis show that our approach significantly benefits the model training. Particularly, our enhanced model achieves state-of-the-art single-model performance on English GEC benchmarks. We release our code at https://github.com/DeepLearnXMU/TMTC.
+
+# 1 Introduction
+
+Grammatical Error Correction (GEC) aims at automatically detecting and correcting grammatical (and other related) errors in a text. It attracts much attention due to its practical applications in writing assistants (Napoles et al., 2017b; Ghufron and
+
+Rosyida, 2018), speech recognition systems (Karat et al., 1999; Wang et al., 2020; Kubis et al., 2020), etc. Inspired by the success of neural machine translation (NMT), some models adopt the same paradigm, namely NMT-based models. They have been quite successful, especially with data augmentation approaches (Boyd, 2018; Ge et al., 2018; Xu et al., 2019; Grundkiewicz et al., 2019; Wang and Zheng, 2020; Takahashi et al., 2020). However, these models have been blamed for their inefficiency during inference (Chen et al., 2020; Sun et al., 2021). To tackle this issue, many researchers resort to the sequence-to-label (Seq2Label) formulation, achieving comparable or better performance with higher efficiency (Malmi et al., 2019; Awasthi et al., 2019; Stahlberg and Kumar, 2020; Omelianchuk et al., 2020).
+
+Despite their success, both NMT-based and Seq2Label models are trained by one-iteration learning, while correcting errors for multiple iterations during inference. As a consequence, they suffer from exposure bias and exhibit performance degradation (Ge et al., 2018; Lichtarge et al., 2019; Zhao and Wang, 2020; Parnow et al., 2021). To deal with this issue, Ge et al. (2018) propose to generate fluency-boost pseudo instances as additional training data. Besides, Parnow et al. (2021) dynamically augment training data by introducing the predicted sentences with high error probabilities.
+
+However, the above-mentioned approaches construct pseudo data based on a GEC model or an error-generation model, and thus depend heavily on the performance of these models. As a result, the error distribution of the pseudo data is biased and lacks diversity and practicality. Moreover, they simply mix original and pseudo data to train models, so the models are unable to learn to correct errors progressively. Furthermore, they ignore the interdependence between different types of errors, which intuitively plays an important role in GEC. Taking Table 1 as an example, correcting "little" with "few" or "job" with "jobs"
+
+Erroneous Sentence: In my country there are little job because the economy is very bad.
+
+Reference Sentence: In my country there are few jobs because the economy is very bad.
+
+Table 1: An example for the interdependence between corrections. Please note that whichever error is corrected first, the other error can be corrected more easily.
+
+first can help the other error be better corrected. Therefore, we believe that how to construct and exploit pseudo data with editing-action corrections for GEC is still a problem worthy of in-depth study.
+
+In this paper, we first conduct quantitative experiments to investigate the performance improvements of a GEC model when different types of error corrections are provided first. Experimental results show that performing appending or replacing corrections first indeed benefits the correction of other errors. Furthermore, we propose a Type-Driven Multi-Turn Corrections (TMTC) approach for GEC. Concretely, by correcting a certain type of errors while leaving the others unchanged, we construct an intermediate sentence for each training instance and pair it with its raw erroneous sentence and reference sentence, respectively, forming two additional training instances. During model training, using the former instance, we first guide the model to learn to correct the corresponding type of errors. Then, using the latter instance, we teach the model to correct the other types of errors with the help of the previous corrections. Overall, the contributions of our work are three-fold:
+
+- Through quantitative experiments, we investigate the interdependence between different types of corrections, with the finding that corrections of appending or replacing words significantly benefit correcting other errors.
+- We propose a TMTC approach for GEC. To the best of our knowledge, our work is the first attempt to explore the interdependence between different types of errors for GEC.
+- We conduct experiments and in-depth analysis to investigate the effectiveness of our proposed approach. Experimental results show that our enhanced model achieves the state-of-the-art performance.
+
+# 2 Related Work
+
+Generally, there are two categories of models in GEC: Transformer-dominant NMT-based models
+
+(Boyd, 2018; Ge et al., 2018; Xu et al., 2019; Grundkiewicz et al., 2019; Wang and Zheng, 2020; Takahashi et al., 2020) and Seq2Label models led by GECToR (Malmi et al., 2019; Awasthi et al., 2019; Stahlberg and Kumar, 2020; Omelianchuk et al., 2020). The former models consider GEC as a machine translation task, where the model is fed the erroneous sentence and then outputs the corrected sentence token by token. By comparison, Seq2Label models are able to correct grammatical errors more efficiently and often even better. Among them, the GECToR models (Omelianchuk et al., 2020) obtain remarkable performance. Typically, they adopt a pre-trained language model as the encoder to learn word-level representations and utilize a softmax-based classifier to predict designed editing-action labels.
+
+Since GEC models may fail to completely correct a sentence through just one-iteration inference, some researchers resort to data augmentation that has been widely used in other NLP studies (Song et al., 2020; Xu et al., 2020). For instance, Ge et al. (2018) propose to let the GEC model infer iteratively and design a fluency boost learning approach. Specifically, they establish new erroneous-reference sentence pairs by pairing predicted less fluent sentences with their reference sentences during training. Likewise, to solve the mismatches between training and inference of Seq2Label models, Parnow et al. (2021) apply a confidence-based method to construct additional training data by pairing low-confidence sentences with reference sentences. Note that these two methods also involve constructing pseudo data using sentences with partial errors. However, ours is still different from them in two aspects. First, these two methods simply mix their pseudo data with original data to still train models in a one-iteration learning manner. By contrast, we decompose the one-iteration corrections into multiple turns, so as to make the model aware of gradual corrections. Second, these two methods ignore the interdependence between different types of errors, which is exploited by our proposed approach to enhance the model.
+
+# 3 Background
+
+In this work, we choose GECToR (Omelianchuk et al., 2020) as our basic GEC model due to its efficiency and competitive performance. Typically, it considers the GEC task as a sequence-to-label task, where the candidate editing-action
+
+
+Figure 1: The procedure of our quantitative experiments. Each sentence is composed of five parts as illustrated, where Error(ACTION label) denote the erroneous words that can be corrected via corresponding editing-action label. We only correct one type of errors and compare the prediction results of other types of errors.
+
+labels mainly include $KEEP (to keep the current word unchanged), $DELETE (to delete the current word), $APPEND_t (to append the word t after the current word), $REPLACE_t (to replace the current word with the word t) and some elaborate g-transformation labels (Omelianchuk et al., 2020) performing task-specific operations, such as $TRANSFORM_CASE_LOWER and $TRANSFORM_CASE_CAPITAL (to change the case of the current word).
+
+On the whole, the GECToR model is composed of an encoder based on a pre-trained language model and two linear classifiers: one for grammatical error detection (GED) and the other for GEC. The encoder reads the erroneous sentence $X_{e} = x_{1},x_{2},\ldots ,x_{N}$ and represents words with hidden states $\{h_i\}_{i = 1}^N$, which are fed into the classifiers to predict the binary label sequence $Y = y_{1},y_{2},\dots,y_{N}$ for GED and the editing-action label sequence $T = t_{1},t_{2},\dots,t_{N}$ for GEC, respectively. Formally, the losses of the two classifiers can be formulated as
+
+$$
+L_{d} = -\sum_{i=1}^{N} \log p\left(y_{i} \mid X_{e}, \theta\right), \tag{1}
+$$
+
+$$
+L_{c} = -\sum_{i=1}^{N} \log p\left(t_{i} \mid X_{e}, \theta\right), \tag{2}
+$$
+
+where $\theta$ denotes model parameters. Usually, the GECToR model is trained to optimize the sum of two losses: $L = L_{d} + L_{c}$ .
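+
+A minimal sketch of this two-head architecture and its joint loss is given below, assuming a Hugging Face-style encoder that returns token hidden states; the class and label names are illustrative and do not reproduce GECToR's actual implementation:
+
+```python
+import torch.nn as nn
+
+class TwoHeadGECModel(nn.Module):
+    """Encoder with two token-level classifiers: error detection (binary)
+    and editing-action labels, trained with the joint loss L = L_d + L_c."""
+    def __init__(self, encoder, hidden_size, num_edit_labels):
+        super().__init__()
+        self.encoder = encoder                       # pre-trained LM returning hidden states
+        self.detect_head = nn.Linear(hidden_size, 2)
+        self.correct_head = nn.Linear(hidden_size, num_edit_labels)
+        self.ce = nn.CrossEntropyLoss()
+
+    def forward(self, input_ids, attention_mask, detect_labels, edit_labels):
+        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
+        d_logits = self.detect_head(h)                                            # [B, N, 2]
+        c_logits = self.correct_head(h)                                           # [B, N, num_edit_labels]
+        loss_d = self.ce(d_logits.view(-1, 2), detect_labels.view(-1))            # L_d, Eq. (1)
+        loss_c = self.ce(c_logits.view(-1, c_logits.size(-1)), edit_labels.view(-1))  # L_c, Eq. (2)
+        return loss_d + loss_c                                                    # L = L_d + L_c
+```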
+
+It is worth noting that the GECToR model is trained to correct all errors in a one-iteration manner, while correcting errors in a multiple-iteration way during inference (at most 5 iterations). Besides, there are three stages involved during the training of the GECToR model, as shown in Table 2.
+
+# 4 Effect of the Interdependence between Different Types of Corrections
+
+In this section, we conduct several groups of quantitative experiments to explore the interdependence
+
+| Dataset | #Instance | Stage |
+| --- | --- | --- |
+| PIE-synthetic (Awasthi et al., 2019) | 9,000,000 | I |
+| Lang-8 (Tajiri et al., 2012) | 947,344 | II |
+| NUCLE (Dahlmeier et al., 2013) | 56,958 | II |
+| FCE (Yannakoudakis et al., 2011) | 34,490 | II |
+| W&I+LOCNESS (Bryant et al., 2019) | 34,304 | II, III |
+
+Table 2: GECToR is trained on PIE-synthetic dataset for pre-training at Stage I. Then, it is fine-tuned on Lang-8, NUCLE, FCE, W&I+LOCNESS at Stage II. At Stage III, the final fine-tuning is conducted on W&I+LOCNESS.
+
+between corrections.
+
+We first train the GECToR model in the Stage II Only setting for efficiency. All training settings are the same as the published parameters.$^{1}$ Afterwards, we use the model to conduct corrections on the BEA-2019 (W&I+LOCNESS) dev set and the CoNLL-2014 test set (Ng et al., 2014), as well as their variants with some errors corrected manually. For simplicity, we only consider the three most frequent editing-action labels: $APPEND_t, $DELETE, and $REPLACE_t.
+
+Figure 1 shows the procedure of the quantitative experiments. Specifically, we separate each raw erroneous sentence into five parts: correct words, erroneous words that can be corrected by $APPEND_t / $DELETE / $REPLACE_t, and words with other types of errors. If we want to investigate the influence of $APPEND_t, we first select the data containing $APPEND_t labels and denote them as $D(\text{APPEND})$. Then we manually correct all the errors which should be corrected by $APPEND_t labels, obtaining the new subset $D(\text{APPEND}\checkmark)$. Afterwards, we use our model to correct the erroneous sentences of subsets $D(\text{APPEND})$ and $D(\text{APPEND}\checkmark)$ for just one iteration, and finally we only evaluate and compare the model performance on the predictions of
+
+| Dataset | Evaluation | BEA-2019 (dev) Num. | BEA Prec. | BEA Rec. | BEA F1 | CoNLL-2014 (test) Num. | CoNLL Prec. | CoNLL Rec. | CoNLL F1 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Original Dataset | $APPEND_t | 2609 | 53.43 | 35.22 | 42.46 | 621 | 27.46 | 23.35 | 25.24 |
+| | $DELETE | 1403 | 56.04 | 23.81 | 33.42 | 1115 | 51.89 | 18.48 | 27.25 |
+| | $REPLACE_t | 3495 | 50.87 | 23.32 | 31.98 | 1398 | 38.57 | 18.45 | 24.96 |
+| D(APPEND) | $DELETE | 904 | 62.63 | 20.02 | 30.34 | 496 | 47.52 | 13.51 | 21.04 |
+| | $REPLACE_t | 2079 | 49.71 | 20.30 | 28.83 | 660 | 28.57 | 11.21 | 16.10 |
+| D(APPEND✓) | $DELETE | 904 | 68.84 | 26.88 | 38.66 (+8.32) | 496 | 59.06 | 17.74 | 27.29 (+6.22) |
+| | $REPLACE_t | 2079 | 67.46 | 36.99 | 47.78 (+18.95) | 660 | 48.96 | 28.64 | 36.14 (+20.04) |
+| D(DELETE) | $APPEND_t | 1024 | 52.69 | 25.78 | 34.62 | 332 | 18.93 | 13.86 | 16.00 |
+| | $REPLACE_t | 1425 | 50.91 | 19.72 | 28.43 | 716 | 30.89 | 13.55 | 18.83 |
+| D(DELETE✓) | $APPEND_t | 1024 | 57.14 | 27.73 | 37.34 (+2.72) | 332 | 22.77 | 15.36 | 18.35 (+2.35) |
+| | $REPLACE_t | 1425 | 55.02 | 22.32 | 31.75 (+4.32) | 716 | 36.17 | 16.62 | 22.78 (+3.95) |
+| D(REPLACE) | $APPEND_t | 1762 | 52.76 | 29.34 | 37.71 | 443 | 23.92 | 18.74 | 21.01 |
+| | $DELETE | 996 | 56.19 | 18.67 | 28.03 | 767 | 47.10 | 15.91 | 23.78 |
+| D(REPLACE✓) | $APPEND_t | 1762 | 68.05 | 49.21 | 57.11 (+19.40) | 443 | 41.97 | 44.24 | 43.08 (+22.07) |
+| | $DELETE | 996 | 69.33 | 34.04 | 45.66 (+17.63) | 767 | 61.08 | 25.16 | 35.64 (+11.86) |
+
+Table 3: Results of our quantitative experiments. $D(\text{ACTION})$ denotes a subset consisting of instances with ACTION label. $D(\text{ACTION}\checkmark)$ denotes another version of $D(\text{ACTION})$ , where corresponding errors have been manually corrected.
+
+$DELETE and $REPLACE_t$. For example, by comparing the model performance with respect to the $DELETE label, we can draw the conclusion that appending some words first could help the model to achieve better predictions on $DELETE.
+
+Likewise, we conduct experiments with respect to the $DELETE and $REPLACE_t labels. Besides, we evaluate the performance for each type of label on the raw dataset without any constraints. Experimental results of the RoBERTa-based GECToR model (Liu et al., 2019) are listed in Table 3. We can observe that consistent performance improvements occur on both the W&I+LOCNESS dev set and the CoNLL-2014 test set, no matter which type of errors is corrected first. Moreover, it is surprising that if replacing or appending words is conducted beforehand, the model performance is significantly improved on correcting other types of errors. Meanwhile, deleting words does not benefit the others as much as the other two kinds of corrections do.
+
+We also notice that the model improvements are positively associated with the number of manual corrections on the BEA-2019 dev set. However, the performance improvements on the CoNLL-2014 test set are not closely related to the number of manual corrections. Thus, we can conclude that the interdependence between different types of corrections indeed plays a more important role in performance improvements than the number of corrections. Having witnessed these experimental results, we can arrive at the following two conclusions:
+
+- GEC models can better deal with errors when some types of errors have been corrected.
+- Corrections of appending words or replacing words help the model correct other types of errors more than deleting words.
+
+Please note that we also conduct experiments using the XLNet-based GECToR model (Yang et al., 2019). A similar trend can be observed from the experimental results reported in Appendix §A.1.
+
+# 5 Our Approach
+
+In this section, we introduce our proposed Type-Driven Multi-Turn Corrections (TMTC) approach in detail. As concluded above, correcting certain types of errors first benefits correcting others; thus, we decompose the one-iteration corrections of each training instance into multi-turn corrections, so as to make the trained model learn to perform corrections progressively.
+
+The key step of our approach is to construct an intermediate sentence for each training instance. Formally, each training instance is a sentence pair $(X_e, X_c)$ consisting of an erroneous sentence $X_e$ and a reference sentence $X_c$. To construct its intermediate sentence $X'$, we select part of the grammatical errors and correct them manually while keeping the others unchanged. Then, $X'$ is paired with $X_e$ and $X_c$ to generate two new pairs: $(X_e, X')$ and $(X', X_c)$, respectively. Figure 2 illustrates an example of constructing two additional training instances from a sentence pair. In this example, for
+
+
+Figure 2: Illustration of the procedure for constructing additional training instances. Here, we construct an intermediate sentence $X'$, which is paired with the raw erroneous sentence $X_e$ and the reference sentence $X_c$ to form two additional training instances $(X_e, X')$ and $(X', X_c)$, respectively. Red squares denote labels that correct errors, while green ones denote labels that keep the current word unchanged. Losses of gray squares are omitted in the first turn.
+
+the erroneous sentence with two grammatical errors, "oldest" and "!", we manually correct "!" to "?" to form the semi-corrected sentence "How oldest are you ?". It should be noted that our constructed training instances are derived from the original training corpus, and thus their grammatical errors are also human-made.
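+
+To illustrate this construction, the sketch below builds the two additional instances from token-level edits; the (start, end, replacement) edit representation, the helper names, and the assumed reference correction of "oldest" are simplifying assumptions for illustration, not the paper's exact data format:
+
+```python
+def build_tmtc_instances(x_e_tokens, all_edits, first_turn_edits):
+    """x_e_tokens: tokens of the erroneous sentence X_e.
+    all_edits: (start, end, replacement_tokens) edits turning X_e into X_c.
+    first_turn_edits: the subset of edits applied to build the intermediate X'.
+    Returns the two additional instances (X_e, X') and (X', X_c)."""
+    def apply(tokens, edits):
+        out, i = [], 0
+        for start, end, repl in sorted(edits, key=lambda e: e[0]):
+            out.extend(tokens[i:start])
+            out.extend(repl)
+            i = end
+        out.extend(tokens[i:])
+        return out
+
+    x_prime = apply(x_e_tokens, first_turn_edits)
+    x_c = apply(x_e_tokens, all_edits)
+    return (x_e_tokens, x_prime), (x_prime, x_c)
+
+# Figure 2 example: only the "!" -> "?" correction is applied in the intermediate sentence.
+tokens = "How oldest are you !".split()
+edits = [(1, 2, ["old"]), (4, 5, ["?"])]   # assumed reference corrections
+pairs = build_tmtc_instances(tokens, edits, first_turn_edits=[(4, 5, ["?"])])
+```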
+
+Based on the findings mentioned in Section §4, we apply our approach to design three training strategies: APPEND-first, DELETE-first and REPLACE-first. Here, the ACTION-first strategy means that the model is trained to learn ACTION corrections in the first turn and then the others in the second turn. For example, when using the DELETE-first strategy, we only correct the errors with "$DELETE" as target labels, keeping the others unchanged, during the construction of intermediate sentences. Using the additionally-constructed training instances involving these sentences, the trained model will be encouraged to focus on performing corrections first via $DELETE. Table 4 lists the numbers of additionally-constructed training instances using these strategies. According to our findings concluded in Section §4, the models trained using the APPEND-first and REPLACE-first strategies should perform better.
+
+Using our approach, we adopt different objectives to successively train our model. Specifically,
+
+| Strategy | #Additional Instance |
+| --- | --- |
+| RANDOM | 367,814 |
+| APPEND-first | 311,348 |
+| DELETE-first | 326,100 |
+| REPLACE-first | 296,683 |
+
+Table 4: Numbers of additionally-constructed training instances. We also explore the training strategy that randomly corrects partial errors first. For convenience, we name this training strategy as RANDOM.
+
+we define the following training objectives $L_{c}^{(1)}$ and $L_{c}^{(2)}$ in the first and second turns, respectively:
+
+$$
+L_{c}^{(1)} = -\sum_{i=1}^{N} \mathbb{1}\left(t_{i}^{\prime} = t_{i}\right) \cdot \log p\left(t_{i}^{\prime} \mid X_{e}, \theta\right), \tag{3}
+$$
+
+$$
+L_{c}^{(2)} = -\sum_{i=1}^{\bar{N}} \log p\left(\bar{t}_{i} \mid X^{\prime}, \theta\right), \tag{4}
+$$
+
+where $\{t_i^{\prime}\}_{i = 1}^{N}$ and $\{\bar{t}_i\}_{i = 1}^{\bar{N}}$ are the editing-action label sequences of the additionally-constructed training instances $(X_{e},X^{\prime})$ and $(X^{\prime},X_{c})$, respectively.
+
+Notably, there remain some grammatical errors within intermediate sentences that should not be learned by the model in the first turn. Therefore, we omit the incorrect supervision signals in the definition of $L_{c}^{(1)}$ via an indicator function $\mathbb{1}(*)$, which is used to shield the effect of incorrect losses. However, our additionally-constructed training instances contain fewer grammatical errors than the original ones, which causes the trained model to correct fewer errors. To address this defect, we still use the original training instances to continuously train the model in the third turn.
+
+Formally, we use all training instances to train our model with the overall objective $L' = L_c^{(1)} + L_c^{(2)} + L$. Our experimental results presented in Section §6 show that our additionally-constructed training instances and the original ones are complementary to each other.
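+
+A sketch of the first-turn loss with the indicator masking of Eq. (3) is shown below; it only conveys the masking idea under an assumed per-sentence logits/labels layout and is not the released training code:
+
+```python
+import torch.nn.functional as F
+
+def first_turn_loss(edit_logits, labels_to_x_prime, labels_to_x_c, ignore_index=-100):
+    """Eq. (3): cross-entropy over editing-action labels of (X_e, X'), where positions
+    whose label toward X' differs from the label toward X_c (i.e., errors left
+    uncorrected in X') are masked out via the indicator 1(t'_i = t_i).
+    edit_logits: [N, num_labels]; labels_*: [N] tensors of label ids."""
+    masked = labels_to_x_prime.clone()
+    masked[labels_to_x_prime != labels_to_x_c] = ignore_index
+    return F.cross_entropy(edit_logits, masked, ignore_index=ignore_index)
+```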
+
+# 6 Experiment
+
+# 6.1 Setup
+
+To ensure a fair comparison, we train the GECToR models using the same training datasets and parameters as Omelianchuk et al. (2020), and then evaluate them on the BEA-2019 (W&I+LOCNESS) dev and test sets and the CoNLL-2014 test set. The details of the training data are listed in Table 2. Following Omelianchuk et al. (2020), we conduct
+
+| Model | Pre-trained | BEA-2019 (dev) Prec. | BEA Rec. | BEA F0.5 | CoNLL-2014 (test) Prec. | CoNLL Rec. | CoNLL F0.5 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| GECToR(Omelianchuk et al., 2020)† | RoBERTa | 50.30 | 30.50 | 44.50 | 67.50 | 38.30 | 58.60 |
+| | XLNet | 47.10 | 34.20 | 43.80 | 64.60 | 42.60 | 58.50 |
+| GECToR | RoBERTa | 49.80 | 37.61 | 46.77 | 66.56 | 45.08 | 60.77 |
+| | XLNet | 45.55 | 39.81 | 44.27 | 64.04 | 48.67 | 60.24 |
+| GECToR(RANDOM) | RoBERTa | 52.88 | 36.05 | 48.37 (+1.60) | 69.54 | 44.32 | 62.43 (+1.66) |
+| GECToR(APPEND-first) | RoBERTa | 54.92 | 35.30 | 49.43 (+2.66) | 70.73 | 43.88 | 63.01 (+2.24) |
+| GECToR(DELETE-first) | RoBERTa | 53.85 | 35.13 | 48.67 (+1.90) | 70.57 | 42.78 | 62.45 (+1.68) |
+| GECToR(REPLACE-first) | RoBERTa | 54.78 | 34.82 | 49.14 (+2.37) | 70.2 | 43.92 | 62.70 (+1.93) |
+| GECToR(RANDOM) | XLNet | 49.74 | 38.47 | 46.99 (+2.72) | 67.41 | 46.68 | 61.91 (+1.67) |
+| GECToR(APPEND-first) | XLNet | 51.10 | 37.72 | 47.71 (+3.44) | 67.74 | 46.39 | 62.03 (+1.79) |
+| GECToR(DELETE-first) | XLNet | 50.48 | 37.49 | 47.21 (+2.97) | 67.33 | 46.42 | 61.79 (+1.55) |
+| GECToR(REPLACE-first) | XLNet | 51.96 | 37.19 | 48.14 (+3.87) | 69.36 | 46.30 | 63.08 (+2.84) |
+
+Table 5: Results of models in the dataset setting of Stage II Only. † indicates scores reported in previous papers.
+
+| Model | Pre-trained | BEA-2019 (test) Prec. | BEA Rec. | BEA F0.5 | CoNLL-2014 (test) Prec. | CoNLL Rec. | CoNLL F0.5 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Dual-boost(Ge et al., 2018)† | - | - | - | - | 64.47 | 30.48 | 52.72 |
+| GECToR(Omelianchuk et al., 2020)† | RoBERTa | 77.2 | 55.1 | 71.5 | 72.1 | 42.0 | 63.0 |
+| | XLNet | 79.2 | 53.9 | 72.4 | 77.5 | 40.1 | 65.3 |
+| GECToR(GST)(Parnow et al., 2021)† | RoBERTa | 77.5 | 55.7 | 71.9 | 74.1 | 42.2 | 64.4 |
+| | XLNet | 79.4 | 54.5 | 72.8 | 78.4 | 39.9 | 65.7 |
+| SAD(12+2)(Sun et al., 2021)† | BART | - | - | 72.9 | 71.0 | 52.8 | 66.4 |
+| GECToR | RoBERTa | 78.02 | 53.49 | 71.53 | 72.93 | 40.02 | 63.11 |
+| | XLNet | 80.23 | 51.76 | 72.36 | 77.63 | 40.11 | 65.57 |
+| GECToR(RANDOM) | RoBERTa | 79.85 | 51.53 | 71.94 (+0.41) | 75.39 | 41.57 | 64.84 (+1.73) |
+| GECToR(APPEND-first) | RoBERTa | 80.31 | 51.14 | 72.08 (+0.55) | 76.77 | 40.95 | 65.34 (+2.23) |
+| GECToR(DELETE-first) | RoBERTa | 79.39 | 52.25 | 71.92 (+0.39) | 75.70 | 39.85 | 64.16 (+1.05) |
+| GECToR(REPLACE-first) | RoBERTa | 81.27 | 50.67 | 72.51 (+0.98) | 77.36 | 40.35 | 65.37 (+2.26) |
+| GECToR(RANDOM) | XLNet | 81.14 | 50.83 | 72.49 (+0.13) | 77.08 | 42.03 | 66.06 (+0.49) |
+| GECToR(APPEND-first) | XLNet | 81.89 | 50.55 | 72.85 (+0.49) | 78.18 | 42.67 | 67.02 (+1.45) |
+| GECToR(DELETE-first) | XLNet | 82.35 | 49.52 | 72.71 (+0.35) | 77.05 | 42.03 | 66.04 (+0.47) |
+| GECToR(REPLACE-first) | XLNet | 81.33 | 51.55 | 72.91 (+0.55) | 77.83 | 41.82 | 66.40 (+0.83) |
+
+Table 6: Results of models in the dataset setting of Three Stages of Training.
+
+experiments in two dataset settings: Stage II Only and Three Stages of Training. Notably, in the latter setting, we only apply our approach at Stage II and Stage III for efficiency. Finally, we evaluate the model performance in terms of the official ERRANT (Bryant et al., 2017) and $M^2$ (Dahlmeier and Ng, 2012) scorers, respectively.
+
+# 6.2 Main Results and Analysis
+
+Stage II Only. In this setting, we compare the performance of GECToR with and without applying our approach.$^{2}$
+
+Results are presented in Table 5. Notably, the results are consistent with our findings in Section §4. That is, since correcting some types of errors benefits the correction of other errors, all models trained with our approach perform significantly better than their corresponding baselines. Moreover, the GECToR models trained with the APPEND-first or REPLACE-first strategies are superior to the models trained with DELETE-first or RANDOM, echoing the conclusions in Section §4.
+
+Three Stages of Training. In this setting, we compare our enhanced models with more baselines in the single-model setting, including the most related work, Dual-boost (Ge et al., 2018) and GECToR(GST) (Parnow et al., 2021), and the current best NMT-based model, SAD(12+2) (Sun et al., 2021).
+
+As reported in Table 6, we obtain results similar to those in the Stage II Only setting. Our approach promotes models to obtain desirable improvements, where the APPEND-first and REPLACE-first strategies perform better. Overall, the GECToR models trained with our approach are comparable to or even better than SAD(12+2). Particularly, when ensembling our
+
+| Dataset | Strategy | Evaluation | BEA-2019 (dev) Num. | BEA Prec. | BEA Rec. | BEA F1 | CoNLL-2014 (test) Num. | CoNLL Prec. | CoNLL Rec. | CoNLL F1 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| D(APPEND) | APPEND-first | $DELETE | 904 | 64.03 | 19.69 | 30.12 | 496 | 45.45 | 9.07 | 15.13 |
+| | | $REPLACE_t | 2079 | 52.54 | 19.38 | 28.32 | 660 | 34.83 | 9.39 | 14.80 |
+| D(APPEND✓) | APPEND-first | $DELETE | 904 | 79.17 | 33.63 | 47.20 (+17.08) | 496 | 68.18 | 18.15 | 28.66 (+13.53) |
+| | | $REPLACE_t | 2079 | 73.49 | 36.80 | 49.04 (+20.72) | 660 | 60.84 | 28.48 | 38.80 (+24.00) |
+| D(DELETE) | DELETE-first | $APPEND_t | 1024 | 54.31 | 22.75 | 32.07 | 332 | 24.53 | 11.75 | 15.89 |
+| | | $REPLACE_t | 1425 | 52.75 | 18.88 | 27.80 | 716 | 35.19 | 10.61 | 16.31 |
+| D(DELETE✓) | DELETE-first | $APPEND_t | 1024 | 60.28 | 25.49 | 35.83 (+3.76) | 332 | 30.32 | 14.16 | 19.30 (+3.41) |
+| | | $REPLACE_t | 1425 | 59.16 | 22.67 | 32.78 (+4.98) | 716 | 40.32 | 13.97 | 20.75 (+4.44) |
+| D(REPLACE) | REPLACE-first | $APPEND_t | 1762 | 55.32 | 27.13 | 36.41 | 443 | 28.74 | 16.03 | 20.58 |
+| | | $DELETE | 996 | 58.13 | 19.38 | 29.07 | 767 | 50.00 | 11.34 | 18.49 |
+| D(REPLACE✓) | REPLACE-first | $APPEND_t | 1762 | 73.57 | 47.56 | 57.77 (+21.36) | 443 | 53.82 | 42.89 | 47.74 (+27.16) |
+| | | $DELETE | 996 | 77.99 | 36.65 | 49.86 (+20.79) | 767 | 71.75 | 25.16 | 37.26 (+18.77) |
+
+Table 7: Results of our quantitative experiments using models enhanced by our approach. Three groups of experiments are conducted on the same data subset as Table 3.
+
+| Model | BEA-2019 (dev) Prec. | BEA Rec. | BEA F0.5 | CoNLL-2014 (test) Prec. | CoNLL Rec. | CoNLL F0.5 |
+| --- | --- | --- | --- | --- | --- | --- |
+| GECToR | 49.80 | 37.61 | 46.77 | 66.56 | 45.08 | 60.77 |
+| w/ TMTC | 54.92 | 35.30 | 49.43 | 70.73 | 43.88 | 63.01 |
+| w/o turn 1 | 51.29 | 37.01 | 47.03 | 68.99 | 45.45 | 62.51 |
+| w/o turn 2 | 50.43 | 37.3 | 47.12 | 66.94 | 44.60 | 61.31 |
+| w/o original | 55.21 | 32.5 | 48.44 | 71.22 | 41.55 | 62.32 |
+| mix data | 53.04 | 31.00 | 46.44 | 71.31 | 40.59 | 61.84 |
+| w/o $\mathbb{1}(*)$ | 53.23 | 33.49 | 47.62 | 71.31 | 42.16 | 62.64 |
+
+Table 8: Ablation study. Our model is based on RoBERTa and trained using APPEND-first. The $\mathbb{1}(*)$ is the indicator function mentioned in Equation 3.
+
+enhanced models with competitive GEC models, we obtain a $77.93$ $\mathrm{F}_{0.5}$ score, achieving the SOTA on the BEA-2019 test set.
+
+Moreover, we find that our approach allows the trained models to correct more cautiously. That is, the trained models tend to perform fewer but more precise corrections, compared with the basic GECToR models. One of the underlying reasons is that our additionally-constructed training instances contain more $KEEP labels, especially in the second turn, which biases the label predictions of the model.
+
+# 6.3 Ablation Study
+
+We then conduct more experiments to investigate the effectiveness of various details of our proposed approach.
+
+All experimental results are provided in Table 8. Results of lines 3-5 ("w/o turn 1", "w/o turn 2", "w/o original") demonstrate that our additionally-constructed training instances are complementary to original ones. In addition, we also directly mix the additionally-constructed training instances and
+
+
+Figure 3: Label predictions of the RoBERTa-based model on the BEA-2019 dev set in the first iteration of prediction.
+
+the original ones to train a GECToR model. However, such a training strategy does not promote the model to learn much better, showing the advantage of learning error corrections gradually. Finally, as mentioned in Section §5, some grammatical errors should not be learned within intermediate sentences. Here, we also report the performance of the GECToR model without omitting incorrect supervision signals. As shown in line 7 ("w/o $\mathbb{1}(*)$") of Table 8, the lower recall values indicate that these incorrect $KEEP labels make the model infer more conservatively.
+
+# 6.4 Analysis
+
+Correction Trend. Here, we use the models trained under different strategies to not only evaluate the one-iteration performance with respect to the three investigated types of labels, but also conduct the quantitative experiments again. By doing so, we can investigate whether our approach indeed guides the model to correct some types of errors first.
+
+| Model | BEA-2019 (dev) Prec. | BEA Rec. | BEA F0.5 | CoNLL-2014 (test) Prec. | CoNLL Rec. | CoNLL F0.5 |
+| --- | --- | --- | --- | --- | --- | --- |
+| GECToR | 49.80 | 37.61 | 46.77 | 66.56 | 45.08 | 60.77 |
+| GECToR(APP+REP+DEL) | 59.26 | 31.70 | 50.48 | 74.08 | 40.37 | 63.48 |
+| GECToR(APP+DEL+REP) | 58.38 | 32.06 | 50.15 | 73.26 | 40.89 | 63.24 |
+| GECToR(REP+APP+DEL) | 57.75 | 30.95 | 49.23 | 74.36 | 39.19 | 63.05 |
+| GECToR(REP+DEL+APP) | 57.72 | 31.44 | 49.66 | 73.86 | 39.87 | 62.88 |
+| GECToR(DEL+APP+REP) | 59.13 | 31.52 | 50.04 | 74.28 | 39.61 | 63.15 |
+| GECToR(DEL+REP+APP) | 58.51 | 31.83 | 50.18 | 73.34 | 40.55 | 63.06 |
+
+Table 9: Results of more fine-grained strategies. Experiments are conducted with the RoBERTa-based model trained in the Stage II Only setting.
+
+
+Figure 4: The precision, recall and $\mathrm{F}_{0.5}$ values with respect to different correction ratios.
+
+As shown in Figure 3, we find our strategies indeed guide the model to correct the corresponding errors more precisely in the first iteration. Meanwhile, the fewer but more precise predictions occur again with respect to the corresponding labels. For example, when only considering the model performance with respect to $APPEND_t, we observe that the model trained by APPEND-first obtains the highest precision score.
+
+More importantly, back to Table 7, the phenomenon that correcting some types of errors benefits the others is highlighted. It indicates that our approach indeed allows the trained model to capture the interdependence between different types of corrections.
+
+Effect of Correction Ratio. As described in Section §5, the correction ratio is an important hyper-parameter that determines the number of manual corrections. Thus, we try different correction ratio values to investigate their effect on our approach. Figure 4 shows the performance of the trained model with varying correction ratios. Apparently, as the correction ratio increases, the precision score drops and the recall score rises. By contrast, the overall $\mathrm{F}_{0.5}$ scores remain steady.
+
+Effect of More Turns of Corrections. The above experimental results show that decompos-
+
+
+Figure 5: The $\mathrm{F}_{0.5}$ scores of GECToR(RANDOM) with more turns of corrections.
+
+ing the conventional one-iteration training into two-turn training is useful for improving model training. A natural question arises: can the trained model be further improved if we use more turns of training?
+
+To answer this question, we use the model trained by the RANDOM strategy to conduct experiments. Specifically, we decompose the one-iteration corrections into $K$ turns of corrections, where we construct intermediate sentences by cumulatively correcting $\frac{1}{K}$ of the errors in each turn. From Figure 5, we can observe that more turns of corrections do not benefit our models over two-turn corrections under the RANDOM strategy, while incurring more training cost.
+
+Also, we conduct experiments using more fine-grained strategies. For example, we can design a training strategy in which, after learning corrections of $APPEND_t, the model learns to correct errors of $REPLACE_t and then to correct the others. For convenience, we name this strategy APP+REP+DEL, where APP, REP and DEL are abbreviations of $APPEND_t, $REPLACE_t and $DELETE, respectively. As illustrated in Table 9, all models trained with our approach obtain slightly better performance when introducing more iterations of corrections. However, they require almost $1.5\mathrm{x}$ the training time of our standard TMTC approach.
+
+# 7 Conclusion
+
+In this paper, we have first conducted quantitative experiments to explore the interdependence between different types of corrections, finding that performing some types of corrections first, such as appending or replacing words, helps models correct other errors. Furthermore, we propose a Type-Driven Multi-Turn Corrections (TMTC) approach for GEC, which allows the trained model not only to be explicitly aware of progressive corrections, but also to exploit the interdependence between different types of corrections. Extensive experiments show that our enhanced model obtains comparable or better performance than the SOTA GEC model.
+
+In the future, we plan to apply bidirectional decoding (Zhang et al., 2018; Su et al., 2019; Zhang et al., 2019) to further improve our approach. Besides, inspired by recent syntax-aware research (Li et al., 2021), we will explore the interdependence between corrections for GEC from other perspectives, such as syntax.
+
+# Acknowledgment
+
+The project was supported by the National Key Research and Development Program of China (No. 2020AAA0108004), the National Natural Science Foundation of China (No. 61672440), the Natural Science Foundation of Fujian Province of China (No. 2020J06001), and the Youth Innovation Fund of Xiamen (No. 3502Z20206059). We also thank the reviewers for their insightful comments.
+
+# References
+
+Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In EMNLP-IJCNLP, pages 4260-4270.
+Adriane Boyd. 2018. Using wikipedia edits in low resource grammatical error correction. In NUT@EMNLP, pages 79-84.
+Christopher Bryant, Mariano Felice, Øistein E. Andersen, and Ted Briscoe. 2019. The BEA-2019 shared task on grammatical error correction. In *BEA@ACL*, pages 52–75.
+Christopher Bryant, Mariano Felice, and Ted Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. In ACL, pages 793-805.
+Mengyun Chen, Tao Ge, Xingxing Zhang, Furu Wei, and Ming Zhou. 2020. Improving the efficiency of grammatical error correction with erroneous span detection and correction. In EMNLP, pages 7162-7169.
+Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In NAACL, pages 568-572.
+Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS corpus of learner English. In BEA@NAACL-HLT, pages 22-31.
+Tao Ge, Furu Wei, and M. Zhou. 2018. Fluency boost learning and inference for neural grammatical error correction. In ACL, pages 1055-1065.
+M. Ghufron and Fathia Rosyida. 2018. The role of Grammarly in assessing English as a foreign language (EFL) writing. *Lingua Cultura*, 12:395-403.
+Roman Grundkiewicz, Marcin Junczys-Dowmunt, and Kenneth Heafield. 2019. Neural grammatical error correction systems with unsupervised pre-training on synthetic data. In *BEA@ACL*, page 252–263.
+Michael Heilman, Aoife Cahill, Nitin Madnani, Melissa Lopez, Matthew Mulholland, and Joel Tetreault. 2014. Predicting grammaticality on an ordinal scale. In ACL, pages 174-180.
+Clare-Marie Karat, Christine Halverson, Daniel B. Horn, and John Karat. 1999. Patterns of entry and correction in large vocabulary continuous speech recognition systems. In CHI '99, pages 568-575.
+Marek Kubis, Zygmunt Vetulani, Mikolaj Wypych, and Tomasz Zietkiewicz. 2020. Open challenge for correcting errors of speech recognition systems. ArXiv, abs/2001.03041.
+Zhongli Li, Qingyu Zhou, Chao Li, Ke Xu, and Yunbo Cao. 2021. Improving BERT with syntax-aware local attention. In Findings of ACL, pages 645-653.
+Jared Lichtarge, Christopher Alberti, Shankar Kumar, Noam M. Shazeer, Niki Parmar, and Simon Tong. 2019. Corpora generation for grammatical error correction. In *NAACL*, pages 3291-3301.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
+Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, tag, realize: High-precision text editing. In EMNLP-IJCNLP, pages 5054-5065.
+
+Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017a. JFLEG: A fluency corpus and benchmark for grammatical error correction. In EACL, pages 229-234.
+Courtney Napoles, Keisuke Sakaguchi, and Joel R. Tetreault. 2017b. JFLEG: A fluency corpus and benchmark for grammatical error correction. In EACL, pages 229-234.
+Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14.
+Kostiantyn Omelianchuk, Vitaliy Atrasevych, Artem Chernodub, and Oleksandr Skurzhanskyi. 2020. GECToR - grammatical error correction: Tag, not rewrite. In *BEA@ACL*, pages 163-170.
+Kevin Parnow, Zuchao Li, and Hai Zhao. 2021. Grammatical error correction as gan-like sequence labeling. In Findings of ACL, pages 3284-3290.
+Linfeng Song, Ante Wang, Jinsong Su, Yue Zhang, Kun Xu, Yubin Ge, and Dong Yu. 2020. Structural information preserving for graph-to-text generation. In ACL, pages 7987-7998.
+Felix Stahlberg and Shankar Kumar. 2020. Seq2Edits: Sequence transduction using span-level edit operations. In EMNLP, pages 5147-5159.
+Jinsong Su, Xiangwen Zhang, Qian Lin, Yue Qin, Junfeng Yao, and Yang Liu. 2019. Exploiting reverse target-side contexts for neural machine translation via asynchronous bidirectional decoding. Artif. Intell., 277:103168.
+Xin Sun, Tao Ge, Furu Wei, and Houfeng Wang. 2021. Instantaneous grammatical error correction with shallow aggressive decoding. In ACL/IJCNLP, pages 5937-5947.
+Toshikazu Tajiri, Mamoru Komachi, and Yuji Matsumoto. 2012. Tense and aspect error correction for ESL learners using global context. In ACL, pages 198-202.
+Yujin Takahashi, Satoru Katsumata, and Mamoru Komachi. 2020. Grammatical error correction using pseudo learner corpus considering learner's error tendency. In ACL SRW, pages 27-32.
+Haoyu Wang, Shuyan Dong, Yue Liu, James Logan, Ashish Kumar Agrawal, and Yang Liu. 2020. Asr error correction with augmented transformer for entity retrieval. In INTERSPEECH, pages 1550-1554.
+Lihao Wang and Xiaoqing Zheng. 2020. Improving grammatical error correction models with purpose-built adversarial examples. In EMNLP, pages 2858-2869.
+
+Kun Xu, Linfeng Song, Yansong Feng, Yan Song, and Dong Yu. 2020. Coordinated reasoning for crosslingual knowledge graph alignment. In AAAI, pages 9354-9361.
+Shuyao Xu, Jiehao Zhang, Jin Chen, and Longlu Qin. 2019. Erroneous data generation for grammatical error correction. In *BEA@ACL*, pages 149-158.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS, pages 5754-5764.
+Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In ACL, pages 180-189.
+Biao Zhang, Deyi Xiong, Jinsong Su, and Jiebo Luo. 2019. Future-aware knowledge distillation for neural machine translation. TASLP, 27(12):2278-2287.
+Xiangwen Zhang, Jinsong Su, Yue Qin, Yang Liu, Rongrong Ji, and Hongji Wang. 2018. Asynchronous bidirectional decoding for neural machine translation. In AAAI, pages 5698-5705.
+Zewei Zhao and Houfeng Wang. 2020. Maskgec: Improving neural grammatical error correction via dynamic masking. In AAAI, pages 1226-1233.
+
+# A Appendix
+
+# A.1 Quantitative Experiments on XLNet
+
+We also conduct the quantitative experiments described in Section §4 using a model based on XLNet. The overall results, shown in Table 10, are very similar to those in Table 3, which indicates that our findings and conclusions are not specific to a particular model or dataset, but hold across realistic datasets of human-made errors.
+
+# A.2 Evaluation on JFLEG
+
+As suggested by reviewers, we evaluate our approach on the JFLEG dataset (Napoles et al., 2017a), which focuses on fluency. As shown in Table 11 and Table 12, models trained with our approach obtain higher GLEU scores (Heilman et al., 2014) than the baselines, which demonstrates the effectiveness of decomposing one-iteration correction into multiple turns. However, the editing-action-based interdependence does not seem very beneficial from the perspective of fluency.
+
+| Dataset | Evaluation | BEA-2019 (dev) Num. | BEA-2019 (dev) Prec. | BEA-2019 (dev) Rec. | BEA-2019 (dev) F1 | CoNLL-2014 (test) Num. | CoNLL-2014 (test) Prec. | CoNLL-2014 (test) Rec. | CoNLL-2014 (test) F1 |
+| Original Dataset | $APPEND_t | 2609 | 50.61 | 38.06 | 43.45 | 621 | 24.4 | 26.09 | 25.21 |
+| | $DELETE | 1403 | 52.79 | 25.66 | 34.53 | 1115 | 49.65 | 19.01 | 27.50 |
+| | $REPLACE_t | 3495 | 49.10 | 24.12 | 32.35 | 1398 | 37.06 | 20.89 | 26.72 |
+| D(APPEND) | $DELETE | 904 | 61.89 | 21.02 | 31.38 | 496 | 46.34 | 15.32 | 23.03 |
+| | $REPLACE_t | 2079 | 50.65 | 20.68 | 29.37 | 660 | 32.30 | 14.24 | 19.77 |
+| D(APPEND✓) | $DELETE | 904 | 72.66 | 30.86 | 43.32 (+11.94) | 496 | 68.18 | 18.15 | 28.66 (+5.63) |
+| | $REPLACE_t | 2079 | 67.13 | 36.84 | 47.58 (+18.21) | 660 | 60.84 | 28.48 | 38.80 (+19.03) |
+| D(DELETE) | $APPEND_t | 1024 | 50.27 | 27.44 | 35.50 | 332 | 18.09 | 16.57 | 17.30 |
+| | $REPLACE_t | 1425 | 49.57 | 20.00 | 28.50 | 716 | 28.12 | 14.80 | 19.40 |
+| D(DELETE✓) | $APPEND_t | 1024 | 54.91 | 28.42 | 37.45 (+1.95) | 332 | 30.32 | 14.16 | 19.30 (+2.00) |
+| | $REPLACE_t | 1425 | 51.40 | 21.89 | 30.71 (+2.21) | 716 | 40.32 | 13.97 | 20.75 (+1.35) |
+| D(REPLACE) | $APPEND_t | 1762 | 55.32 | 31.38 | 38.85 | 443 | 20.28 | 19.86 | 20.07 |
+| | $DELETE | 996 | 56.37 | 20.88 | 30.48 | 767 | 45.16 | 16.43 | 24.09 |
+| D(REPLACE✓) | $APPEND_t | 1762 | 65.47 | 50.91 | 57.28 (+18.43) | 443 | 53.82 | 42.89 | 47.74 (+27.67) |
+| | $DELETE | 996 | 70.89 | 35.94 | 47.70 (+17.22) | 767 | 71.75 | 25.16 | 37.26 (+16.51) |
+
+Table 10: Results of our control experiment based on XLNet. Four groups of results are obtained by the same re-implemented GECToR model.
+
+| Model | Pre-trained | BEA-2019 (dev) Prec. | BEA-2019 (dev) Rec. | BEA-2019 (dev) F0.5 | CoNLL-2014 (test) Prec. | CoNLL-2014 (test) Rec. |
+| GECToR(Omelianchuk et al., 2020)† | RoBERTa | 50.30 | 30.50 | 44.50 | 67.50 | 38.30 |
+| | XLNet | 47.10 | 34.20 | 43.80 | 64.60 | 42.60 |
+| GECToR | RoBERTa | 49.80 | 37.61 | 46.77 | 66.56 | 45.08 |
+| | XLNet | 45.55 | 39.81 | 44.27 | 64.04 | 48.67 |
+| GECToR(RANDOM) | RoBERTa | 52.88 | 36.05 | 48.37 (+1.60) | 69.54 | 44.32 |
+| GECToR(APPEND-first) | RoBERTa | 54.92 | 35.30 | 49.43 (+2.66) | 70.73 | 43.88 |
+| GECToR(DELETE-first) | RoBERTa | 53.85 | 35.13 | 48.67 (+1.90) | 70.57 | 42.78 |
+| GECToR(REPLACE-first) | RoBERTa | 54.78 | 34.82 | 49.14 (+2.37) | 70.2 | 43.92 |
+| GECToR(RANDOM) | XLNet | 49.74 | 38.47 | 46.99 (+2.72) | 67.41 | 46.68 |
+| GECToR(APPEND-first) | XLNet | 51.10 | 37.72 | 47.71 (+3.44) | 67.74 | 46.39 |
+| GECToR(DELETE-first) | XLNet | 50.48 | 37.49 | 47.21 (+2.97) | 67.33 | 46.42 |
+| GECToR(REPLACE-first) | XLNet | 51.96 | 37.19 | 48.14 (+3.87) | 69.36 | 46.30 |
+
+Table 11: Results of models under the Stage II Only setting. $\dagger$ indicates scores reported in previous papers.
+
+| Model | Pre-trained | BEA-2019 (test) Prec. | BEA-2019 (test) Rec. | BEA-2019 (test) F0.5 | CoNLL-2014 (test) Prec. | CoNLL-2014 (test) Rec. | CoNLL-2014 (test) F0.5 | JFLEG (test) GLEU |
+| Dual-boost (Ge et al., 2018)† | - | - | - | - | 64.47 | 30.48 | 52.72 | - |
+| GECToR(Omelianchuk et al., 2020)† | RoBERTa | 77.2 | 55.1 | 71.5 | 72.1 | 42.0 | 63.0 | - |
+| | XLNet | 79.2 | 53.9 | 72.4 | 77.5 | 40.1 | 65.3 | - |
+| GECToR(GST)(Parnow et al., 2021)† | RoBERTa | 77.5 | 55.7 | 71.9 | 74.1 | 42.2 | 64.4 | - |
+| | XLNet | 79.4 | 54.5 | 72.8 | 78.4 | 39.9 | 65.7 | - |
+| SAD(12+2)(Sun et al., 2021)† | BART | - | - | 72.9 | 71.0 | 52.8 | 66.4 | - |
+| GECToR | RoBERTa | 78.02 | 53.49 | 71.53 | 72.93 | 40.02 | 63.11 | 42.96 |
+| | XLNet | 80.23 | 51.76 | 72.36 | 77.63 | 40.11 | 65.57 | 43.11 |
+| GECToR(RANDOM) | RoBERTa | 79.85 | 51.53 | 71.94 (+0.41) | 75.39 | 41.57 | 64.84 (+1.73) | 59.05 |
+| GECToR(APPEND-first) | RoBERTa | 80.31 | 51.14 | 72.08 (+0.55) | 76.77 | 40.95 | 65.34 (+2.23) | 58.88 |
+| GECToR(DELETE-first) | RoBERTa | 79.39 | 52.25 | 71.92 (+0.39) | 75.70 | 39.85 | 64.16 (+1.05) | 58.94 |
+| GECToR(REPLACE-first) | RoBERTa | 81.27 | 50.67 | 72.51 (+0.98) | 77.36 | 40.35 | 65.37 (+2.26) | 59.03 |
+| GECToR(RANDOM) | XLNet | 81.14 | 50.83 | 72.49 (+0.13) | 77.08 | 42.03 | 66.06 (+0.49) | 58.73 |
+| GECToR(APPEND-first) | XLNet | 81.89 | 50.55 | 72.85 (+0.49) | 78.18 | 42.67 | 67.02 (+1.45) | 58.64 |
+| GECToR(DELETE-first) | XLNet | 82.35 | 49.52 | 72.71 (+0.35) | 77.05 | 42.03 | 66.04 (+0.47) | 58.45 |
+| GECToR(REPLACE-first) | XLNet | 81.33 | 51.55 | 72.91 (+0.55) | 77.83 | 41.82 | 66.40 (+0.83) | 58.42 |
+
+Table 12: Results of models under the Three Stages of Training setting.
\ No newline at end of file
diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/images.zip b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9ad7e97165213cce82d14e27ffe1f1533124e351
--- /dev/null
+++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b983434ad44a57467a2888c46aa5d537e025d0bf936a2c9d345dacae966488a
+size 1175580
diff --git a/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/layout.json b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c3a554f0afef10b67839c2d3283b868943a06e37
--- /dev/null
+++ b/typedrivenmultiturncorrectionsforgrammaticalerrorcorrection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27088c78b2ee78692ce1698d8092fdf5810fec93aae246cf256c0e1a8bf34663
+size 347441
diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_content_list.json b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..69c58a25322f492cd6352c80e4fc7f7c23ace7fd
--- /dev/null
+++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2081c5df1ea753cf30355f4fa21dca18b1b8e4459625fa5fe94a036ddcbf7c0a
+size 47350
diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_model.json b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1504b47e3e557b5d636b65a68e47b16524d5221b
--- /dev/null
+++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8559716abcf61eca9d3383273ab9929c36f5040b1f7f928c9b76ef261d5db816
+size 54342
diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_origin.pdf b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b9006e64e09dd8070add8097ed11ed594888fcea
--- /dev/null
+++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/0f71414d-6600-48dc-a66b-696b19438140_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8fe5cb6bd62994a21177c007633682ec69c10d31a2a339a5fee0da363ee6f6b
+size 470632
diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/full.md b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..09127d9fdb9038393ba296f6cd1ed9f51a32eeed
--- /dev/null
+++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/full.md
@@ -0,0 +1,195 @@
+# uFACT: Unfaithful Alien-Corpora Training for Semantically Consistent Data-to-Text Generation
+
+Tisha Anders, Alexandru Coca, Bill Byrne
+
+Department of Engineering, University of Cambridge, United Kingdom
+
+anderstisha@gmail.com ac2123@cam.ac.uk wjb31@cam.ac.uk
+
+# Abstract
+
+We propose uFACT (Un-Faithful Alien Corpora Training), a training corpus construction method for data-to-text (d2t) generation models. We show that d2t models trained on uFACT datasets generate utterances which represent the semantic content of the data sources more accurately compared to models trained on the target corpus alone. Our approach is to augment the training set of a given target corpus with alien corpora which have different semantic representations. We show that while it is important to have faithful data from the target corpus, the faithfulness of additional corpora only plays a minor role. Consequently, uFACT datasets can be constructed with large quantities of unfaithful data, minimising the need for faithful data. We show how uFACT can be leveraged to obtain state-of-the-art results on the WebNLG benchmark using METEOR as our performance metric. Furthermore, we investigate the sensitivity of the generation faithfulness to the training corpus structure using the PARENT metric, and provide a baseline for this metric on the WebNLG (Gardent et al., 2017) benchmark to facilitate comparisons with future work.
+
+# 1 Introduction
+
+Data-to-text (d2t) generation is the task of generating fluent text $t$ given a set of information units, linearised into a data source string $d$ (Table 1).
+
+| d | {(name, Einstein), (born, 1879), (profession, physicist)} |
+| t | Einstein was a physicist, born in 1879. |
+
+Table 1: Example of d2t system input $(d)$ and output $(t)$
+
+Training high-quality generation models requires corpora whose reference texts are faithful to the data sources representing their semantic content, i.e. the reference texts $t_r$ should have perfect information overlap with $d$. Most corpora are, however, noisy, with imperfect fact overlap between data $d$ and reference text $t_r$ (Dhingra et al., 2019a). The quality of the training data in that case negatively impacts the performance of a d2t generator trained on it, as well as making it difficult to estimate the true accuracy of a generation $t_g$, given $t_r$ (Parikh et al., 2020). Faithful examples are however expensive to obtain, and usually only available in small quantities. In the context of this scarcity, we propose the UFACT training set construction method. UFACT allows a generator to learn a more accurate d2t generation model from a mixture of faithful and unfaithful corpora, which reduces the need for vast quantities of faithful examples. For instance, our best-performing UFACT dataset contains 88692 examples, of which only 20,000 (24.34%), the ones from the target corpus, are guaranteed to be faithful. We find that our approach leads to significant improvements in PARENT (Dhingra et al., 2019b) and METEOR (Banerjee and Lavie, 2005) compared to the conventional approach of training a d2t generator on one large unfaithful corpus. We conclude that even unfaithful examples from other corpora can contribute to fluency and faithfulness. Our UFACT-trained T5 surpasses state-of-the-art performance for METEOR on the WebNLG dataset.
+
+# 2 Related work
+
+Early approaches (Reiter and Dale, 1997) formalize d2t generation as three subtasks: content determination, structuring/grouping of information, and surface realisation. A handcrafted system is designed to solve each task. Recently, the focus has shifted towards end-to-end neural approaches, incorporating each of the subtasks into one system (Ferreira et al., 2019, Puduppully et al., 2018, Harkous et al., 2020).
+
+A number of end-to-end approaches to increasing faithfulness in d2t generation are curative, i.e. address generation quality post-hoc. For instance, Harkous et al. (2020) and Dušek and Kasner (2020) produce candidate generations first, and then judge faithfulness with a separate model, by checking entailment between $d$ and $t_{g}$. Another approach to enhance faithfulness is to alter the generation model. Chen et al. (2020b) propose a generation model comprised of a copy-generate gate within an LSTM positional encoder. The gate acts as a soft switch between a copy-from-data mode and a language-generation mode. Kale (2020) utilise transfer learning to enhance their generation model, through pre-training on a large unsupervised, task-agnostic corpus.
+
+A different line of research focuses on preventative approaches, where the typical aim is to obtain a better model by improving the training data quality. Chen et al. (2020a) apply a unigram-based dataset selection process, by removing examples for which $t_r$ is not sufficiently related to $d$ . Parikh et al. (2020) also investigate this approach, releasing the noise-free ToTTo dataset, to ensure the training data does not encourage unfaithful generation. Filippova (2020) look for hallucinative examples in their dataset, either considering word-overlap, or comparing how strongly a language model vs. a conditional language model anticipates subsequent text. Dhingra et al. (2019b) develop the PARENT metric, a faithfulness-quantifying F-score that takes into account the data source in addition to the potentially divergent reference, providing a more robust assessment of the d2t mapping.
+
+In their work on model-agnostic meta-learning, Finn et al. (2017) note that training on different instances of a required task (e.g., training on different corpora) can facilitate learning a particular task. Inspired by this approach, we add other corpora with different semantic representations to the training dataset. We find not only that adding corpora boosts the semantic faithfulness of the d2t generator, but also that said corpora need not necessarily satisfy stringent faithfulness requirements, unlike the target corpus.
+
+# 3 Constructing a UFACT dataset
+
+Typically, a d2t generation model is obtained by task-specific fine-tuning, where a large-scale pretrained model such as T5 (Raffel et al., 2019) is fine-tuned on a small corpus. UFACT, however, as an instance of mixed-corpus training, takes a different approach: examples from multiple corpora which do not share semantic representations are linearised and tagged to form a large training corpus. A UFACT dataset is comprised of a target dataset, for which we desire to maximise d2t generation fidelity, and alien corpora. The latter are d2t corpora that may differ thematically and structurally from the target corpus and whose role is to improve generation fidelity on the target corpus.
+
+# 3.1 Corpora included in the UFACT dataset
+
+The uFACT datasets we experiment with are constructed from three corpora which differ significantly in size, vocabulary, intended purpose, and linearisation technique. Figure 1 displays the relative sizes of the uFACT datasets (FU and FUU), their faithful counterparts (FF and FFF), as well as other dataset compositions examined.
+
+
+Figure 1: Dataset sizes. The target corpus is WebNLG. Here, U denotes unfaithful, describing a dataset that has not been curated, while F stands for faithful, indicating a dataset that has been filtered to increase the faithfulness of the references to the data sources. See Appendix A for dataset curation approaches.
+
+WebNLG examples consist of up to seven RDF-triplets (subject-predicate-object), which are atomic entities of a knowledge graph, linearised into a string. 15 topics appear, of which 10 are seen in training.
+
+WikiInfo2Text is based on slot-value pairs, imitating a table. Our WikiInfo2Text set (a subset of the original) comprises five topics (UK_place, Book, Automobile, Military_conflict & French_commune).
+
+ViGGO (Juraska et al., 2019), a gaming dialogue corpus, has simple vocabulary, with 9 dialogue acts and 14 video game attributes available. The semantic representation consists of one dialogue act and 1-8 video game attributes, expressed as slot-value pairs that allow for lists of multiple values.
+
+Table 2 shows a sample training point from each corpus.
+
+| d | webnlg: <s> Einstein <p> born <o> 1879 ; <s> Einstein <p> job <o> physicist |
| t | Einstein was a physicist, born in 1879. |
|
| d | wikiinfo: <name> H for Homicide && <author> S. Grafton && <series> Alpha Mysteries |
| t | H for Homicide, by S. Grafton, is part of the Alpha Mysteries series. |
|
| d | viggo: <request Explanation> (<rating>:[excellent], <genres>:[shooter, RTS]) |
| t | What is it about shooter and RTS games that you find so great? |
+
+It also shows that, in the joint dataset, the data source of every example, $d$, is prepended with a dataset-specific tag (webnlg:, wikiinfo:, viggo:). Tags are usually task-based (e.g., translate eng-to-ger:) and have been shown to be particularly effective with Transformer models (Ribeiro et al., 2021). Treating each dataset as a different instance of the d2t task, as in the meta-learning approach, the tags reveal an example's affiliation with a dataset.
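+
+As a rough illustration of this construction (the linearised strings and helper functions below are toy placeholders, not the released data or the authors' code), a mixed uFACT-style training set can be assembled by tagging each corpus and concatenating:
+
+```python
+import random
+
+TAGS = {"webnlg": "webnlg:", "wikiinfo": "wikiinfo:", "viggo": "viggo:"}
+
+def tag_corpus(name, examples):
+    """Prepend the dataset-specific tag to every linearised data source."""
+    return [(f"{TAGS[name]} {d}", t) for d, t in examples]
+
+def build_mixed_corpus(target, aliens, seed=0):
+    """Concatenate the (faithful) target corpus with (possibly unfaithful)
+    alien corpora and shuffle, as in mixed-corpus training."""
+    mixed = list(target)
+    for alien in aliens:
+        mixed.extend(alien)
+    random.Random(seed).shuffle(mixed)
+    return mixed
+
+webnlg = tag_corpus("webnlg", [("<s> Einstein <p> born <o> 1879", "Einstein was born in 1879.")])
+viggo = tag_corpus("viggo", [("<inform> (<rating>: [excellent])", "It is an excellent game.")])
+train_set = build_mixed_corpus(target=webnlg, aliens=[viggo])
+```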
+
+# 3.2 Assembling a uFACT dataset
+
+In summary, a UFACT dataset is a mixed corpus comprising a target (WebNLG) and alien datasets (WikiInfo2Text & ViGGO). The next section shows that while the target corpus should obey a maximum degree of faithfulness, the faithfulness of alien datasets plays a subordinate role. Therefore, in a UFACT dataset, the target corpus obeys the quality-over-quantity principle, whereas alien corpora prioritise quantity over quality.
+
+# 4 Experiments
+
+# 4.1 Experimental setup
+
+We fine-tune the pre-trained T5-base (Raffel et al., 2020) from HuggingFace$^2$ for one epoch with batch size 8. We report averages of 5 values, obtained from training the model with 5 different seeds. We measure METEOR, BLEU (up to 4-grams) and PARENT (Dhingra et al., 2019b), a metric specifically developed for d2t generation, considering both the reference text and the data source. PARENT uniquely assesses the faithfulness of the generation to the data source.
+
+
+Figure 2: PARENT scores for each T5 model instance (i.e. data configuration). 'FUU\t' is a UFACT dataset without tags.
+
+For computing PARENT, we use both the word-overlap $(\mathrm{P}(\mathrm{w}))$ and co-occurrence $(\mathrm{P}(\mathrm{c}))$ entailment models. All models are tested on the WebNLG test set, as in Harkous et al. (2020), to provide a fair comparison. The dataset compositions for the different experiments are given in Figure 1.
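+
+A minimal sketch of this fine-tuning setup, assuming the HuggingFace transformers and PyTorch APIs (the learning rate and the single-example step are our own simplifications; the paper trains for one epoch with batch size 8):
+
+```python
+import torch
+from transformers import T5ForConditionalGeneration, T5TokenizerFast
+
+tokenizer = T5TokenizerFast.from_pretrained("t5-base")
+model = T5ForConditionalGeneration.from_pretrained("t5-base")
+optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed learning rate
+
+def training_step(data_source, reference):
+    """One fine-tuning step on a single tagged d2t example."""
+    inputs = tokenizer(data_source, return_tensors="pt", truncation=True)
+    labels = tokenizer(reference, return_tensors="pt", truncation=True).input_ids
+    loss = model(**inputs, labels=labels).loss
+    loss.backward()
+    optimizer.step()
+    optimizer.zero_grad()
+    return loss.item()
+
+# e.g. a WebNLG-style example with its dataset tag prepended
+training_step("webnlg: <s> Einstein <p> born <o> 1879", "Einstein was born in 1879.")
+```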
+
+# 4.2 Effect of training dataset structure
+
+Table 3 and Figure 2 show the effect of the training set structure on the model performance.
+
+Table 2: Examples of the three d2t corpora. WebNLG consists of subject-predicate-object triplets, marked as such with <s>, <p>, <o>. WikiInfo2Text has slot-value pairs, with slot names in angle brackets and pairs separated by &&. ViGGO has limited vocabulary, but a hierarchical structure: a dialogue act (e.g., request Explanation) parametrized by slot-value pairs (e.g., [excellent]).
+
+ | Web. | Wik. | ViG. | P(w)↑ | P(c)↑ | M↑ | B↑ |
| 1 | U | - | - | 33.32 | 44.43 | 48.28 | 18.89 |
| 2 | F | - | - | 43.62 | 55.57 | 60.28 | 42.03 |
| 3 | F | F | - | 45.32 | 58.19 | 61.36 | 39.1 |
| 4 | F | F | F | 44.47 | 56.17 | 60.13 | 40.61 |
| 5 | F | U | - | 46.49 | 58.95 | 61.81 | 41.48 |
| 6 | F | U | U | 46.02 | 58.54 | 61.59 | 40.88 |
| 7 | F\t | U\t | U\t | 43.63 | 59.32 | 60.06 | 33.71 |
| 8 | U | F | F | 37.54 | 48.70 | 51.02 | 25.16 |
| 9 | U | U | U | 38.07 | 51.04 | 52.31 | 18.85 |
+
+Table 3: Experimental results for T5, with different dataset configurations. PARENT, METEOR and BLEU scores are measured for dataset configurations involving WebNLG (target), WikiInfo2Text (alien) & ViGGO (alien), respectively. {F,U}\t = no tags. All numbers reported are averages of the scores of 5 models.
+
+Training on single datasets (Table 3, rows 1-2) When training on the target dataset alone (i.e., WebNLG) a large performance boost is obtained on all metrics from using the faithful dataset WebNLG[F], despite the fact that it contains only $20\%$ of the examples in WebNLG[U] (Figure 1). This demonstrates the detrimental effect of unfaithful target datasets, which are commonly used, on d2t generation faithfulness. The METEOR score of 48.28 on WebNLG[U] is comparable to the range of $\sim 39 - 46$ reported in previous work (Ribeiro et al., 2021). Using faithful in-domain data has a large positive effect on all metrics (row 2).
+
+Addition of faithful alien corpora (rows 3-4) When augmenting the target corpus with faithful alien corpora (i.e. F-F & F-F-F), the training corpus size increases by factors of 1.88 and 1.90, respectively. As expected, performance increases on PARENT and METEOR, compared to faithful single-corpus training (F). However, F-F (i.e. just one alien dataset) outperforms F-F-F (two alien datasets). This may be due to the fact that ViGGO has a complex semantic representation diverging from the tuple/triplet representation in the other datasets, differs considerably in domain from WebNLG and WikiInfo2Text, and only represents $0.92\%$ of the F-F-F dataset (Figure 1). Therefore, it may act as too strong a regulariser during the training phase. The decrease in BLEU coupled with increases in METEOR and PARENT suggests that the generation model stays more faithful to the table, while also phrasing the sentence in its own way.
+
+Training on UFACT datasets (rows 5-6) Training on UFACT datasets F-U and F-U-U improves generator performance compared to training with the faithful counterparts (F-F & F-F-F) (rows 3-4). This increase shows that the faithfulness of alien datasets WikiInfo2Text and ViGGO plays a subordinate role, and the model instead benefits from the sheer number of fluent examples. However, with the addition of ViGGO[U] (row 6 vs. row 5), no metric score is boosted, suggesting a constraint on alien datasets in terms of how much domains and, potentially, semantic representation can differ.
+
+UFACT without tags (row 7) Training on the largest mixed corpus (F-U-U) without dataset-specific tags reduces every metric's score, with the exception of P(c) which increases by $1.33\%$ . Coupled with the decrease in P(w) and BLEU this suggests that the generated text contains less lexical overlap with the references.
+
+Can the target corpus be unfaithful? (rows 8-9) We have seen that the large unfaithful target corpus WebNLG[U] alone is the worst-performing dataset configuration. The addition of alien corpora in this case, unlike in previous experiments, does not lead to state-of-the-art-like performance. Metric scores stay significantly below any dataset with a faithful target corpus, including the UFACT datasets. The low performance in unfaithful-target-corpus configurations shows that the straightforward addition of alien corpora does not automatically result in desirable scores, and therefore justifies UFACT's quality-over-quantity principle for the target corpus.
+
+# 4.3 Analysis of UFACT efficacy
+
+The above results indicate that faithfulness in the target corpus should not be compromised, not even to gain a larger training set (see largest dataset U-U-U vs. smallest dataset F, or simply F vs. U). Furthermore, faithful alien corpora cannot compensate for unfaithful target corpora (e.g. U-F-F vs. F).
+
+While faithful examples are also desirable in alien datasets, the trade-off between performance and effort for faithful examples is such that faithfulness is not worth pursuing at any cost, seeing that F-U / F-U-U outperform F-F / F-F-F.
+
+The UFACT-method however insists on the target corpus being faithful.
+
+Models trained with $N = 2$ corpora outperform those with $N = 3$ in this paper, suggesting that adding corpora with significantly different domain coverage and semantic representations may be counterproductive when those corpora make up a tiny portion of the dataset. Subsequently, the regularising effect is mitigated in F-U-U, since the portion of ViGGO is higher (7.37%).
+
+Both METEOR, a reference-based metric, and PARENT(c/w), which takes both the reference and the data source into account, increase when training on uFACT datasets compared to conventional training (row 6 vs. 1). These increases suggest the data source is more accurately represented in the generated text. Therefore, uFACT provides a method of training better d2t models with increased semantic faithfulness. The efficacy of mixed-corpus training shows that pretrained language models are powerful enough to learn and benefit from several tasks at once, provided the tasks are similar enough and sufficiently represented in the training set.
+
+On WebNLG, uFACT achieves a new state-of-the-art result of 61.81 on METEOR (Ribeiro et al., 2021) (Table 4).
+
+| Author | Model/Method | M | B |
| Castro Ferreira et al. (2019) | UPF-FORGe | 39.00 | 38.65 |
| Harkous et al. (2020) | DATATUNER | 42.40 | 52.90 |
| Kale (2020) | T5-large | 44.00 | 61.44 |
| Moryossef et al. (2019) | StrongNeural | 39.20 | 46.5 |
| Schmitt et al. (2020) | Graformer | 43.38 | 61.15 |
| Zhao et al. (2020) | PLANENC | 41.00 | 52.78 |
| our paper | UFACT | 61.81 | 41.84 |
+
+Table 4: State-of-the-art results on WebNLG for METEOR and BLEU.
+
+The comparatively low BLEU scores, in combination with high METEOR scores, are arguably desirable, since $n$ -gram precision metric BLEU rewards simply copying from potentially unfaithful $t_r$ , whereas METEOR can also reward semantically equivalent rephrasings of $t_r$ . METEOR and BLEU results thus suggest high semantic overlap without copying. Meanwhile, UFACT datasets F-U-U and F-U achieve the highest PARENT scores (Table 3, rows 5-6), ensuring semantic overlap with both reference and data source.
+
+# 5 Conclusion
+
+We have presented the UFACT method, which boosts the faithfulness of data-to-text generation models by appropriately constructing the training corpus. Training T5 on a mixture of d2t corpora results in a strong increase in semantic accuracy, as long as the target corpus remains faithful. UFACT's lax constraints on the majority of the training set mitigate the scarcity problem of finding faithful d2t corpora, thus making faithful d2t generation more practically feasible. The new state-of-the-art METEOR score shows that language models alone, if trained with a carefully constructed dataset, can be highly effective data-to-text generators.
+
+# References
+
+Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.
+Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural data-to-text generation: A comparison between pipeline and end-to-end architectures. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 552-562, Hong Kong, China. Association for Computational Linguistics.
+Wenhu Chen, Yu Su, Xifeng Yan, and William Yang Wang. 2020a. KGPT: knowledge-grounded pretraining for data-to-text generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8635-8648. Association for Computational Linguistics.
+
+Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, and William Yang Wang. 2020b. Few-shot NLG with pre-trained language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 183-190, Online. Association for Computational Linguistics.
+Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019a. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884-4895, Florence, Italy. Association for Computational Linguistics.
+Bhuwan Dhingra, Manaal Faruqui, Ankur P. Parikh, Ming-Wei Chang, Dipanjan Das, and William W. Cohen. 2019b. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 4884-4895. Association for Computational Linguistics.
+Ondrej Dusek and Zdenek Kasner. 2020. Evaluating semantic accuracy of data-to-text generation with natural language inference. In Proceedings of the 13th International Conference on Natural Language Generation, pages 131-137, Dublin, Ireland. Association for Computational Linguistics.
+Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural data-to-text generation: A comparison between pipeline and end-to-end architectures. CoRR, abs/1908.09022.
+Katja Filippova. 2020. Controlled hallucinations: Learning to generate faithfully from noisy data. CoRR, abs/2010.05873.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126-1135. PMLR.
+Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, INLG 2017, Santiago de Compostela, Spain, September 4-7, 2017, pages 124-133. Association for Computational Linguistics.
+Hamza Harkous, Isabel Groves, and Amir Saffari. 2020. Have your text and use it too! end-to-end neural data-to-text generation with semantic fidelity. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2410-2424, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+
+Juraj Juraska, Kevin Bowden, and Marilyn Walker. 2019. ViGGO: A video game corpus for data-to-text generation in open-domain conversation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 164-172, Tokyo, Japan. Association for Computational Linguistics.
+
+Mihir Kale. 2020. Text-to-text pre-training for data-to-text tasks. CoRR, abs/2005.10433.
+
+Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267-2277, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Ankur Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, and Dipanjan Das. 2020. ToTTo: A controlled table-to-text generation dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1173-1186, Online. Association for Computational Linguistics.
+
+Ratish Puduppully, Li Dong, and Mirella Lapata. 2018. Data-to-text generation with content selection and planning. CoRR, abs/1809.00582.
+
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. CoRR, abs/1910.10683.
+
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+
+Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57-87.
+
+Leonardo F. R. Ribeiro, Martin Schmitt, Hinrich Schütze, and Iryna Gurevych. 2021. Investigating pretrained language models for graph-to-text generation. ArXiv, abs/2007.08426.
+
+Martin Schmitt, Leonardo F. R. Ribeiro, Philipp Dufter, Iryna Gurevych, and Hinrich Schütze. 2020. Modeling graph structure via relative position for better text generation from knowledge graphs. CoRR, abs/2006.09242.
+
+Chao Zhao, Marilyn Walker, and Snigdha Chaturvedi. 2020. Bridging the structural gap between encoding and decoding for data-to-text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2481-2491, Online. Association for Computational Linguistics.
+
+# A Obtaining faithful versions of the corpora
+
+# A.1 WebNLG & ViGGO
+
+For WebNLG and ViGGO, faithful examples were retrieved from Harkous et al. (2020), by selecting the semantic fidelity classifier training examples labelled as accurate.
+
+# A.2 WikiInfo2Text
+
+Slot-value pairs with slot names which are by default irrelevant to the text (e.g. img_size, or other website-specific meta-data) were excluded from the respective example.
+
+To be included in the training dataset, WikiInfo2Text examples had to obey two hand-crafted rules:
+
+1. Generation-to-data-source length ratio:
+
+- To prevent references from giving information beyond the data source, the number of characters in the generation was restricted, given the number of semantic components in the data source:
+
+$$
+\mathrm{len}(ref) < \tau \cdot \mathrm{num}_{\mathrm{datapts}}
+$$
+
+2. Overall reference text length:
+
+- To avoid hallucinative reference texts, the number of characters in the reference was restricted:
+
+$$
+\mathrm{len}(ref) < \lambda
+$$
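+
+The two rules amount to a simple length-based filter; a minimal sketch (taking the Table 5 values for the faithful split as defaults, and treating num_datapts as the number of semantic components in the data source) could look like:
+
+```python
+def keep_wikiinfo_example(reference, num_datapts, tau=60, lam=800):
+    """Length-based filter for WikiInfo2Text references.
+
+    reference: the reference text.
+    num_datapts: number of semantic components (slot-value pairs) in the data source.
+    tau, lam: thresholds; 60 and 800 are the WikiInfo2Text[F] values from Table 5.
+    """
+    return len(reference) < tau * num_datapts and len(reference) < lam
+```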
+
+Values for $\tau$ and $\lambda$ can be found in the table below. For WikiInfo2Text, we still perform some superficial cleaning to prevent extremely long examples from overloading the GPU.
+
+| Dataset | τ | λ |
+| WikiInfo2Text[F] | 60 | 800 |
+| WikiInfo2Text[U] | 150 | 1500 |
+
+Table 5: WikiInfo2Text cleaning parameter settings
\ No newline at end of file
diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/images.zip b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..555c038e1ccba1f6dd627781ca87910d4f01de15
--- /dev/null
+++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ccde4fee4dda608d596d47a69c838bdfa1e2f1b97fc8d3e8ac16c436f9e05563
+size 181407
diff --git a/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/layout.json b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..783e8ee8e0c2a1f1390572ca6bb21d691577c241
--- /dev/null
+++ b/ufactunfaithfulaliencorporatrainingforsemanticallyconsistentdatatotextgeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53a3c8827c6f87a1c5234636ab1a764ee4bff66711c29d1204a2fe554945ad63
+size 195908
diff --git a/unimo2endtoendunifiedvisionlanguagegroundedlearning/61996538-882d-4559-aaa4-21965154b1d5_content_list.json b/unimo2endtoendunifiedvisionlanguagegroundedlearning/61996538-882d-4559-aaa4-21965154b1d5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..59ef8d1f551250784b78066c49baf4d874a63ee9
--- /dev/null
+++ b/unimo2endtoendunifiedvisionlanguagegroundedlearning/61996538-882d-4559-aaa4-21965154b1d5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20fc3ad062f80b459eeb1bd54dbf46a8c94d04bae97e3f052ada3cd6fde536e3
+size 93026
diff --git a/unimo2endtoendunifiedvisionlanguagegroundedlearning/61996538-882d-4559-aaa4-21965154b1d5_model.json b/unimo2endtoendunifiedvisionlanguagegroundedlearning/61996538-882d-4559-aaa4-21965154b1d5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..9288ddbfc02622a7fd8077581c48bea5aa4cdbcc
--- /dev/null
+++ b/unimo2endtoendunifiedvisionlanguagegroundedlearning/61996538-882d-4559-aaa4-21965154b1d5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ddd79c836798a85b0d93e360820fbb2b8cadd1f97af4e2a185ac7f70187f2553
+size 112379
diff --git a/unimo2endtoendunifiedvisionlanguagegroundedlearning/61996538-882d-4559-aaa4-21965154b1d5_origin.pdf b/unimo2endtoendunifiedvisionlanguagegroundedlearning/61996538-882d-4559-aaa4-21965154b1d5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5f510385f066e6437ccc2421fe2cc3bf99af5ef4
--- /dev/null
+++ b/unimo2endtoendunifiedvisionlanguagegroundedlearning/61996538-882d-4559-aaa4-21965154b1d5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:770161ab16bfe3bb47b541713166ac7a06232c12e218667d28dbbe194571865e
+size 3196765
diff --git a/unimo2endtoendunifiedvisionlanguagegroundedlearning/full.md b/unimo2endtoendunifiedvisionlanguagegroundedlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..23757fa83f22278955073d5c342a966979958edd
--- /dev/null
+++ b/unimo2endtoendunifiedvisionlanguagegroundedlearning/full.md
@@ -0,0 +1,336 @@
+# UNIMO-2: End-to-End Unified Vision-Language Grounded Learning
+
+Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang
+
+Baidu Inc., Beijing, China
+
+{liwei85,gaocan01,niuguocheng,xiaoxinyan, liuhao24,liujiachen,wu_hua,wanghaifeng}@baidu.com
+
+# Abstract
+
+Vision-Language Pre-training (VLP) has achieved impressive performance on various cross-modal downstream tasks. However, most existing methods can only learn from aligned image-caption data and rely heavily on expensive regional features, which greatly limits their scalability and performance. In this paper, we propose an end-to-end unified-modal pre-training framework, namely UNIMO-2, for joint learning on both aligned image-caption data and unaligned image-only and text-only corpus. We build a unified Transformer model to jointly learn visual representations, textual representations and semantic alignment between images and texts. In particular, we propose to conduct grounded learning on both images and texts via a sharing grounded space, which helps bridge unaligned images and texts, and align the visual and textual semantic spaces on different types of corpora. The experiments show that our grounded learning method can improve textual and visual semantic alignment for improving performance on various cross-modal tasks. Moreover, benefiting from effective joint modeling of different types of corpora, our model also achieves impressive performance on single-modal visual and textual tasks. Our code and models are public at the UNIMO project page https://unimo-ptm.github.io/.
+
+# 1 Introduction
+
+Large-scale pre-training has drawn much attention in the communities of Computer Vision (CV), Natural Language Processing (NLP) and Multi-Modal (MM) learning due to its strong capability of generalization and efficient usage of large-scale data. However, in the existing literature, work on vision, language and vision-language representation learning is mostly studied separately with different training data sources. In the vision domain, pre-training on large-scale image corpora such as ImageNet (Deng et al., 2009), OpenImages (Kuznetsova et al., 2020) and JFT-300M (Dosovitskiy et al., 2020) has proven to be critical for learning transferable visual representations for various downstream tasks. In NLP, pre-training on easily accessible unannotated text corpora greatly improves the capabilities of language understanding and generation (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019). Pre-training has also become the de-facto approach in vision-language modeling (Lu et al., 2019; Chen et al., 2020c; Li et al., 2020, 2019a; Yu et al., 2020). However, existing VLP methods require a massive amount of aligned image-text pairs, which are costly to collect and hard to scale up. The large volumes of image corpora in CV and text corpora in NLP cannot be effectively utilized. Thus, the scalability and performance upper limit of existing VLP methods are largely restricted. As they only learn joint vision-language representations on image-text pairs, they are also difficult to adapt effectively to visual and textual tasks (Li et al., 2021b; Lin et al., 2020).
+
+To address the limitations, we propose a new end-to-end unified-modal pre-training framework, namely UNIMO-2, for joint learning on various types of corpora, including images, texts, and image-caption pairs. Specifically, we build a unified Transformer model to jointly learn visual representations, textual representations, and cross-modal alignment from the three types of corpora. Both the visual and textual representations are learned end-to-end from raw images and textual sentences. Combining a large number of unaligned images and texts is not only expected to improve the performance of joint vision-language tasks, but also improve the scalability of adapting to single-modal visual and textual tasks. However, it is challenging to bridge unaligned images and texts and effectively align the visual and textual semantic spaces on different types of corpora.
+
+Only a few works have attempted to bridge unaligned images and texts by leveraging object tags from a pre-trained object detector as "anchor points" (Li et al., 2021a,b). However, they all rely heavily on expensive object-centric visual feature extraction, thus facing the problems of limited visual expressive power and computation inefficiency. In this paper, in order to bridge the unpaired image and text corpora and align the visual and textual semantic spaces end-to-end, we propose to conduct grounded learning on images, texts, and image-text pairs via a shared grounded space. Specifically, we introduce a grounded dictionary shared by images and texts, which represents vision-language grounded semantics. To learn the grounded dictionary, we apply vector quantization on both visual and textual representations to group image patches and text tokens with similar semantics into grounded tokens. Furthermore, we design a Grounded Transformer architecture that lets visual and textual information be exchanged via the grounded tokens, which not only facilitates grounded dictionary learning, but also improves cross-modal alignment. Our grounded learning method can help bridge the textual and visual semantic spaces on unpaired image and text corpora to improve cross-modal fusion on different types of corpora.
+
+We evaluate UNIMO-2 on a variety of representative vision-language understanding and generation tasks, including image/text retrieval, visual question answering, visual reasoning and image captioning. On all these tasks, UNIMO-2 obtains clear improvements compared to baselines that only learn on aligned image-caption data or that lack our grounded learning component. Moreover, we also evaluate our model on single-modal textual tasks such as natural language inference and visual tasks such as image classification (Deng et al., 2009). The results show that our model also achieves very impressive performance on these tasks, which proves the strong scalability and adaptability of our model.
+
+UNIMO-2 has the following advantages compared with previous methods:
+
+- UNIMO-2 can jointly learn from both aligned and unaligned image and text corpora end-to-end, effectively alleviating the corpus limitations and learning more generalized visual and textual representations from large volumes of different types of corpora.
+
+- Benefiting from utilizing different types of corpora, UNIMO-2 has better scalability for different types of tasks, including both cross-modal tasks and single-modal tasks.
+
+- Our grounded learning method can help align textual and visual semantic spaces more effectively, thereby greatly improving the performance of various cross-modal tasks. In particular, the performance of zero-shot image/text retrieval even outperforms CLIP pre-trained on an order of magnitude larger pair corpus.
+
+# 2 Related Work
+
+Vision-Language Pre-training Recent years have witnessed rapid progress in vision-and-language pretraining (VLP) (Li et al., 2019b; Lu et al., 2019; Chen et al., 2020c; Li et al., 2019a, 2020; Yu et al., 2020). Most existing mainstream VLP models adopt a two-stage training method, which firstly extracts region-based visual features using a pre-trained object detection model, and then combines the derived object-centric region features of images and text embeddings as the input of Transformer (Vaswani et al., 2017) for cross-modal pre-training. These methods rely heavily on an off-the-shelf object detector like Faster R-CNN (Ren et al., 2016) typically pretrained on the Visual Genome dataset (Anderson et al., 2018). As the visual representation is not optimized towards a more generic cross-modal understanding and extracting region features with an object detection model is so time-consuming, they face the problems of limited visual expressive power and computation inefficiency, which makes them less scalable.
+
+Some recent work has also explored VLP without object detection modules (Xu et al., 2021; Kim et al., 2021; Huang et al., 2021; Wang et al., 2021). They either utilize grid features from pretrained CNNs or patch features following ViT (Dosovitskiy et al., 2020); however, they only use limited image-caption pairs for cross-modal pretraining, and thus their scalability and performance are limited. Only a few works have explored utilizing unaligned images and texts for vision-language pre-training, including our previous work UNIMO (Li et al., 2021b) and U-VisualBERT (Li et al., 2021a). However, they all rely on pre-extraction of region-based visual features or object tags by time-consuming object detection. How to bridge unpaired visual and textual corpora end-to-end without using object detection remains challenging.
+
+
+Figure 1: Illustration of our UNIMO-2 framework. The left part shows the architecture of learning on image-text pairs, which produces grounded tokens based on the sharing semantics in images and texts. The right part shows the architecture of learning on unpaired images and texts, which produces grounded tokens from image representations or text representations, respectively. As they share the same grounded dictionary, the grounded tokens act as "anchor points" to bridge the gap between images and texts.
+
+
+
+Grounded Learning Language grounding is an active field aiming at enriching textual representations with visual information, which has been shown to improve performance on a variety of core NLP tasks (Bruni et al., 2014; Baroni, 2016; Kiela, 2017). Kiela et al. (2018) investigate grounded sentence representations by training a sentence encoder to predict the image features of a given caption. Tan and Bansal (2020) propose a vokenization method that maps language tokens to their related images. These works all enrich the language representation with visual information by learning a projection of text representations to corresponding images (Chrupaña et al., 2015). Recently, Huang et al. (2021) propose an end-to-end VLP method that aggregates visual features from a CNN encoder into visual tokens with a visual dictionary. Liu et al. (2021) propose to improve cross-modal retrieval tasks by incorporating a shared discretized embedding space, which is utilized to compute matching scores between different modalities to complement the representations from individual encoders. These works all rely on image-text pairs to learn cross-modal representations and only focus on joint vision-language tasks. By contrast, our work for the first time proposes to jointly model both aligned and unaligned images and texts by end-to-end learning a shared grounded semantic space, which can improve modality alignment between both aligned and unaligned images and texts.
+
+# 3 Approach
+
+The overall architecture of our model is shown in Figure 1. UNIMO-2 is an end-to-end framework consisting of a trainable Transformer-based visual encoder, a Transformer-based text encoder, a grounded dictionary (GD) embedding module, and a multi-layer Grounded Transformer for modality fusion. The visual encoder splits an input image into small patches and produces high-level visual representations for all patches, similar to ViT (Dosovitskiy et al., 2020). The text encoder encodes textual tokens to produce high-level token representations. Based on the high-level representations of patches and tokens, we design a GD embedding module that groups similar vision-language representations into grounded tokens using a shared grounded dictionary. The Grounded Transformer then fuses features from the vision and language modalities by interacting with the common grounded tokens. UNIMO-2 can be pre-trained end-to-end with joint Masked Language Modeling (MLM) on text, Image-Text Matching (ITM) on image-text pairs, and Visual Contrastive Learning (VCL) on images. UNIMO-2 can also be easily adapted to various visual, textual and cross-modal tasks.
+
+# 3.1 End-to-End Grounded Learning
+
+Humans acquire much of their knowledge through grounded learning: visual concepts can be acquired through language, and language acquisition emerges through visual interaction (Jones et al., 1991; Perfetti, 1998; Fincher-Kiefer, 2001; Andrews et al., 2009; Riordan and Jones, 2011). Inspired by this type of grounded learning, we propose to learn a shared semantic space (i.e., grounded space) between images and texts to better align fine-grained visual and textual semantics. Specifically, based on the high-level visual representations of patches $V = \{v_{1},\ldots ,v_{M}\}$ and textual representations of tokens $T = \{t_1,\dots ,t_N\}$, we introduce a grounded dictionary that groups similar visual and textual representations into the same grounded token. The grounded features not only help align the visual and textual semantics in aligned image-caption data, but also act as "anchor points" to help bridge unaligned images and texts, as shown in Figure 1.
+
+Grounded Dictionary Learning We define a grounded dictionary (GD) as a matrix $G \in \mathbb{R}^{C \times D}$ containing $C$ embedding vectors of dimension $D$. The embedding vector for the $j^{th}$ grounded token is denoted $g_{j} \in \mathbb{R}^{D}, j \in \{1,2,\ldots,C\}$. Vector Quantization (VQ) is widely used to group continuous embeddings into discrete latent variables (Oord et al., 2017; Liu et al., 2021; Huang et al., 2021); for example, each patch or token can be mapped to a grounded token by finding its nearest neighbor in the GD, as in Oord et al. (2017).
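+
+As a concrete reference point for this nearest-neighbor mapping, the following is a minimal sketch (our illustration, not the released implementation), assuming the features and the dictionary are given as plain tensors:
+
+```python
+import torch
+
+def nearest_grounded_token(features: torch.Tensor, G: torch.Tensor):
+    """Assign each feature to its closest grounded token (standard VQ lookup).
+
+    features: (L, D) patch or token representations
+    G:        (C, D) grounded dictionary
+    """
+    dists = torch.cdist(features, G)   # (L, C) pairwise Euclidean distances
+    idx = dists.argmin(dim=1)          # index of the nearest grounded token
+    return G[idx], idx                 # quantized features and their indices
+```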
+
+Most existing VLP methods implicitly assume a one-to-one correspondence between the visual and textual modalities of image-text pairs. However, this assumption rarely holds in practice, as most image-text pairs on the Web are noisy or only weakly correlated. To tackle this issue, instead of mapping each patch or token representation to a grounded token, we only detect the most significant shared semantics between image and text. We propose to find the top-$K$ most significant grounded tokens for the textual and visual input. Specifically, let $x_{ij}$ denote the similarity between the embedding vectors of visual token $v_{i}$ and grounded token $g_{j}$, computed as:
+
+$$
+x_{ij} = \sigma\left(\eta \cdot v_{i}^{T} g_{j}\right) \tag{1}
+$$
+
+where $\sigma$ denotes the sigmoid function, and $\eta$ denotes a learnable temperature parameter. Similarly, $y_{kj}$ denotes the similarity between embedding vectors of textual token $t_k$ and grounded token $g_j$ .
+
+For image-text pairs, the accumulated score of the grounded token $g_{j}$ is computed as:
+
+$$
+s_{j} = \sum_{i = 1}^{M} x_{ij} + \sum_{k = 1}^{N} y_{kj} \tag{2}
+$$
+
+We obtain the top-$K$ grounded tokens with the largest accumulated scores: $g_{1},\ldots ,g_{K} = Top_{K}\{s_{1},\ldots ,s_{C}\}$, where $K$ is a hyper-parameter. Note that setting $K = M + N$ is similar to mapping each patch or token to its own grounded token, which increases the computation cost and introduces noisy information into the grounded learning process. We therefore set $K$ much smaller than $M + N$ to obtain the most significant shared grounded tokens, which helps align fine-grained visual and textual representations while eliminating noisy or unrelated information in image-text pairs. For unpaired images or texts, the accumulated score of each grounded token $g_{j}$ is $s_j = \sum_{i = 1}^{M}x_{ij}$ or $s_j = \sum_{k = 1}^{N}y_{kj}$, respectively, and the top-$K$ grounded tokens are obtained in the same way.
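+
+A minimal sketch of this top-$K$ selection (Equations 1-2), with assumed shapes and not the authors' released code, could look as follows:
+
+```python
+import torch
+
+def select_grounded_tokens(V, T, G, eta, K):
+    """Select the top-K grounded tokens shared by an image and a text.
+
+    V: (M, D) patch features; T: (N, D) token features, or None for image-only input;
+    G: (C, D) grounded dictionary; eta: learnable temperature; K: number of grounded tokens.
+    """
+    x = torch.sigmoid(eta * V @ G.t())          # (M, C), Eq. 1 for visual tokens
+    s = x.sum(dim=0)                            # visual part of Eq. 2
+    if T is not None:
+        y = torch.sigmoid(eta * T @ G.t())      # (N, C), textual analogue of Eq. 1
+        s = s + y.sum(dim=0)                    # full accumulated score, Eq. 2
+    topk = s.topk(K).indices                    # K most significant grounded tokens
+    return G[topk], topk
+```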
+
+The grounded dictionary is randomly initialized and further updated end-to-end during pre-training. As the $Top_{K}$ function is non-differentiable, we introduce a grounding loss to help learn the grounded dictionary. Specifically, we propose a revised form of the Vector Quantization (VQ) algorithm (Oord et al., 2017), which uses the $l_{2}$ error to move the embedding vectors $g_{j}$ toward the mapped patch or token representations, as shown in the first term of Equation 3. For simplicity, we take image input as an example here. Since the volume of the embedding space is dimensionless, it can grow arbitrarily if the embeddings $g_{j}$ do not train as fast as the visual and textual encoder parameters. To make sure the encoder commits to an embedding and its output does not grow, we add a commitment loss, the second term in Equation 3. The total grounding loss is thus:
+
+$$
+\begin{aligned}
+\mathcal{L}_{GD} = &\sum_{i = 1}^{M} \Big\| sg(v_{i}) - \sum_{j} \frac{x_{ij}}{\sum_{k} x_{ik}} g_{j} \Big\|_{2}^{2} \\
+ &+ \beta \sum_{j = 1}^{K} \Big\| sg(g_{j}) - \sum_{i} \frac{x_{ij}}{s_{j}} v_{i} \Big\|_{2}^{2}
+\end{aligned} \tag{3}
+$$
+
+where $sg(.)$ denotes the stop-gradient operator that is defined as identity at forward computation time and has zero partial derivatives, and $\beta$ denotes a weight parameter.
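+
+For illustration, a hedged sketch of Equation 3 for image input only (our simplification, restricted to the selected top-$K$ grounded tokens) could look like this:
+
+```python
+import torch
+
+def grounding_loss(V, G_k, x_k, s_k, beta=0.25):
+    """VQ-style grounding loss with a commitment term (cf. Equation 3).
+
+    V:   (M, D) patch features
+    G_k: (K, D) selected grounded token embeddings
+    x_k: (M, K) similarities between patches and the selected grounded tokens
+    s_k: (K,)   accumulated scores of the selected grounded tokens
+    """
+    # First term: move grounded embeddings toward the (stop-gradient) patch features.
+    recon = (x_k / x_k.sum(dim=1, keepdim=True)) @ G_k        # (M, D) soft reconstruction
+    vq_term = ((V.detach() - recon) ** 2).sum()
+    # Second term: commit the encoder output to the (stop-gradient) grounded embeddings.
+    agg = (x_k / s_k).t() @ V                                  # (K, D) score-weighted patches
+    commit_term = ((G_k.detach() - agg) ** 2).sum()
+    return vq_term + beta * commit_term
+```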
+
+Figure 2: The self-attention architecture of the Grounded Transformer. Cross-modal information is exchanged through the grounded tokens.
+
+The grounded dictionary faces a cold-start problem for unpaired images and texts, so we apply curriculum learning over the different types of corpora. Specifically, we first train only on image-text pairs for 20 epochs to obtain a usable grounded embedding space, and then train on all three types of corpora to help bridge unpaired images and texts. To show what the GD has learned, we visualize some grounded tokens in Appendix A.
+
+Grounded Transformer After obtaining the grounded tokens, we concatenate them with the visual and textual tokens as input to our Grounded Transformer for cross-modal fusion. Specifically, we propose to bridge visual and textual representations through the grounded tokens. As shown in Figure 2, cross-modal information can only be exchanged via the grounded tokens, which also pushes the grounded tokens to capture the most significant shared semantics between images and texts. In this way, our model is more robust to weakly correlated image-text pairs, since cross-modal interaction is modeled through the common grounded tokens. Furthermore, this self-attention architecture improves computation efficiency compared to the standard pairwise self-attention mechanism.
+
+For unpaired images and texts, the Grounded Transformer also models the fusion of visual tokens or textual tokens with the grounded tokens. As the grounded dictionary captures common visual and textual semantics, it also helps learn cross-modal representations on unpaired images and texts.
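+
+The attention pattern suggested by Figure 2 can be written as a block mask; the sketch below is our reading of the figure (an assumption about the exact masking), not the released code:
+
+```python
+import torch
+
+def grounded_attention_mask(num_visual: int, num_grounded: int, num_text: int):
+    """Boolean mask (True = attention allowed) for [visual | grounded | textual] tokens.
+
+    Visual and textual tokens never attend to each other directly; all cross-modal
+    interaction is routed through the shared grounded tokens.
+    """
+    M, K, N = num_visual, num_grounded, num_text
+    L = M + K + N
+    mask = torch.zeros(L, L, dtype=torch.bool)
+    mask[:M, :M + K] = True        # visual   -> visual + grounded
+    mask[M:M + K, :] = True        # grounded -> all tokens
+    mask[M + K:, M:] = True        # textual  -> grounded + textual
+    return mask
+```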
+
+# 3.2 Pre-training On Different Corpora
+
+Based on the outputs of the Grounded Transformer, we adopt Masked Language Modeling (MLM) and Image-Text Matching (ITM) pre-training tasks on image-text pairs. Furthermore, we also apply MLM on the text corpus and Visual Contrastive Learning (VCL) on the image corpus.
+
+Masked Language Modeling We iteratively sample spans of text until $15\%$ of the tokens have been selected. Span lengths are sampled from a geometric distribution $l \sim Geo(p)$ with $p = 0.2$, similar to SpanBERT (Joshi et al., 2020). Tokens in the selected spans are replaced with a special [MASK] token, a random token, or the original token with probability $80\%$, $10\%$ and $10\%$, respectively. The goal is to predict the masked tokens based on their surrounding context and all visual features. The MLM task is also applied to the text-only corpus, where masked tokens are predicted only from the surrounding tokens.
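+
+A small sketch of this span masking scheme (a simplification: the 80/10/10 decision here is made per token, and the exact span-sampling details may differ from the authors' implementation):
+
+```python
+import random
+
+def sample_span_length(p: float = 0.2, max_len: int = 10) -> int:
+    """Sample a span length from a (capped) geometric distribution Geo(p)."""
+    length = 1
+    while random.random() > p and length < max_len:
+        length += 1
+    return length
+
+def mask_spans(tokens, vocab, mask_token="[MASK]", ratio=0.15, p=0.2):
+    tokens = list(tokens)
+    targets = {}                                        # position -> original token
+    budget = max(1, int(len(tokens) * ratio))
+    while len(targets) < budget:
+        span_len = min(sample_span_length(p), budget - len(targets), len(tokens))
+        start = random.randrange(0, len(tokens) - span_len + 1)
+        for pos in range(start, start + span_len):
+            if pos in targets:
+                continue
+            targets[pos] = tokens[pos]
+            r = random.random()
+            if r < 0.8:
+                tokens[pos] = mask_token                # 80%: replace with [MASK]
+            elif r < 0.9:
+                tokens[pos] = random.choice(vocab)      # 10%: replace with a random token
+            # remaining 10%: keep the original token
+    return tokens, targets
+```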
+
+Image-Text Matching To enhance cross-modal matching, we adopt the ITM task for pre-training as in previous work (Chen et al., 2020c). We apply a binary classifier on the concatenation of the Grounded Transformer outputs for the "[CLS]" token of the text and the "[CLS]" token of the image to predict whether the input image and text are matched.
+
+Visual Contrastive Learning UNIMO-2 learns representations on unpaired images by maximizing agreement between differently augmented views of the same image while minimizing similarity between different images, via a contrastive loss in the latent space, similar to SimCLR (Chen et al., 2020a). We apply a stochastic data augmentation module that randomly transforms an image into two correlated views, treated as a positive pair, while random images in the same minibatch serve as negative pairs. The augmentations combine random cropping, random rotation and random color distortion, followed by resizing back to the original size.
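+
+As a hedged sketch of this SimCLR-style objective (NT-Xent over two augmented views; the transform parameters and temperature are illustrative choices, not the paper's exact configuration):
+
+```python
+import torch
+import torch.nn.functional as F
+from torchvision import transforms
+
+# Two correlated views of the same image are produced by this stochastic augmentation.
+augment = transforms.Compose([
+    transforms.RandomResizedCrop(224),            # random cropping + resize back
+    transforms.RandomRotation(15),                # random rotation
+    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),   # random color distortion
+    transforms.ToTensor(),
+])
+
+def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
+    """Contrastive loss over a batch: (z1[i], z2[i]) are positives, the rest negatives."""
+    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # (2B, D)
+    sim = z @ z.t() / temperature                         # (2B, 2B) cosine similarities
+    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
+    B = z1.size(0)
+    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
+    return F.cross_entropy(sim, targets)
+```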
+
+# 3.3 Transferring To Different Tasks
+
+Our model can be effectively fine-tuned on different types of tasks, including cross-modal, visual and textual tasks. For cross-modal tasks, the model architecture is the same as the pre-training architecture on image-text pairs, as shown in the left part of Figure 1. Grounded tokens are produced from both the visual and textual representations to facilitate cross-modal understanding and generation. For visual tasks, the model architecture is the same as the pre-training architecture on images, as shown in the middle part of Figure 1. Grounded tokens are obtained from the visual representations produced by the Visual Transformer. As the grounded tokens capture shared semantics between images and texts, UNIMO-2 can learn language-grounded image representations for visual tasks. Similarly, for textual tasks the model architecture is the same as the pre-training architecture on text, as shown in the right part of Figure 1. Grounded tokens are obtained from the textual representations produced by the Text Transformer. Again, the shared grounded space helps learn grounded text representations that facilitate textual tasks.
+
+# 4 Experimental Settings
+
+Pretraining Dataset Our pre-training data consist of three types: a text corpus, an image corpus and image-text pairs. The text corpus includes two large-scale corpora, BookWiki and OpenWebText, which are part of the training data of RoBERTa (Liu et al., 2019). The image corpus consists of images without textual descriptions, including a subset of OpenImages (Krasin et al., 2017) and ImageNet-21k (Deng et al., 2009); each image in these datasets carries a textual label. The image-text pairs are composed of four existing multi-modal datasets: COCO (Lin et al., 2014), Visual Genome (VG) (Krishna et al., 2017), Conceptual Captions (CC) (Sharma et al., 2018) and SBU Captions (Ordonez et al., 2011), which have also been widely used in previous VLP models. Detailed statistics are shown in the appendix. We also transform the label of each image into a sentence via prompts (e.g., "a photo of [label]") to create pseudo image-text pairs from the OpenImages and ImageNet-21k datasets for pre-training.
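+
+The prompt-based construction of pseudo pairs is straightforward; a minimal sketch (the helper name and template handling are our own illustration):
+
+```python
+def label_to_caption(label: str, template: str = "a photo of {}") -> str:
+    """Turn an image label such as 'golden_retriever' into a pseudo caption."""
+    return template.format(label.replace("_", " ").lower())
+
+# e.g. ("img_001.jpg", "golden_retriever") -> ("img_001.jpg", "a photo of golden retriever")
+```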
+
+Implementation Detail UNIMO-2 consists of 12 layers of Visual Transformer, 12 layers of Text Transformer, and 12 layers of Grounded Transformer. The Visual Transformer is initialized from ViT-B/16, while the Text Transformer and Grounded Transformer are both initialized from RoBERTa-Base. The maximum sequence length of text tokens is set to 512. We use an Adam optimizer with an initial learning rate of 5e-5 and a linear learning-rate decay schedule.
+
+For the visual encoder, the model receives a raw image $\mathbf{x} \in \mathbb{R}^{H \times W \times C}$ and maps it into a flattened 1D sequence of patches $\mathbf{x}_p \in \mathbb{R}^{\frac{HW}{P^2} \times D}$ as input to the transformer, where $D$ is the fixed hidden size of the transformer layers and $P$ is the patch size. During pre-training, we use a $224 \times 224$ resolution with a fixed patch size of $16 \times 16$, resulting in a patch sequence of length $14 \times 14$ as visual tokens. During fine-tuning, we increase the image resolution to $384 \times 384$ and interpolate the positional encoding of the image patches following Dosovitskiy et al. (2020). For the grounded embedding module, the grounded dictionary size $C$ is set to 2048, and the number of grounded tokens $K$ is set to 100 for both pre-training and fine-tuning, which is much smaller than the maximum number of patches and tokens during pre-training (i.e., 709) and fine-tuning (i.e., 1089). We set $\beta = 0.25$ in all our experiments; the results did not vary noticeably for values ranging from 0.1 to 1.0. We compare different grounding settings in detail in Appendix A.
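+
+For reference, the sequence lengths above follow directly from the patch arithmetic (with one extra [CLS] token on the image side):
+
+$$
+\left(\frac{224}{16}\right)^{2} = 196 \;(+1 = 197), \qquad \left(\frac{384}{16}\right)^{2} = 576 \;(+1 = 577),
+$$
+
+so the maximum fused sequence length is $197 + 512 = 709$ during pre-training and $577 + 512 = 1089$ during fine-tuning, against which $K = 100$ grounded tokens is small.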
+
+Finetuning Tasks To show the scalability of our model, we fine-tune it on three types of downstream tasks: (1) joint vision-language cross-modal tasks, (2) visual tasks, and (3) textual tasks. The cross-modal tasks include visual question answering (VQA) on the VQA v2.0 dataset (Goyal et al., 2017), image captioning on the Microsoft COCO Captions dataset (Chen et al., 2015), visual entailment on the SNLI-VE dataset (Xie et al., 2019) and image-text retrieval on the Flickr30k dataset (Young et al., 2014). The visual tasks include image classification on the ImageNet-1k dataset (Krizhevsky et al., 2012). The textual tasks include sentiment classification on the SST-2 dataset (Socher et al., 2013), natural language inference on the MNLI dataset (Williams et al., 2018), linguistic acceptability analysis on the CoLA dataset (Warstadt et al., 2019) and semantic similarity analysis on the STS-B dataset (Cer et al., 2017). Detailed statistics of the datasets and the hyper-parameter settings for these tasks are described in Appendix B.
+
+# 5 Results and Analysis
+
+We compare UNIMO-2 to a variety of state-of-the-art models on cross-modal, visual and textual tasks.
+
+# 5.1 Cross-Modal Tasks
+
+The evaluation results on the joint vision-language cross-modal tasks are shown in Table 1. We compare with most existing VLP models, including the region-feature-based models ViLBERT (Lu et al., 2019), UNITER (Chen et al., 2020c), Oscar (Li et al., 2020), Villa (Gan et al., 2020) and UNIMO (Li et al., 2021b), and the end-to-end models ViLT (Kim et al., 2021), E2E-VLP (Xu et al., 2021), SOHO (Huang et al., 2021) and CLIP (Radford et al., 2021). The results show that UNIMO-2 achieves the best results on most benchmarks, outperforming both the base and large variants of other VLP models.
+
+| Model | ZS-IR R@1/R@5 | ZS-TR R@1/R@5 | IR R@1/R@5 | TR R@1/R@5 | SNLI-VE Val/Test | VQA test-dev/std | Caption B@4/C |
| Region-based Models Pretrained on Image-Text Pairs of CC, SBU, COCO and VG. |
| ViLBERT | 31.86/61.12 | - | 58.20/84.90 | - | - | 70.55/70.92 | - |
| UNITER-Base | 66.16/88.40 | 80.70/95.70 | 72.52/92.36 | 85.90/97.10 | 78.59/78.28 | 72.70/72.91 | - |
| Villa-Base | - | - | 74.74/92.86 | 86.60/97.90 | 79.47/79.03 | 73.59/73.67 | - |
| Oscar-Base | - | - | - | - | - | 73.16/73.44 | 36.5/123.7 |
| UNIMO-Base | 62.44/86.16 | 77.40/95.10 | 74.66/93.40 | 89.70/98.40 | 80.00/79.10 | 73.79/74.02 | 38.8/124.4 |
| UNITER-Large | 68.74/89.20 | 83.60/95.70 | 75.56/94.08 | 87.30/98.00 | 79.39/79.38 | 73.82/74.02 | - |
| Villa-Large | - | - | 76.26/94.24 | 87.90/97.50 | 80.18/80.02 | 74.69/74.87 | - |
| Oscar-Large | - | - | - | - | - | 73.61/73.82 | 37.4/127.8 |
| UNIMO-Large | 72.14/91.14 | 85.80/96.80 | 78.04/94.24 | 89.40/98.90 | 81.11/80.63 | 75.06/75.27 | 39.6/127.7 |
| End-to-End Models Pretrained on Image-Text Pairs of CC, SBU, COCO and VG. † denotes 400 Million pairs. |
| ViLT | 51.3/79.9 | 69.7/91.0 | 62.2/87.6 | 83.7/97.2 | - | 70.94/- | - |
| E2E-VLP | - | - | 73.58/92.42 | 86.24/97.50 | - | 73.25/73.67 | 36.2/117.3 |
| SOHO | - | - | 72.5/92.7 | 86.5/98.1 | 85.00/84.95 | 73.25/73.47 | - |
| CLIP† | 68.7/90.6 | 88.0/98.7 | - | - | - | - | - |
| Our Baseline | 65.11/87.44 | 78.80/94.38 | 78.52/94.02 | 91.62/98.72 | 80.37/80.43 | 75.69/75.87 | 38.5/128.4 |
| UNIMO-2 | 72.70/91.18 | 88.46/96.84 | 80.14/95.58 | 92.01/99.31 | 81.97/81.48 | 76.31/76.42 | 39.7/131.2 |
+
+Table 1: Evaluation results on cross-modal tasks. ZS denotes zero-shot performance. IR and TR denote image retrieval and text retrieval, respectively. B@4 and C denote the BLEU-4 and CIDEr metrics, respectively. "Our Baseline" is identical to UNIMO-2 except that the grounded embedding module is removed; it is trained on the same corpora with the same experimental settings as UNIMO-2.
+
+| Model | Zero-Shot Acc@1 | Finetuned Acc@1 |
| SimCLRv2 (Chen et al., 2020b) | - | 80.5 |
| CLIP-ViT(B/16) | 68.6 | 80.2 |
| Our Baseline | 58.2 | 80.7 |
| UNIMO-2 | 66.3 | 80.8 |
+
+Particularly, UNIMO-2 achieves very strong performance on zero-shot image/text retrieval, even outperforming CLIP (Radford et al., 2021), which was pre-trained on an order-of-magnitude larger corpus. These results demonstrate that UNIMO-2 obtains better cross-modal representations through joint end-to-end grounded learning on different types of corpora.
+
+Furthermore, the performance of "Our Baseline", which simply removes the grounded embedding module from UNIMO-2, drops notably on all tasks, which demonstrates the effectiveness of our grounded learning method for cross-modal alignment. In particular, on the zero-shot image retrieval and text retrieval tasks, UNIMO-2 obtains absolute gains of 7.59 and 9.66 R@1, respectively, over "Our Baseline". The results demonstrate that our grounded learning method helps align the visual and textual semantic spaces across different types of corpora to obtain more effective cross-modal representations.
+
+Table 2: Evaluation results on visual tasks, compared to state-of-the-art representation learning methods. We report both the zero-shot and finetuned top-1 accuracy on ImageNet-1k. The finetuned result of CLIP-ViT is linear probe performance.
+
+| Model | SST-2 Acc | MNLI Acc-(m/mm) | CoLA Mat | STS-B Per |
| BERT | 92.7 | 84.4 / - | - | - |
| RoBERTa | 94.8 | - | 63.6 | - |
| UniLM | 94.5 | 87.0/85.9 | 61.1 | 87.7 |
| UNITER | 89.7 | 80.8/- | 37.4 | - |
| ViLBERT | 90.4 | 79.9/- | 36.1 | - |
| UNIMO | 95.1 | 86.8/86.7 | 65.4 | 91.0 |
| Our Baseline | 94.1 | 87.1/86.9 | 60.6 | 91.0 |
| UNIMO-2 | 94.7 | 87.5/87.5 | 62.1 | 91.2 |
+
+Table 3: Evaluation results on textual tasks. Mat and Per denote Matthews correlation coefficient and Pearson correlation coefficient, respectively. All the results are evaluated on the dev set.
+
+# 5.2 Visual Tasks
+
+UNIMO-2 can also be effectively adapted to visual tasks such as image classification. Because UNIMO-2 learns effective cross-modal representations, it can classify images without fine-tuning. Specifically, the target labels of images can be transformed into pseudo image descriptions such as "a photo of [label]", and the zero-shot image-to-text retrieval method can then be used to obtain the label for each image, similar to CLIP (Radford et al., 2021). We compare both the zero-shot and fine-tuned performance with several state-of-the-art representation learning methods. The results in Table 2 show that UNIMO-2 achieves performance comparable to CLIP, which was pre-trained on a far larger corpus of image-text pairs, in both the zero-shot and supervised settings.
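+
+A hedged sketch of this zero-shot classification-by-retrieval procedure (the encoder functions are hypothetical stand-ins for the model's image and text encoders):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def zero_shot_classify(image, class_names, encode_image, encode_text):
+    """Pick the class whose prompted description best matches the image."""
+    prompts = [f"a photo of {name}" for name in class_names]
+    img_emb = F.normalize(encode_image(image), dim=-1)    # (1, D)
+    txt_emb = F.normalize(encode_text(prompts), dim=-1)   # (C, D)
+    scores = img_emb @ txt_emb.t()                        # cosine similarity to each prompt
+    return class_names[scores.argmax().item()]
+```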
+
+| Group | Model | ZS-IR R@1 | ZS-TR R@1 | IR R@1 | TR R@1 | COCO Caption B@4 / C | ZS-ImageNet Acc@1 | MNLI m/mm |
| - | UNIMO-2 | 72.70 | 88.46 | 80.14 | 92.01 | 39.7 / 131.2 | 66.3 | 87.5/87.5 |
| GD | w/o GD (P) | 65.11 | 78.80 | 78.52 | 91.62 | 38.5 / 128.4 | 58.2 | 87.1/86.9 |
| GD | w/o GD (I) | 40.22 | 31.76 | 74.08 | 88.26 | 39.0 / 127.4 | 21.3 | 87.5/87.3 |
| GD | w/o G.T. | 70.10 | 85.01 | 78.84 | 91.12 | 39.6 / 130.1 | 66.4 | 87.1/86.8 |
| GD | 1-to-1 Map | 66.06 | 80.97 | 77.61 | 90.43 | 38.7 / 127.4 | 66.3 | 87.0/86.9 |
| Corpus | w/o Text | 70.00 | 85.50 | 78.90 | 90.24 | 39.0 / 128.7 | 65.0 | 84.9/85.0 |
| Corpus | w/o Images | 69.17 | 84.81 | 77.65 | 90.34 | 39.4 / 129.5 | 42.2 | 87.1/87.0 |
| Corpus | w/o Both | 70.06 | 84.12 | 78.17 | 91.32 | 39.3 / 129.3 | 43.0 | 85.9/85.7 |
+
+Table 4: Ablation study on the effectiveness of our unified end-to-end grounded learning architecture.
+
+"Our Baseline" on the zero-shot setting, achieving 8.1 Acc@1 absolute gains. The results demonstrate that UNIMO-2 also learns generalized visual representations through unified-modal learning on different types of corpora.
+
+# 5.3 Textual Tasks
+
+To show the effectiveness of UNIMO-2 on textual tasks, we further compare with both VLP models, including UNITER, ViLBERT and UNIMO, and pre-trained language models, including BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and UniLM (Dong et al., 2019). The comparison results in Table 3 show that UNIMO-2 achieves much better performance than existing VLP models such as UNITER and ViLBERT, and performance comparable to existing PLMs such as RoBERTa. UNIMO-2 also outperforms "Our Baseline" on all textual tasks.
+
+The above results demonstrate the adaptability and scalability of our unified end-to-end VLP architecture for joint learning on both aligned and unaligned images and texts. Overall, UNIMO-2 not only achieves excellent performance on cross-modal tasks, but also performs very well on visual and textual tasks, which validates the superiority of our unified-modal learning architecture.
+
+# 5.4 Analysis
+
+Effectiveness of Grounded Learning We further validate the effectiveness of our grounded learning component through an ablation study. "w/o GD (P)" removes the grounded learning component during both pre-training and inference, to validate its effectiveness for unified learning on different types of corpora. "w/o GD (I)" keeps the grounded learning component during pre-training but removes it during inference, to validate the effectiveness of the grounded representations for downstream tasks. "1-to-1 Map" maps each patch or token to a grounded token by finding its nearest neighbor in the grounded dictionary, similar to the vector quantization method in Oord et al. (2017). We compare their performance on three types of tasks, as shown in the top part of Table 4. The results demonstrate that our grounded learning (GD) method is essential to end-to-end joint learning from different types of corpora, as it helps bridge unaligned images and texts and improves vision-language semantic alignment. The learned grounded representations are also critical to both cross-modal and single-modal downstream tasks. We further validate the effectiveness of our Grounded Transformer by replacing it with a conventional Transformer, denoted "w/o G.T.". The performance on cross-modal tasks drops notably compared to UNIMO-2, which demonstrates the effectiveness of the Grounded Transformer architecture.
+
+Effectiveness of Unaligned Images and Texts To further validate the contribution of unaligned images and texts to cross-modal learning, we compare the performance of UNIMO-2 on different pre-training datasets. Specifically, we remove either the text corpus ("w/o Text"), the image corpus ("w/o Images") or both ("w/o Both"). The comparison results in the bottom part of Table 4 show that removing either the text corpus or the image corpus consistently reduces performance on all three types of tasks: cross-modal, visual and textual. Notably, the performance on the image/text retrieval tasks drops markedly when either the text-only or image-only corpus is removed, which demonstrates that unaligned corpora are also useful for cross-modal tasks. UNIMO-2 can effectively leverage unaligned images and texts to improve cross-modal learning.
+
+# 6 Conclusion
+
+In this work, we propose UNIMO-2, an end-to-end unified-modal pre-training framework that learns from both aligned and unaligned image and text corpora. Our grounded learning method helps bridge unpaired images and texts and aligns the textual and visual semantic spaces more effectively. By effectively utilizing different types of corpora, UNIMO-2 scales better across different types of tasks. Experiments show that UNIMO-2 greatly improves the performance of various cross-modal tasks and also performs strongly on visual and textual tasks. The results also suggest that it is promising to further improve cross-modal, visual and textual tasks uniformly by utilizing larger-scale collections of unpaired images and texts.
+
+# Acknowledgments
+
+This work was supported in part by the National Key R&D Program of China under Grant 2020YFB1406701. Xinyan Xiao is the corresponding author.
+
+# References
+
+Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077-6086.
+Mark Andrews, Gabriella Vigliocco, and David Vinson. 2009. Integrating experiential and distributional data to learn semantic representations. Psychological review, 116(3):463.
+Marco Baroni. 2016. Grounding distributional semantics in the visual world. Language and Linguistics Compass, 10(1):3-13.
+Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of artificial intelligence research, 49:1-47.
+Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations.
+
+In International conference on machine learning, pages 1597-1607. PMLR.
+Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. 2020b. Big self-supervised models are strong semi-supervised learners. arXiv preprint arXiv:2006.10029.
+Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
+Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020c. Uniter: Universal image-text representation learning. In European Conference on Computer Vision, pages 104-120. Springer.
+Grzegorz Chrupała, Ákos Kádár, and Afra Alishahi. 2015. Learning language through pictures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 112-118, Beijing, China. Association for Computational Linguistics.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pages 13063-13075.
+Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
+Rebecca Fincher-Kiefer. 2001. Perceptual components of situation models. Memory & Cognition, 29(2):336-343.
+
+Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. arXiv preprint arXiv:2006.06195.
+Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913.
+Zhicheng Huang, Zhaoyang Zeng, Yupan Huang, Bei Liu, Dongmei Fu, and Jianlong Fu. 2021. Seeing out of the box: End-to-end pre-training for vision-language representation learning. arXiv preprint arXiv:2104.03135.
+Susan S Jones, Linda B Smith, and Barbara Landau. 1991. Object properties and knowledge in early lexical learning. *Child development*, 62(3):499-516.
+Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.
+Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128-3137.
+Douwe Kiela. 2017. Deep embodiment: grounding semantics in perceptual modalities. Technical report, University of Cambridge, Computer Laboratory.
+Douwe Kiela, Alexis Conneau, Allan Jabri, and Maximilian Nickel. 2018. Learning visually grounded sentence representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 408-418, New Orleans, Louisiana. Association for Computational Linguistics.
+Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. arXiv preprint arXiv:2102.03334.
+Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Andreas Veit, et al. 2017. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2(3):2-3.
+Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32-73.
+
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25:1097-1105.
+Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. 2020. The open images dataset v4. International Journal of Computer Vision, 128(7):1956-1981.
+Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2019a. Unicoder-vl: A universal encoder for vision and language by cross-modal pretraining. arXiv preprint arXiv:1908.06066.
+Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019b. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557.
+Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, and Kai-Wei Chang. 2021a. Unsupervised vision-and-language pre-training without parallel images and captions. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5339-5350, Online. Association for Computational Linguistics.
+Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. 2021b. UNIMO: Towards unified-modal understanding and generation via cross-modal contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2592-2607, Online. Association for Computational Linguistics.
+Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer.
+Junyang Lin, An Yang, Yichang Zhang, Jie Liu, Jingren Zhou, and Hongxia Yang. 2020. Interbert: Vision-and-language interaction for multi-modal pretraining. arXiv preprint arXiv:2003.13198.
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
+Alexander H Liu, SouYoung Jin, Cheng-I Jeff Lai, Andrew Rouditchenko, Aude Oliva, and James Glass. 2021. Cross-modal discrete representation learning. arXiv preprint arXiv:2106.05438.
+
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23.
+Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. arXiv preprint arXiv:1711.00937.
+Vicente Ordonez, Girish Kulkarni, and Tamara Berg. 2011. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24:1143-1151.
+Charles A Perfetti. 1998. The limits of co-occurrence: Tools and theories in language research.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020.
+Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2016. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE transactions on pattern analysis and machine intelligence, 39(6):1137-1149.
+Brian Riordan and Michael N Jones. 2011. Redundancy in perceptual and linguistic experience: Comparing feature-based and distributional models of semantic representation. Topics in Cognitive Science, 3(2):303-345.
+Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565, Melbourne, Australia. Association for Computational Linguistics.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
+Hao Tan and Mohit Bansal. 2020. Vokenization: Improving language understanding with contextualized, visual-grounded supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages
+
+2066-2080, Online. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
+Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021. Simvlm: Simple visual language model pretraining with weak supervision. arXiv preprint arXiv:2108.10904.
+Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Cola: The corpus of linguistic acceptability (with added annotations).
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
+Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. 2019. Visual entailment: A novel task for fine-grained image understanding. arXiv preprint arXiv:1901.06706.
+Haiyang Xu, Ming Yan, Chenliang Li, Bin Bi, Songfang Huang, Wenming Xiao, and Fei Huang. 2021. E2E-VLP: End-to-end vision-language pre-training enhanced by visual learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 503-513, Online. Association for Computational Linguistics.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5753-5763.
+Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.
+Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2020. ERNIE-ViL: Knowledge enhanced vision-language representations through scene graph. arXiv preprint arXiv:2006.16934.
+
+# A Grounded Learning Analysis
+
+Visualization of Grounded Dictionary To show the semantics of the grounded dictionary learned by UNIMO-2, we visualize the image patches and textual tokens that are grouped into each grounded token. We map each patch or token to the grounded token with which it has the largest representation similarity according to Equation 1. For each grounded token, the patches and tokens with the largest similarity scores are selected and visualized. Several examples are shown in Figure 3, demonstrating that each grounded token captures meaningful and consistent vision-language grounded semantics.
+
+Parameter Analysis In all our experiments, we use the default grounding settings: the grounded dictionary (GD) size $C$ is set to 2048 and the number of grounded tokens $K$ is set to 100. We further compare different grounding settings to explore the properties of the grounded semantic space for cross-modal learning. Specifically, we evaluate grounded learning with GD sizes $C \in \{1024, 2048, 4096, 8192\}$ and numbers of grounded tokens $K \in \{10, 20, 50, 100\}$. When comparing different GD sizes $C$, we fix $K = 100$; when comparing different settings of $K$, we fix $C = 2048$. Furthermore, we also compare our method with the simplest Vector Quantization (VQ) baseline, which maps each visual or textual token to a grounded token by finding its nearest neighbor in the grounded dictionary, namely "1-to-1 map". The number of grounded tokens for "1-to-1 map" depends on the total number of image patches and textual tokens, which is 709 (i.e., $197 + 512$) during pre-training and 1089 (i.e., $577 + 512$) during fine-tuning.
+
+For time efficiency, we pre-train UNIMO-2 only on the image-text pair corpus for 10 epochs under each of the above settings, and then compare performance on two representative cross-modal tasks, zero-shot image/text retrieval and image captioning, to validate cross-modal alignment. The comparison results in Table 5 show that our grounded learning method performs best on these two tasks when the GD size $C$ is set to 4096 or the number of grounded tokens $K$ is set to 50. Too large a $C$ increases the difficulty of learning, while too small a $C$ may restrict the capacity of the grounded semantic space. Similarly, too small a $K$ loses shared semantics between images and texts, while too large a $K$ introduces noisy information. Although different settings behave differently, the performance of our grounded learning method is relatively stable. In particular, the "1-to-1 map" method achieves much worse results than our grounded learning method under all settings, which validates the effectiveness of our grounded learning method for cross-modal alignment. Furthermore, our grounded learning method is much more computationally efficient than "1-to-1 map", as the much smaller number of grounded tokens greatly reduces the sequence length during cross-modal fusion.
+
+# B Experimental Settings
+
+Pretraining Datasets The pre-training datasets consist of a text corpus, image collections and image-text pairs. Their detailed statistics are shown in Table 6.
+
+Finetuning Tasks The multi-modal finetuning tasks include:
+
+- VQA requires the model to answer natural language questions by selecting the correct answer from a multiple-choice list based on an image. We conduct experiments on the widely used VQA v2.0 dataset (Goyal et al., 2017), which is built on COCO (Chen et al., 2015) images. Following previous work, both the training and validation sets are used for training when reporting results on the test-dev and test-standard splits.
+
+- Image Caption requires the model to generate a natural language description of an image. We report results on the Microsoft COCO Captions dataset (Chen et al., 2015). Following the Karpathy split (Karpathy and Fei-Fei, 2015), the dataset contains $113.2\mathrm{k} / 5\mathrm{k} / 5\mathrm{k}$ images for the train/val/test splits, respectively.
+
+- Visual Entailment (SNLI-VE) is evaluated on the SNLI-VE dataset (Xie et al., 2019), which was derived from Flickr30K images and the Stanford Natural Language Inference (SNLI) dataset. The task is to determine the logical relationship (i.e., "Entailment", "Neutral" or "Contradiction") between a natural language statement and an image.
+
+
+Figure 3: Visualization of the grounded dictionary learned by UNIMO-2, which groups consistent semantics of image patches and textual tokens. Each grounded token reflects an abstraction of vision-language grounded semantics. Example grounded tokens and their associated words: ID=67: cake, birthday, candles, happy, party; ID=74: airplane, flight, flying, plane, aircraft; ID=153: kitchen, cook, chefs, food, pans; ID=680: glasses, sun, wearing, beach, sunny; ID=885: phone, cell, mobile, cellphone, talking; ID=1211: taxi, transports, city, traffic, street.
+
+| Setting | Value | ZeroShot-IR R@1 / R@5 / R@10 | ZeroShot-TR R@1 / R@5 / R@10 | COCO Caption B@4 / M / C / S |
| GD Size C | 1024 | 58.52 / 82.19 / 88.92 | 71.10 / 90.14 / 95.17 | 37.58 / 29.18 / 123.53 / 22.23 |
| GD Size C | 2048 | 60.32 / 84.02 / 89.72 | 75.84 / 91.91 / 95.56 | 37.62 / 29.12 / 123.38 / 22.16 |
| GD Size C | 4096 | 64.10 / 86.41 / 91.79 | 77.91 / 94.38 / 96.75 | 38.07 / 29.20 / 124.18 / 22.20 |
| GD Size C | 8192 | 61.20 / 85.29 / 90.73 | 75.84 / 92.50 / 96.15 | 37.86 / 29.03 / 124.23 / 22.33 |
| Top-K | 10 | 57.79 / 82.66 / 89.47 | 69.92 / 91.42 / 95.56 | 37.36 / 28.92 / 122.93 / 22.15 |
| Top-K | 20 | 61.46 / 85.07 / 90.75 | 74.46 / 93.10 / 97.34 | 37.90 / 28.81 / 123.68 / 22.03 |
| Top-K | 50 | 63.49 / 86.13 / 91.54 | 77.32 / 93.10 / 96.65 | 38.38 / 29.17 / 125.31 / 22.39 |
| Top-K | 100 | 60.32 / 84.02 / 89.72 | 75.84 / 91.91 / 95.56 | 37.62 / 29.12 / 123.38 / 22.16 |
| 1-to-1 Map | - | 56.51 / 81.54 / 88.19 | 71.99 / 90.43 / 94.58 | 35.62 / 27.97 / 117.92 / 21.38 |
+
+Table 5: Parameter analysis for grounded learning. The top part shows the influence of the GD size $C$, and the middle part compares different numbers of grounded tokens $K$ used during learning. The bottom part shows the effectiveness of our grounded learning method compared with the existing VQ method.
+
+| Dataset | Type | #Images | #Texts |
| COCO | Image-Text Pairs | 113K | 567K |
| VG | Image-Text Pairs | 108K | 5.41M |
| CC | Image-Text Pairs | 3.01M | 3.01M |
| SBU | Image-Text Pairs | 867K | 867K |
| ImageNet-21K | Unaligned Images | 14M | - |
| Open Images | Unaligned Images | 1.7M | - |
| BookWiki | Unaligned Text | - | 16G |
| OpenWebText | Unaligned Text | - | 38G |
+
+Table 6: Statistics of the aligned image-text pairs, and unaligned images and texts for pre-training.
+
+| Task | Image Src. | Train #Images (#Text) | Val #Images (#Text) | Test (test-standard) #Images (#Text) | Test (test-dev) #Images (#Text) |
| VQA | COCO | 83K (444K) | 41K (214K) | 81K (107K) | 81K (448K) |
| Image Caption | COCO | 113.2K | 5K | 5K | - |
| Visual Entailment | Flickr30K | 529.5K | 17.9K | 17.9K | - |
| Image-Text Retrieval | Flickr30K | 29K (145K) | 1K (5K) | 1K (5K) | - |
+
+Table 7: Statistics of the datasets for the cross-modal downstream tasks.
+
+| Hyper-params | Textual Tasks | Visual Tasks |
| Learning Rate | {1e-5, 2e-5, 3e-5} | {1e-4, 3e-4, 5e-4} |
| Batch Size | {16, 32} | 512 |
| Epochs | 10 | 10 |
| Warmup Ratio | 0.06 | 0.06 |
| Weight Decay | 0.01 | 0.01 |
+
+Table 8: Hyper-parameters for fine-tuning on visual and textual tasks.
+
+- Image-Text Retrieval is evaluated on the Flickr30k dataset (Young et al., 2014), which contains two sub-tasks: image retrieval (Flickr30k-IR) and text retrieval (Flickr30k-TR), depending on which modality is used as the retrieved target. We report the top-K retrieval results on the test sets, including R@1, R@5 and R@10.
+
+The statistics of the datasets for the above multi-modal tasks are described in Table 7. The hyper-parameters for fine-tuning all downstream tasks, covering both single-modal and cross-modal tasks, are shown in Tables 8 and 9, respectively. The full evaluation results (including R@1, R@5 and R@10) on the image/text retrieval tasks and the comparison with other state-of-the-art VLP methods are shown in Table 10.
+
+| Hyper-parameters | Image-Text Retrieval | SNLI-VE | VQA | COCO Caption |
| Batch Size | 32 | 64 | 256 | 32 |
| Epoch | 40 | 10 | 12 | 10 |
| Learning Rate | 5e-6 (epochs 0-24), 5e-7 (epochs 24-32), 5e-8 (epochs 32-40) | 1e-5 | 4e-5 (epochs 0-5), 4e-6 (epochs 6-8), 4e-7 (epochs 9-12) | 1e-5 |
| Warmup Ratio | - | 0.06 | - | 0.06 |
| Weight Decay | 0.01 | 0.0 | 0.0 | 0.01 |
+
+Table 9: Hyper-parameters for fine-tuning on cross-modal tasks.
+
+| Model | ZeroShot-IR R@1 / R@5 / R@10 | ZeroShot-TR R@1 / R@5 / R@10 | Finetuned-IR R@1 / R@5 / R@10 | Finetuned-TR R@1 / R@5 / R@10 |
| ViLBERT-base | 31.86 / 61.12 / 72.80 | - | 58.20 / 84.90 / 91.52 | - |
| UNITER-base | 66.16 / 88.40 / 92.94 | 80.70 / 95.70 / 98.00 | 72.52 / 92.36 / 96.08 | 85.90 / 97.10 / 98.80 |
| Villa-base | - | - | 74.74 / 92.86 / 95.82 | 86.60 / 97.90 / 99.20 |
| UNIMO-base | 62.44 / 86.16 / 91.68 | 77.40 / 95.10 / 97.80 | 74.66 / 93.40 / 96.08 | 89.70 / 98.40 / 99.10 |
| UNITER-large | 68.74 / 89.20 / 93.86 | 83.60 / 95.70 / 97.70 | 75.56 / 94.08 / 96.76 | 87.30 / 98.00 / 99.20 |
| Villa-large | - | - | 76.26 / 94.24 / 96.84 | 87.90 / 97.50 / 98.80 |
| UNIMO-large | 72.14 / 91.14 / 94.98 | 85.80 / 96.80 / 98.80 | 78.04 / 94.24 / 97.12 | 89.40 / 98.90 / 99.80 |
| ViLT | 51.3 / 79.9 / 81.9 | 69.7 / 91.0 / 96.0 | 62.2 / 87.6 / 93.2 | 83.7 / 97.2 / 98.1 |
| E2E-VLP | - | - | 73.58 / 92.42 / 96.03 | 86.24 / 97.50 / 98.92 |
| SOHO | - | - | 72.5 / 92.7 / 96.1 | 86.5 / 98.1 / 99.3 |
| CLIP | 68.7 / 90.6 / 95.2 | 88.0 / 98.7 / 99.4 | - | - |
| Our Baseline | 65.11 / 87.44 / 92.62 | 78.80 / 94.38 / 97.63 | 78.52 / 94.02 / 96.63 | 91.62 / 98.72 / 99.51 |
| UNIMO-2 | 72.70 / 91.18 / 94.60 | 88.46 / 96.84 / 98.92 | 80.14 / 95.58 / 97.75 | 92.01 / 99.31 / 99.51 |
+
+Table 10: Full evaluation results on the Flickr30k retrieval tasks.
\ No newline at end of file
diff --git a/unimo2endtoendunifiedvisionlanguagegroundedlearning/images.zip b/unimo2endtoendunifiedvisionlanguagegroundedlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5ef8377e0246c971f8fc3cc2330132cf8e08c846
--- /dev/null
+++ b/unimo2endtoendunifiedvisionlanguagegroundedlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e52e19a279ee2ec07d0a2b3229d6645685d6ce5e9ddadc56612b09f441d62a8f
+size 1193251
diff --git a/unimo2endtoendunifiedvisionlanguagegroundedlearning/layout.json b/unimo2endtoendunifiedvisionlanguagegroundedlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f0d4a5526d43f9539b50d31004f910628da618b0
--- /dev/null
+++ b/unimo2endtoendunifiedvisionlanguagegroundedlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee9a3c44e4483db7821b124e2758ace8be9470332478260b3d1d7dfc69ca0347
+size 393774
diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_content_list.json b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..cd06d5682e5bd4d7261fdabae055a3e5c61925a1
--- /dev/null
+++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c66a864620809c3eb157afefbf9f58075ccfce14a973399c707ed6c257147036
+size 41126
diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_model.json b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ea271b0a98d90b4cc8ce605a7fee9101a8f53430
--- /dev/null
+++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f485e87163f4f94fbab447e64d1a4670d192483bc7eb003d3471147a47f2c60
+size 48237
diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_origin.pdf b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..08891e51d2a3390c731476a2ef78d062b3661f99
--- /dev/null
+++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/22bf9639-18d1-4371-8318-8a1085d4648c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e35aea220ceec94a840ea759a1e62972ae6b6b76ec9e462f3fc2166c204c45dd
+size 295773
diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/full.md b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a4e3e734bcbe691e32672c659df43b7ea02f7c6
--- /dev/null
+++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/full.md
@@ -0,0 +1,188 @@
+# Unsupervised Word Segmentation with BERT Oriented Probing and Transformation
+
+Wei Li $^{1*}$ , Yuhan Song $^{2*}$ , Qi Su $^{3}$ , Yanqiu Shao $^{1}$
+
+$^{1}$ School of Information Science, Beijing Language and Culture University
+
+$^{2}$ School of EECS, Peking University
+
+$^{3}$ School of Foreign Languages, Peking University
+
+liweitj47@blcu.edu.cn
+
+{songyuhan, sukia}@pku.edu.cn
+
+shaoyanqiu@blcu.edu.cn
+
+# Abstract
+
+Word segmentation is a fundamental step for understanding many languages. Previous neural approaches for unsupervised Chinese Word Segmentation (CWS) only exploit shallow semantic information, which can miss important context. Large-scale pre-trained language models (PLMs) have achieved great success in many areas. In this paper, we propose to take advantage of the deep semantic information embedded in a PLM (e.g., BERT) in a self-training manner, which iteratively probes and transforms the semantic information in the PLM into explicit word segmentation ability. Extensive experiment results show that our proposed approach achieves state-of-the-art F1 scores on two CWS benchmark datasets. The proposed method can also help understand low-resource languages and protect language diversity. $^{1}$
+
+# 1 Introduction
+
+There exist many low-resource domains and languages for which labeled word segmentation data is unavailable, which makes unsupervised word segmentation desirable. Previous unsupervised word segmentation methods mainly apply statistical models, either evaluating the quality of possible segmented sequences with discriminative measures (e.g., Mutual Information (Chang and Lin, 2003)) or estimating generative probabilities with generative models (e.g., the Hidden Markov Model (Chen et al., 2014)). However, these statistical methods can only make use of limited contextual information and thus yield less competitive performance.
+
+With the rise of neural networks, researchers have applied neural models to unsupervised word segmentation. Sun and Deng (2018) propose a segmental language model (SLM) that estimates the generative probability with recurrent networks. Although the SLM can exploit more contextual information than statistical models, it remains weak at modeling deep semantic information, limited by its model capacity and training data scale.
+
+Pre-trained language models trained on large-scale data have shown a superior ability to model contextual information and achieve great success on various tasks (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019). Inspired by work on interpreting BERT (Wu et al., 2020), we propose to take advantage of the semantic representation ability of BERT to evaluate the closeness between characters in a probing manner. More specifically, we assume that the difference between masking one character alone and masking several adjacent characters as a whole reveals the closeness between that character and its neighbors.
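+
+As an illustration of this probing idea (a minimal sketch under our own simplifying assumptions, not the exact formulation developed later in the paper; it assumes one WordPiece per Chinese character, and the checkpoint name is an illustrative choice):
+
+```python
+import torch
+from transformers import BertForMaskedLM, BertTokenizer
+
+tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
+model = BertForMaskedLM.from_pretrained("bert-base-chinese").eval()
+
+def closeness(sentence: str, i: int) -> float:
+    """How much the prediction of character i degrades when its right neighbor is also masked."""
+    ids = tokenizer(sentence, return_tensors="pt")["input_ids"]  # [CLS] c_1 ... c_n [SEP]
+    pos = i + 1                                                  # offset for [CLS]
+    target_id = ids[0, pos].item()
+
+    def log_prob(masked_positions):
+        masked = ids.clone()
+        for p in masked_positions:
+            masked[0, p] = tokenizer.mask_token_id
+        with torch.no_grad():
+            logits = model(input_ids=masked).logits
+        return torch.log_softmax(logits[0, pos], dim=-1)[target_id].item()
+
+    single = log_prob([pos])           # mask character i alone
+    joint = log_prob([pos, pos + 1])   # mask character i together with its right neighbor
+    return single - joint              # a large drop suggests the two characters form a unit
+```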
+
+Although this probing-based method can take advantage of the large amount of knowledge embedded in BERT, it only exploits BERT's representation ability implicitly. To transfer the implicit knowledge into explicit segmentation boundaries, we apply a self-training method that transforms segmentation decisions made with high confidence by the generative method into the traditional "BI" sequence labeling scheme, which then serves as the supervision signal for a discriminative model.
+
+To combine the advantages of both generative and discriminative models, we iteratively train the discriminative and generative models under the supervision signals from their counterparts. To select the best-performing model in the unsupervised setting, we propose an evaluation module that evaluates the quality of the word boundaries via masked prediction accuracy, based on the assumption that the closer two characters are, the greater the loss incurred by masking an adjacent character.
+
+We conduct experiments on two Chinese Word Segmentation benchmark datasets in an unsupervised manner. Experiment results show that our method outperforms strong baseline models and achieves state-of-the-art results in unsupervised CWS. Extensive analysis shows the effectiveness of the proposed modules.
+
+We conclude our contributions as follows:
+
+- We propose an unsupervised word segmentation method that segments tokens by probing and transforming a PLM with generative and discriminative modules, which are trained in a mutually promoting manner and selected for inference with an evaluation module.
+- Experiment results show that our proposed method achieves state-of-the-art results in unsupervised CWS. Extensive analysis verifies the effectiveness of the proposed modules.
+
+# 2 Related Work
+
+Previous unsupervised word segmentation methods can be roughly classified into two categories: generative and discriminative. Generative models focus on finding the segmented sequence with the highest posterior probability. The Hierarchical Dirichlet Process (HDP) model (Goldwater et al., 2009), Nested Pitman-Yor process (NPY) (Mochihashi et al., 2009), Hidden Markov Model (HMM) (Chen et al., 2014) and SLM (Sun and Deng, 2018) are all different ways to estimate the generative probabilities of segmented sequences. On the other hand, discriminative models focus on designing a measure to evaluate the segmented sequences. Mutual Information (MI) (Chang and Lin, 2003), normalized Variation of Branching Entropy (nVBE) (Magistry and Sagot, 2012) and ESA (Wang et al., 2011) apply co-occurrence based measurements to evaluate the segmented sequences.
+
+# 3 Approach
+
+In this section, we describe our BERT oriented probing and transformation based unsupervised word segmentation approach. Our model mainly consists of three parts, a generative module that suggests the plausible word boundaries by probing BERT, a discriminative module that transforms the implicit boundary information into explicit sequence labels, and an evaluation module that estimates the performance of the model in an unsupervised manner.
+
+Algorithm 1 Unsupervised Word Segmentation Procedure
+Require: Generative Module $G$, Discriminative Module $D$, Evaluation Module $E$, sequences to be segmented $X$
+ $i = 0$
+while True do
+    Segment the sequences $X$ with $G$ into $X^g$
+    Transform the segmented $X^g$ into "BI" labels
+    Train $D$ with high-confidence segmentations in $X^g$
+    Segment the sequences $X$ with the updated $D$ into $X^{d}$
+    Train $G$ with high-confidence segmentations in $X^{d}$
+    Evaluate the segmented sequences $X^{d}$ with $E$: $e^i = E(X^{d})$
+    if $e^i < e^{i - 1}$ then
+        return $D^{i - 1}$
+    end if
+    $i = i + 1$
+end while
+
+# 3.1 Overview
+
+Because our method works in an unsupervised manner, we propose to obtain the initial word boundary information by probing BERT with the generative module, which reveals word boundaries by measuring the distance between the representations obtained when masking a span and when masking a single token; this distance reflects the closeness between the masked token and the adjacent tokens in the span. Then the discriminative module transforms the word boundaries suggested by the generative module into explicit segmentation labels to enable the self-training process. To combine the advantages of both the generative and discriminative modules, the two modules are iteratively trained with the high-confidence word boundaries suggested by the updated counterpart. To decide when to stop this iterative self-training procedure, an evaluation module is proposed to evaluate the segmented sequences, which early-stops the iterative process and keeps the model parameters that yield the best performance.
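+
+A minimal sketch of this loop in Python is given below. The module interfaces ($G$, $D$, $E$ exposing segment/train/score methods and a high-confidence filter) are illustrative assumptions, not the authors' released code; the sketch only mirrors the control flow of Algorithm 1.
+
+```python
+import copy
+
+def iterative_self_training(G, D, E, sequences, max_iters=20):
+    """Control flow of Algorithm 1: G and D teach each other with their
+    high-confidence segmentations, and E decides when to stop."""
+    best_D, prev_score = copy.deepcopy(D), float("-inf")
+    for _ in range(max_iters):
+        # Generative module proposes word boundaries by probing BERT (Sec. 3.2).
+        gen_segments = G.segment(sequences)
+        # Its confident segmentations become "BI" labels that train D (Sec. 3.3).
+        D.train(G.filter_high_confidence(gen_segments))
+        # Discriminative module re-segments; its confident output supervises G.
+        disc_segments = D.segment(sequences)
+        G.train(D.filter_high_confidence(disc_segments))
+        # Evaluation module scores the current segmentation (Sec. 3.4).
+        score = E.score(disc_segments)
+        if score < prev_score:          # performance declined: early stop
+            break
+        best_D, prev_score = copy.deepcopy(D), score
+    return best_D                       # discriminative module with best score
+```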
+
+# 3.2 Generative Module
+
+The proposed generative module works by probing a pre-trained language model (e.g., BERT) with masks on tokens. Assume the input sequence to be $[x_1, x_2, \dots, x_n]$. We first mask one token at a time in order. The representation at the $i$-th position given by BERT after masking $x_{i}$ is $H_{i}$. Then we mask two successive tokens at a time in order. $H_{i,j}$ is the representation given by BERT at the $i$-th position after masking both $x_{i}$ and $x_{j}$. Note that it is different from the representation at the $j$-th position after masking both $x_{i}$ and $x_{j}$, which we denote as $H_{j,i}$.
+
+The intuition behind the generative model is that we assume if two tokens $x_{i}$ , $x_{j}$ are inherently close and should be combined as a word, the difference between masking $i$ -th and $j$ -th token together and solely masking $i$ -th token should be large, which is reflected by the probing distance $d$ ,
+
+$$
+d = \frac{\left| H_{i,j} - H_{i} \right| + \left| H_{j,i} - H_{j} \right|}{2}
+$$
+
+On the contrary, if two tokens are loosely connected, $d$ should be small. This assumption follows the intuition that if $x_{i}$ largely depends on $x_{j}$, masking $x_{j}$ should have a relatively large influence on the representation of $x_{i}$.
+
+This indicator is applied to segment a token sequence with a threshold: if $d \geq threshold$, we combine the two tokens $x_{i}$ and $x_{j}$; if $d < threshold$, we segment $x_{i}$ and $x_{j}$ apart.
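+
+As an illustration, the probing distance can be computed with the Hugging Face transformers API roughly as follows. The choice of the last hidden layer and of the Euclidean norm, as well as treating positions as indices into the tokenized sequence (position 0 is [CLS]), are assumptions made for this sketch rather than details fixed by the paper.
+
+```python
+import torch
+from transformers import BertTokenizer, BertModel
+
+tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
+model = BertModel.from_pretrained("bert-base-chinese").eval()
+
+def hidden_at(input_ids, mask_positions, position):
+    """Hidden state at `position` after masking the tokens at `mask_positions`."""
+    ids = input_ids.clone()
+    ids[0, mask_positions] = tokenizer.mask_token_id
+    with torch.no_grad():
+        out = model(ids)
+    return out.last_hidden_state[0, position]
+
+def probing_distance(sentence, i, j):
+    """d = (|H_{i,j} - H_i| + |H_{j,i} - H_j|) / 2 for adjacent positions i, j."""
+    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"]
+    H_i = hidden_at(input_ids, [i], i)
+    H_j = hidden_at(input_ids, [j], j)
+    H_ij = hidden_at(input_ids, [i, j], i)   # representation of x_i with both masked
+    H_ji = hidden_at(input_ids, [i, j], j)   # representation of x_j with both masked
+    return 0.5 * (torch.norm(H_ij - H_i) + torch.norm(H_ji - H_j)).item()
+```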
+
+# 3.3 Discriminative Module
+
+The generative module can only exploit the implicit segmentation information revealed by BERT. Furthermore, it does not handle longer words well. To overcome these drawbacks, we propose to transform the segmentation information provided by the generative module with high confidence into a traditional supervised sequence labeling scheme with "BI" labels, which indicate whether a token is the "beginning" ("B") or "inside" ("I") of a word.
+
+We train the discriminative module by fine-tuning BERT on the transformed sequence labels with an additional output layer projecting the representation onto the "BI" labels. Since the results given by the generative module can be noisy, we only adopt the combined words with relatively high confidence, which is realized by strict thresholds for the generative module: if $d \geq threshold_{h}$, we combine the two tokens $x_{i}$ and $x_{j}$; if $d \leq threshold_{l}$, we segment $x_{i}$ and $x_{j}$ apart, where $threshold_{l}$ denotes the lower bound and $threshold_{h}$ the higher bound.
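+
+For illustration, a tiny helper of the kind assumed here (not taken from the paper's code) converts a high-confidence segmentation into the "BI" labels used to fine-tune the discriminative module:
+
+```python
+def segmentation_to_bi_labels(words):
+    """Map a segmented sentence, e.g. ["她", "保证", "学生们"], to character-level
+    "BI" labels: ["B", "B", "I", "B", "I", "I"]."""
+    labels = []
+    for word in words:
+        labels.append("B")                    # first character of the word
+        labels.extend("I" * (len(word) - 1))  # remaining characters
+    return labels
+```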
+
+# 3.4 Iterative Training and Evaluation Module
+
+We assume that the generative module and the discriminative module can capture segmentation information from different aspects. Therefore, we propose a self-training procedure, which promotes both the generative module and the discriminative module by making them learn from the high-confidence predictions of their counterpart.
+
+To make the generative module learn from the discriminative module, we design a Euclidean distance based MSE loss function
+
+$$
+loss_{generative} = \left\| d - threshold \right\|^{2}
+$$
+
+to push the distance between two tokens predicted to be in the same word to be larger than a threshold and vice versa. The loss is activated only when the generative module makes different predictions from the discriminative module.
+
+To prevent the self-training procedure from over-fitting, we propose to keep the MLM objective while fine-tuning on the word segmentation objectives, and to early-stop the training with an evaluation module. The intuition behind the evaluation module is that predicting a masked token from a token inside the same word is much easier than predicting it from a token outside that word. Formally, let the cross-entropy of predicting the $i$-th token $x_{i}$ with the masked language modeling ability of BERT when masking the two adjacent tokens $x_{i}$ and $x_{j}$ be $CE_{i,j}$; we assume that
+
+$$
+CE_{i-1,i} < CE_{i,i+1}
+$$
+
+if $x_{i}$ and $x_{i+1}$, rather than $x_{i-1}$ and $x_{i}$, belong to the same word, because the unmasked $x_{i+1}$ provides more information for the prediction when $x_{i-1}$ and $x_{i}$ are masked.
+
+We apply this principle to inspect the segmentation results from either the discriminative module or the generative module. When the evaluation module detects performance decline, the training procedure stops, and the discriminative module with the best performance is used as the final word segmentation model.
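+
+A hedged sketch of the quantity $CE_{i,j}$ used by the evaluation module follows; using BertForMaskedLM and this particular indexing convention are assumptions made for illustration.
+
+```python
+import torch
+import torch.nn.functional as F
+from transformers import BertTokenizer, BertForMaskedLM
+
+tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
+mlm = BertForMaskedLM.from_pretrained("bert-base-chinese").eval()
+
+def masked_ce(sentence, i, j):
+    """Cross-entropy CE_{i,j} of recovering the token at position i when the
+    tokens at positions i and j are masked together."""
+    ids = tokenizer(sentence, return_tensors="pt")["input_ids"]
+    target = ids[0, i].item()
+    masked = ids.clone()
+    masked[0, [i, j]] = tokenizer.mask_token_id
+    with torch.no_grad():
+        logits = mlm(masked).logits
+    return F.cross_entropy(logits[0, i].unsqueeze(0), torch.tensor([target])).item()
+
+# The evaluation module compares CE_{i-1,i} with CE_{i,i+1} to judge whether x_i
+# is more tightly bound to its left or right neighbour.
+```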
+
+# 4 Experiment
+
+In this section, we show the results and analysis on two CWS benchmark datasets, PKU and MSR, which are provided by the Second International Chinese Word Segmentation Bakeoff (SIGHAN 2005) (Emerson, 2005), for a fair comparison. There are 104K and 107K words in the test sets of the PKU and MSR datasets respectively.
+
+# 4.1 Settings
+
+In this paper, we use the pre-trained BERT (base) model for Chinese and the corresponding tokenizer
+
+| F1 score | PKU | MSR |
| HDP (Goldwater et al., 2009) | 68.7 | 69.9 |
| NPY-3 (Mochihashi et al., 2009) | - | 80.7 |
| NPY-2 (Mochihashi et al., 2009) | - | 80.2 |
| ESA (Wang et al., 2011) | 77.8 | 80.1 |
| nVBE (Magistry and Sagot, 2012) | 80.0 | 81.3 |
| HDP + HMM (Chen et al., 2014) | 75.3 | 76.3 |
| Joint (Chen et al., 2014) | 81.1 | 81.7 |
| SLM-2 (Sun and Deng, 2018) | 80.2 | 78.5 |
| SLM-3 (Sun and Deng, 2018) | 79.8 | 79.4 |
| MSLM (Downey et al., 2021) | 62.9 | - |
| Proposal | 84.1 | 83.0 |
+
+
+Figure 1: The relation between evaluation score and F1 score on the development set. The evaluation score shows good coherence with F1 score. We select the model with best evaluation score, which also achieves the best F1 score on the development set.
+
+released by Huggingface. The tokenizer tokenizes the sentence into Chinese characters, which involves no word (segmentation) information. We randomly initialize the discriminative module, which is trained for 2 epochs using sequence labels transformed from the generative module with high confidence. $threshold_{l}$ is 8 and $threshold_{h}$ is 12. We use the AdamW (Loshchilov and Hutter, 2019) optimizer with a learning rate of 1e-4.
+
+# 4.2 Results
+
+In Table 1 we show the F1 score on PKU and MSR. From the results, we can see that our model yields much better results than the previous models and achieves state-of-the-art results. We assume the reason behind this is that our model can take advantage of the large pre-trained language model, which encodes abundant language matching knowledge and can better model the context with its big model capacity.
+
+Table 1: F1 scores on two word segmentation benchmark datasets. Our proposed method achieves state-of-the-art performance on both datasets. Baseline results are taken from the original papers.
+
+| F1 score | PKU | MSR |
| Generative Only | 74.8 | 72.5 |
| +Discriminative | 79.7 | 78.3 |
| +Discriminative & iterative | 80.5 | 78.9 |
| +Discriminative & mlm | 82.0 | 82.1 |
| Full Model | 84.1 | 83.0 |
+
+Table 2: Ablation study results. "mlm" means using the MLM loss as a regularization, as mentioned in Section 3.4. "iterative" means using the iterative training mentioned in Section 3.4. "Full model" means using Discriminative & mlm & iterative training.
+
+Moreover, we can observe that the neural-based model SLM does not outperform the traditional statistical Joint method, but gives better results than other traditional generative models. This indicates that combining generative and discriminative methods can benefit the results. Furthermore, our model does not need to constrain the longest word length compared with SLM-2, SLM-3, etc., which provides more flexibility. This is achieved by introducing the discriminative module, which segments words under the sequence labeling scheme.
+
+# 4.3 Ablation Study
+
+In Table 2 we show the results of removing the designed modules. "Generative Only" means we only use the generative module described in Section 3.2, where a hard threshold of 10 is used to decide the word boundary. "+Discriminative" means we use the discriminative module after learning from the generative module described in Section 3.3, without iterative training and the MLM loss. From the results, we can see that revealing the implicit word boundary information by probing BERT alone only provides performance comparable to traditional statistical models. Transforming the implicit knowledge into explicit segmentation labels (+Discriminative) gives a big boost, which makes better use of the large amount of semantic knowledge encoded in the PLM. Moreover, the proposed iterative training process and MLM loss further help improve the overall performance by combining the advantages of both the generative and discriminative modules.
+
+Effect of Evaluation Module In Figure 1, we show the relation between the evaluation score described in Section 3.4 and the development F1 score. We can see that the model with the best evaluation score also achieves the best F1 score on the development set, and the evaluation score generally follows the variation trend of the F1 score, which makes it a reasonable indicator for selecting the best model in the unsupervised setting.
+
+# 4.4 Case Study
+
+In Table 3 we show a concrete example of the segmentation results of SLM and our proposed method. Both methods give mostly correct word segments. The disagreement mainly lies in "送交市政府" (hand over to the city government). Compared with other words, "送交" is relatively rare and bears a very similar meaning to the single character "送", which makes SLM wrongly split "送交" apart. On the contrary, our method is built on BERT trained on a large corpus, which makes our model able to recognize such relatively rare words. For the part "市政府", where our model chooses to split, we assume that this is because similar contexts such as "北京市" (Beijing City), where "市" should be separated from "政府" (government), are often seen. Furthermore, separating "市政府" into two words does not affect the understanding of the original text and depends more on the segmentation granularity.
+
+# 5 Conclusion
+
+In this paper, we propose a BERT-oriented probing and transformation method for unsupervised word segmentation. Our proposed model turns the semantic information encoded in a PLM into word boundary information by probing the token representations and transforming them into explicit sequence labels. Experiment results on two benchmark CWS datasets show that our method achieves state-of-the-art F1 scores. The proposed method works in an unsupervised manner, which can help understand low-resource and endangered languages and thus protect language diversity.
+
+# Acknowledgements
+
+This research project is supported by National Key R&D Program of China (No. 2019YFC1521200), the National Natural Science Foundation of China (No. 61872402), the Humanities and Social Science Project of the Ministry of Education (No. 17YJAZH068), Science Foundation of Beijing Language and Culture University (supported by “the Fundamental Research Funds for the Central Universities”) (No. 21YBB19)
+
+# References
+
+Jason S. Chang and Tracy Lin. 2003. Unsupervised word segmentation without dictionary. In ROCLING 2003 Poster Papers, pages 355-359, Hsinchu, Taiwan. The Association for Computational Linguistics and Chinese Language Processing (ACLCLP).
+Miaohong Chen, Baobao Chang, and Wenzhe Pei. 2014. A joint model for unsupervised Chinese word segmentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 854-863, Doha, Qatar. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+C. Downey, Fei Xia, Gina-Anne Levow, and Shane Steinert-Threlkeld. 2021. A masked segmental language model for unsupervised natural language segmentation. ArXiv, abs/2104.07829.
+Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.
+Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2009. A bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21-54.
+Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
+Pierre Magistry and Benoit Sagot. 2012. Unsupervised word segmentation: the case for Mandarin Chinese. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 383-387, Jeju Island, Korea. Association for Computational Linguistics.
+Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested Pitman-Yor language modeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 100-108, Suntec, Singapore. Association for Computational Linguistics.
+Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
+
+| Model | Segmentation |
| Gold | 她保证,学生们的意见将送交市政府领导机关。 |
| SLM | 她保证,学生们的意见将送交市政府领导机关。 |
| Proposal | 她保证,学生们的意见将送交市政府领导机关。 |
+
+Table 3: Segmentation results of SLM and our proposed method. The gold content can be loosely translated as "She promised that the suggestions of the students would be transferred to the leading agency of the city government."
+
+
+Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
+
+Zhiqing Sun and Zhi-Hong Deng. 2018. Unsupervised neural word segmentation for Chinese via segmental language modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4915-4920, Brussels, Belgium. Association for Computational Linguistics.
+
+Hanshi Wang, Jian Zhu, Shiping Tang, and Xiaozhong Fan. 2011. A new unsupervised approach to word segmentation. Computational Linguistics, 37(3):421-454.
+
+Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed masking: Parameter-free probing for analyzing and interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4166-4176, Online. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/images.zip b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a6587f7a485bf606014e611761ae478b20d7fe03
--- /dev/null
+++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ba11e9b17da0cad262d9e315c478848ad0a644477ede5f885f1c06fe864e16f
+size 139117
diff --git a/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/layout.json b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..32af911af64c81229ac0ba6fd11ee52d45bd9e77
--- /dev/null
+++ b/unsupervisedchinesewordsegmentationwithbertorientedprobingandtransformation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f027f0e90ae22d7020e19f4e901be2eb717ce61b96ba8f21ba70712ccf5d85c8
+size 213870
diff --git a/unsupervisednaturallanguageinferenceusingphltripletgeneration/19b39625-376b-4052-8044-4736861fac1b_content_list.json b/unsupervisednaturallanguageinferenceusingphltripletgeneration/19b39625-376b-4052-8044-4736861fac1b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9bf33f9d7b0da86f1b2041cac8aad7218fe6ae2a
--- /dev/null
+++ b/unsupervisednaturallanguageinferenceusingphltripletgeneration/19b39625-376b-4052-8044-4736861fac1b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ffa7d66fb67e8fd2ad8d441098016d5291644dfe6a054154bd904bce63823bd2
+size 98586
diff --git a/unsupervisednaturallanguageinferenceusingphltripletgeneration/19b39625-376b-4052-8044-4736861fac1b_model.json b/unsupervisednaturallanguageinferenceusingphltripletgeneration/19b39625-376b-4052-8044-4736861fac1b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..24051715b6701add63d1cc1da72c998f48934363
--- /dev/null
+++ b/unsupervisednaturallanguageinferenceusingphltripletgeneration/19b39625-376b-4052-8044-4736861fac1b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd3ca617dbd78a40cde0a5e8d3c324cbb0dd16e88840749bcdea6ed5daad67e8
+size 117025
diff --git a/unsupervisednaturallanguageinferenceusingphltripletgeneration/19b39625-376b-4052-8044-4736861fac1b_origin.pdf b/unsupervisednaturallanguageinferenceusingphltripletgeneration/19b39625-376b-4052-8044-4736861fac1b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ddb4a3eb11837f84d44381e19ad6c69374bd9d35
--- /dev/null
+++ b/unsupervisednaturallanguageinferenceusingphltripletgeneration/19b39625-376b-4052-8044-4736861fac1b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c53de68dc396372dc346e420f8d7ee268d6fe0ad07a2ad86a3664ee9fe522a47
+size 455217
diff --git a/unsupervisednaturallanguageinferenceusingphltripletgeneration/full.md b/unsupervisednaturallanguageinferenceusingphltripletgeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..00d396ae111647064042ba09918777430e28f5ee
--- /dev/null
+++ b/unsupervisednaturallanguageinferenceusingphltripletgeneration/full.md
@@ -0,0 +1,411 @@
+# Unsupervised Natural Language Inference Using PHL Triplet Generation
+
+# Neeraj Varshney, Pratyay Banerjee, Tejas Gokhale, Chitta Baral
+Arizona State University
+
+{nvarshn2, pbanerj6, tgokhale, cbaral}@asu.edu
+
+# Abstract
+
+Transformer-based models achieve impressive performance on numerous Natural Language Inference (NLI) benchmarks when trained on the respective training datasets. However, in certain cases, training samples may not be available or collecting them could be time-consuming and resource-intensive. In this work, we address the above challenge and present an exploratory study on unsupervised NLI, a paradigm in which no human-annotated training samples are available. We investigate it under three settings: $PH$, $P$, and $NPH$, which differ in the extent of unlabeled data available for learning. As a solution, we propose a procedural data generation approach that leverages a set of sentence transformations to collect PHL (Premise, Hypothesis, Label) triplets for training NLI models, bypassing the need for human-annotated training data. Comprehensive experiments with several NLI datasets show that the proposed approach results in accuracies of up to $66.75\%$, $65.9\%$, and $65.39\%$ in the PH, P, and NPH settings respectively, outperforming all existing unsupervised baselines. Furthermore, fine-tuning our model with as little as $\sim 0.1\%$ of the human-annotated training dataset (500 instances) leads to $12.2\%$ higher accuracy than a model trained from scratch on the same 500 instances. Supported by this superior performance, we conclude with a recommendation for collecting high-quality task-specific data.
+
+# 1 Introduction
+
+Natural Language Inference (NLI) is the task of determining whether a "hypothesis" is true (Entailment), false (Contradiction), or undetermined (Neutral) given a "premise". State-of-the-art models have matched human performance on several NLI benchmarks, such as SNLI (Bowman et al., 2015), Multi-NLI (Williams et al., 2018), and Dialogue NLI (Welleck et al., 2019). This high performance can be partially attributed to the availability of large training datasets; SNLI (570k), Multi-NLI (392k),
+
+
+Figure 1: Illustrating our procedural data generation approach for unsupervised NLI. A sentence is treated as premise, and multiple hypotheses conditioned on each label (Entailment- E, Contradiction- C, and Neutral- N) are generated using a set of sentence transformations.
+
+and Dialogue-NLI (310k). For new domains, collecting such training data is time-consuming and can require significant resources. What if no training data was available at all?
+
+In this work, we address the above question and explore Unsupervised NLI, a paradigm in which no human-annotated training data is provided for learning the task. We study three different unsupervised settings: $PH$ , $P$ , and $NPH$ that differ in the extent of unlabeled data available for learning. In PH-setting, unlabeled premise-hypothesis pairs are available i.e. data without ground-truth labels. In P-setting, only a set of premises are available i.e. unlabeled partial inputs. The third setting NPH does not provide access to any training dataset, and thus it is the hardest among the three unsupervised settings considered in this work.
+
+We propose to solve these unsupervised settings using a procedural data generation approach. Given a sentence, our approach treats it as a premise (P)
+
+
+Figure 2: Comparing supervised NLI with our three unsupervised settings. For unsupervised settings, we procedurally generate PHL triplets to train the NLI model. In NPH setting, a premise pool is collected from raw text corpora such as Wikipedia and then used for generating PHL triplets. In P setting, we directly apply these transformations on the available premises. In PH setting, we leverage the P-setting model to pseudo-label and filter the provided unlabeled PH pairs and then train the NLI model using this pseudo-labeled dataset.
+
+and generates multiple hypotheses (H) corresponding to each label $(\mathrm{L} = \text{Entailment, Contradiction, and Neutral})$ using a set of sentence transformations (refer to Figure 1). This results in creation of Premise-Hypothesis-Label (PHL) triplets that can be used for training the NLI model. In the P and PH settings, we directly apply our sentence transformations over the available premises to generate PHL triplets. However, in the NPH setting, premises are not available. We tackle this challenge by incorporating a premise generation step that extracts sentences from various raw text corpora such as Wikipedia and short stories. We use these extracted sentences as premises to generate PHL triplets. In Figure 2, we compare the four settings (one supervised and three unsupervised) and show our approach to develop an NLI model for each setting.
+
+To evaluate the efficacy of the proposed approach, we conduct comprehensive experiments with several NLI datasets. We show that our approach results in accuracies of $66.75\%$, $65.9\%$, and $65.39\%$ on the SNLI dataset in the PH, P, and NPH settings respectively, outperforming all existing unsupervised methods by $\sim 13\%$. We also conduct experiments in low-data regimes where a few human-annotated labeled instances are provided and show that further fine-tuning our models with these instances consistently achieves higher performance than models fine-tuned from scratch. For example, with just 500 labeled instances, our models achieve $8.4\%$ and $10.4\%$ higher accuracy on the SNLI and MNLI datasets respectively. Finally, we show that fine-tuning with 'adversarial' instances instead of randomly selected human-annotated instances further improves the performance of our models; it leads to $12.2\%$ and $10.41\%$ higher accuracy on SNLI and MNLI respectively.
+
+In summary, our contributions are as follows:
+
+1. We explore three unsupervised settings for NLI and propose a procedural data generation approach that outperforms the existing approaches by $\sim 13\%$ and raises the state-of-the-art unsupervised performance on SNLI to $66.75\%$ .
+2. We also conduct experiments in low-data regimes and demonstrate that further fine-tuning our models with the provided instances achieves $8.4\%$ and $10.4\%$ higher accuracy on SNLI and MNLI datasets respectively.
+3. Finally, we show that using 'adversarial' instances for fine-tuning instead of randomly selected instances further improves the accuracy. It leads to $12.2\%$ and $10.41\%$ higher accuracy on SNLI and MNLI respectively. Supported by this superior performance, we conclude with a recommendation for collecting high-quality task-specific data.
+
+We release the implementation$^{1}$ of our procedural data generation approach and hope that our work will encourage research in developing techniques that reduce reliance on expensive human-annotated data for training task-specific models.
+
+# 2 Related Work
+
+Unsupervised Question-Answering: The unsupervised paradigm, where no human-annotated training data is provided for learning, has mostly been explored for the Question Answering (QA) task in NLP. The prominent approach involves synthesizing QA pairs and training a model on the synthetically generated data. Lewis et al. (2019); Dhingra et al. (2018); Fabbri et al. (2020) propose template-based approaches, while Puri et al. (2020) leverage generative models such as GPT-2 (Radford et al., 2019) to synthesize QA pairs. Banerjee and Baral (2020) create synthetic graphs for commonsense knowledge and propose knowledge triplet learning. Wang et al. (2021) leverage the few-shot inference capability of GPT-3 (Brown et al., 2020) to synthesize training data for SuperGLUE (Wang et al., 2019) tasks. For visual question answering, Gokhale et al. (2020) use template-based data augmentation methods for negation and conjunction, and Banerjee et al. (2021) utilize image captions to generate training data. Gokhale et al. (2021) use linguistic transformations in a distributionally robust optimization setting for vision-and-language inference models.
+
+Unsupervised NLI: In NLI, Cui et al. (2020) propose a multimodal aligned contrastive decoupled learning method (MACD) and train a BERT-based text encoder. They assign a label (E, C, N) based on the cosine similarity between representations of premise and hypothesis learned by their text encoder. Our approach differs from MACD as we leverage a procedural data generation step based on a set of sentence transformations and do not leverage data from other modalities. We use MACD as one of the baselines in our experiments.
+
+# 3 Unsupervised NLI
+
+In NLI, a premise-hypothesis pair $(P,H)$ is provided as input and the system needs to determine the relationship $L\in \{\text{Entailment},\text{Contradiction},\text{Neutral}\}$ between $P$ and $H$. In the supervised setting, a labeled dataset $D_{train} = \{(P_i,H_i),L_i\}_{i = 1}^M$ consisting of $M$ instances, which are usually human-annotated, is available for training. However, in the unsupervised setting, labels $L_{i}$ are not available, thus posing a significant challenge for training NLI systems. Along with this standard unsupervised setting (referred to as PH), we consider two novel unsupervised settings (P and NPH) that differ in the extent of unlabeled data available for learning:
+
+PH-setting: It corresponds to the standard unsupervised setting where an unlabeled dataset of PH pairs $\{(P_i, H_i)\}_{i=1}^M$ is provided.
+
+P-setting: In this setting, only premises from $D_{train}$ i.e $\left(\{(P_i)\}_{i=1}^M\right)$ are provided. It is an interesting setting as the large-scale NLI datasets such as SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) have been collected by presenting only the premises to crowd-workers and asking them to write a hypothesis corresponding to each label. Furthermore, this setting presents a harder challenge for training NLI systems than the PH-setting as only partial inputs are provided.
+
+NPH-setting: Here, no datasets (even with partial inputs) are provided. Thus, it corresponds to the hardest unsupervised NLI setting considered in this work. This setting is of interest in scenarios where we need to make inferences on a test dataset but its corresponding training dataset is not available in any form.
+
+From the above formulation, it can be inferred that the hardness of the task increases with each successive setting $(\mathrm{PH} \rightarrow \mathrm{P} \rightarrow \mathrm{NPH})$ as less and less information is made available. In order to address the challenges of each setting, we propose a two-step approach that includes a pipeline for procedurally generating PHL triplets from the limited information provided in each setting (Section 4), followed by training an NLI model using this procedurally generated data (Section 5). Figure 2 highlights the differences between the four NLI settings (one supervised and three unsupervised) and summarizes our approach to develop an NLI model for each setting.
+
+# 4 PHL Triplet Generation
+
+To compensate for the absence of labeled training data, we leverage a set of sentence transformations and procedurally generate PHL triplets that can be used for training the NLI model. In P and PH settings, we apply these transformations on the provided premise sentences. In the NPH setting where premises are not provided, we extract sentences from various raw text corpora and apply these transformations on them to generate PHL triplets.
+
+# 4.1 $\mathcal{P}$ : Premise Generation
+
+We extract sentences from raw text sources, namely, COCO captions (Lin et al., 2014), ROC stories (Mostafazadeh et al., 2016), and Wikipedia to compile a set of premises for the NPH setting. We use these text sources as they are easily available and contain a large number of diverse sentences from multiple domains.
+
+ROC Stories is a collection of short stories consisting of five sentences each. We include all these sentences in our premise pool. MS-COCO is a dataset consisting of images with five captions each. We add all captions to our premise pool. From Wikipedia, we segment the paragraphs into individual sentences and add them to our premise pool.
+
+We do not perform any sentence filtration during the premise collection process. However, each transformation (described in subsection 4.2) has its pre-conditions such as presence of verbs/ adjectives/nouns that automatically filter out sentences from the premise pool that can not be used for PHL triplet generation.
+
+# 4.2 $\mathcal{T}$ : Transformations
+
+Now, we present our sentence transformations for each NLI label. Table 1 illustrates examples of PHL triplets generated from these transformations.
+
+# 4.2.1 Entailment:
+
+In NLI, the label is entailment when the hypothesis must be true if the premise is true.
+
+Paraphrasing (PA): Paraphrasing corresponds to expressing the meaning of a text (restatement) using other words and hence results in entailment premise-hypothesis pairs. We use the Pegasus (Zhang et al., 2019) tool to generate up to 10 paraphrases of a sentence and use them as hypotheses with the original sentence as the premise$^{2}$.
+
+Extracting Snippets (ES): We use the dependency parse tree to extract meaningful snippets from a sentence and use them as hypotheses with the original sentence as the premise. Specifically, we extract sub-trees that form a complete phrase or a sentence. For example, from the sentence "A person with red shirt is running near the garden", we create the entailing hypotheses "A person is running near the garden", "A person is running", "A person is near the garden", etc. We implement 10 such techniques using spacy (Honnibal et al., 2020)$^{2}$.
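+
+A rough sketch of one such snippet-extraction heuristic with spaCy is shown below; dropping a prepositional-phrase subtree is a single illustrative rule, not the authors' full set of ten techniques.
+
+```python
+import spacy
+
+nlp = spacy.load("en_core_web_sm")
+
+def extract_snippets(sentence):
+    """Entailed snippets obtained by removing one prepositional-phrase subtree."""
+    doc = nlp(sentence)
+    snippets = set()
+    for token in doc:
+        if token.dep_ == "prep":
+            removed = set(token.subtree)
+            snippets.add(" ".join(t.text for t in doc if t not in removed))
+    return snippets
+
+# extract_snippets("A person with red shirt is running near the garden") yields
+# snippets such as "A person is running near the garden" and
+# "A person with red shirt is running".
+```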
+
+Hypernym Substitution (HS): A hypernym of a word is its supertype; for example, "animal" is a hypernym of "dog". We use WordNet (Miller, 1995) to collect hypernyms and replace noun(s) in a sentence with their corresponding hypernyms to create an entailment hypothesis. For example, from the premise "A black dog is sleeping", we create "A black animal is sleeping". Note that swapping the premise and hypothesis in this case gives us another PH pair that has a 'Neutral' relationship.
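+
+A minimal sketch of this substitution with NLTK's WordNet interface is given below; picking the first synset and the first hypernym lemma is a simplifying assumption.
+
+```python
+import nltk
+from nltk.corpus import wordnet as wn
+
+nltk.download("wordnet", quiet=True)
+
+def hypernym_substitute(sentence, noun):
+    """Replace `noun` in `sentence` with a WordNet hypernym, if one exists."""
+    synsets = wn.synsets(noun, pos=wn.NOUN)
+    if not synsets or not synsets[0].hypernyms():
+        return None
+    hypernym = synsets[0].hypernyms()[0].lemma_names()[0].replace("_", " ")
+    return sentence.replace(noun, hypernym)
+
+# hypernym_substitute("A black dog is sleeping", "dog")
+# -> e.g. "A black canine is sleeping"
+```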
+
+Pronoun Substitution (PS): Here, we leverage Part-of-Speech (POS) tagging of spacy to heuristically substitute a noun with its mapped pronoun. For example, substituting "boy" with "he" in the sentence "boy is dancing in arena" results in an entailing hypothesis "he is dancing in arena".
+
+Counting (CT): Here, we count nouns with common hypernyms and use several templates such as "There are {count} {hypernym}s present" to generate entailing hypotheses. For instance, from the sentence "A motorbike and a car are parked", we create the hypothesis "Two automobiles are parked". We also create contradiction hypotheses using the same templates by simply changing the count value, such as "There are five automobiles present".
+
+# 4.2.2 Contradiction:
+
+The label is contradiction when the hypothesis can never be true if the premise is true.
+
+Contradictory Words (CW): We replace noun(s) and/or adjective(s) (identified using spacy POS tagging) with their corresponding contradictory words. For example, replacing the word 'big' with 'small' in "He lives in a big house" results in a contradictory hypothesis "He lives in a small house". For contradictory adjectives, we collect antonyms from wordnet and for nouns, we use the function 'most_similar' from gensim (Rehurek and Sojka, 2011)².
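+
+The sketch below illustrates the two lookups assumed here: WordNet antonyms for adjectives and embedding-space neighbours for nouns. The specific pre-trained vectors are an assumption, since the paper does not pin them down in this excerpt.
+
+```python
+import nltk
+import gensim.downloader as api
+from nltk.corpus import wordnet as wn
+
+nltk.download("wordnet", quiet=True)
+vectors = api.load("glove-wiki-gigaword-100")   # stand-in pre-trained vectors
+
+def contradictory_adjective(adjective):
+    """WordNet antonym, e.g. "big" -> "small"."""
+    for synset in wn.synsets(adjective, pos=wn.ADJ):
+        for lemma in synset.lemmas():
+            if lemma.antonyms():
+                return lemma.antonyms()[0].name()
+    return None
+
+def contradictory_noun(noun):
+    """A near neighbour in embedding space, which typically names a different
+    entity of the same kind and therefore yields a contradictory hypothesis."""
+    neighbours = vectors.most_similar(noun, topn=1)
+    return neighbours[0][0] if neighbours else None
+```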
+
+Contradictory Verb (CV): We collect contradictory verbs from gensim and create hypothesis in the following two ways: (i) substituting verb with its contradictory verb: for example, from “A girl is walking”, we create hypothesis “A girl is driving” and (ii) selecting other sentences from the premise pool that have the same subject as the original sentence but have contradictory verbs: for example, sentences like “A young girl is driving fast on the street” and “There is a girl skiing with
+
+| Transformation | Original Sentence (Premise) | Hypothesis | Label |
| PA | Fruit and cheese sitting on a black plate | There is fruit and cheese on a black plate | E |
| PA + ES + HS | A large elephant is very close to the camera | Elephant is close to the photographic equipment | E |
| CW-noun | Two horses that are pulling a carriage in the street | Two dogs that are pulling a carriage in the street | C |
| CV | A young man sitting in front of a TV | A man in green jersey jumping on baseball field | C |
| PA + CW | A woman holding a baby while a man takes a picture of them | A kid is taking a picture of a male and a baby | C |
| FCon | A food plate on a glass table | A food plate made of plastic on a glass table | N |
| PA + AM | Two dogs running through the snow | The big dogs are outside | N |
+
+Table 1: Illustrative examples of PHL triplets generated from our proposed transformations. E,C, and N correspond to the NLI labels Entailment, Contradiction, and Neutral respectively.
+
+her mother". The second approach adds diversity to our synthetically generated PHL triplets $^2$ .
+
+Subject Object Swap (SOS): We swap the subject and object of a sentence to create a contradictory hypothesis. For example, from the sentence "A clock is standing on top of a concrete pillar", we create a contradictory hypothesis "a pillar is standing on top of a concrete clock".
+
+Negation Introduction (NI): We introduce negation into a sentence to create a contradictory hypothesis. For example, from the sentence "Empty fog covered streets in the night", we create hypothesis "Empty fog did not cover streets in the night".
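+
+A rough sketch of negation introduction for the simple past-tense case is shown below; it handles only a bare past-tense main verb and is an illustration, not a complete implementation.
+
+```python
+import spacy
+
+nlp = spacy.load("en_core_web_sm")
+
+def negate(sentence):
+    """Negate a simple past-tense main verb, e.g. "covered" -> "did not cover"."""
+    doc = nlp(sentence)
+    tokens = []
+    for token in doc:
+        if token.dep_ == "ROOT" and token.tag_ == "VBD":
+            tokens.extend(["did", "not", token.lemma_])
+        else:
+            tokens.append(token.text)
+    return " ".join(tokens)
+
+# negate("Empty fog covered streets in the night")
+# -> "Empty fog did not cover streets in the night" (token spacing not restored)
+```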
+
+Number Substitution (NS): Here, we change numbers (tokens with dependency tag ‘nummod’ in the parse tree) in a sentence. For example, changing ‘four’ to ‘seven’ in the sentence “Car has four red lights” results in a contradictory hypothesis.
+
+Irrelevant Hypothesis (IrH): We sample sentences that have different subjects and objects than the premise sentence. For example, for the premise "Sign for an ancient monument on the roadside", we sample "A man goes to strike a tennis ball" as a contradictory hypothesis.
+
+# 4.2.3 Neutral:
+
+The label is neutral when the premise does not provide enough information to classify a PH pair as either entailment or contradiction.
+
+Adding Modifiers (AM): We introduce a relevant modifier for noun(s) in premise to generate a neutral hypothesis. For instance, in the sentence "A car parked near the fence", we insert modifier 'silver' for the noun 'car' and create hypothesis "A silver car parked near the fence". We collect relevant modifiers for nouns by parsing sentences in the premise pool and selecting tokens with dependency tag 'amod' and POS tag 'ADJ'.
+
+ConceptNet (Con): We add relevant information from ConceptNet (Speer et al., 2017) relations ('At-Location', 'DefinedAs', etc.) to the premise and create a neutral hypothesis. For instance, from the sentence "Bunch of bananas are on a table", we create hypothesis "Bunch of bananas are on a table at kitchen" using the 'AtLocation' relation.
+
+Same Subject but Non-Contradictory Verb (SSNCV): For a premise, we select sentences from the premise pool that have the same subject as the premise and contain additional noun(s) but no contradictory verbs, and use them as neutral hypotheses. For instance, for the premise "A small child is sleeping in a bed with a bed cover", we sample "A child laying in bed sleeping with a chair near by" as a hypothesis.
+
+We create more examples by swapping premise and hypothesis of the collected PHL triplets and accordingly change the label. For instance, swapping $P$ and $H$ in HS, ES, etc. results in neutral examples, swapping $P$ and $H$ in AM, Con results in entailment examples. Furthermore, we note that transformations ES, HS, PS, SOS, NI result in PH pairs with high word overlap between premise and hypothesis sentences, whereas, transformation PA, CV, IrH, SSNCV, etc. result in PH pairs with low word overlap. In order to add more diversity to the examples, we use composite transformations on the same sentence such as PA + ES ( $L = E$ ), PA + CW ( $L = C$ ) as shown in Table 1.
+
+# 4.3 Data Validation
+
+In order to measure the correctness of our procedurally generated PHL triplets, we validate 50 randomly sampled instances for each transformation. We find that nearly all the instances get correct label assignments in the case of the PA, HS, PS, NI, NS, IrH, and AM transformations, while the transformations CW, Con, and SSNCV result in a few mislabeled instances. Specifically, the SSNCV transformation results in the maximum number of errors (5). Appendix Section B provides examples of such instances. While it is beneficial to have noise-free training examples, doing so would require more human effort and increase the data collection cost. Thus, in this work, we study how well we can do solely using the procedurally generated data without investing human effort in either creating instances or eliminating noise.
+
+# 5 Training NLI Model
+
+In this section, we describe our approach to develop NLI models for each unsupervised setting. Table 13 (in Appendix) shows sizes of the generated PHL datasets for each setting.
+
+# 5.1 NPH-Setting
+
+We use the Premise Generation function $(\mathcal{P})$ over raw-text sources, namely, COCO captions, ROC stories, and Wikipedia i.e., $\mathcal{P}(\mathrm{COCO})$ , $\mathcal{P}(\mathrm{ROC})$ , and $\mathcal{P}(\mathrm{Wiki})$ to compile a set of premises and apply the transformations $(\mathcal{T})$ over them to generate PHL triplets. We then train a transformer-based 3-class classification model (Section 6.1) using the generated PHL triplets for the NLI task.
+
+# 5.2 P-Setting
+
+In this slightly relaxed unsupervised setting, premises of the training dataset are provided. We directly apply the transformation functions $(\mathcal{T})$ on the given premises and generate PHL triplets. Similar to the NPH setting, a 3-class classification model is trained using the generated PHL triplets.
+
+# 5.3 PH-Setting
+
+In this setting, unlabeled training data is provided. We present a 2-step approach to develop a model for this setting. In the first step, we create PHL triplets from the premises and train a model using the generated PHL triplets (same as the P-setting). In the second step, we pseudo-label the unlabeled PH pairs using the model trained in Step 1.
+
+Here, a naive approach to develop the NLI model would be to train using this pseudo-labeled dataset. This approach is limited by confirmation bias, i.e. overfitting to incorrect pseudo-labels predicted by the model (Arazo et al., 2020). We address this by filtering instances from the pseudo-labeled dataset based on the model's prediction confidence. We use the maximum softmax probability (maxProb) as the confidence measure and select only the instances that have high prediction confidence for training the final NLI model. This approach is based on prior work (Hendrycks and Gimpel, 2017) showing that correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified examples. Furthermore, we investigate two ways of training the final NLI model:
+
+Augmenting with $\mathcal{T}(P)$ : Train using the selected pseudo-labeled dataset and the PHL triplets generated in Step 1.
+
+Further Fine-tune P-Model: Further fine-tune the model obtained in Step 1 with the selected pseudo-labeled dataset instead of fine-tuning one from scratch.
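+
+The sketch below illustrates the maxProb filtering step described above, under stated assumptions: a fine-tuned sequence-classification model, a Hugging Face tokenizer, and an illustrative confidence threshold of 0.9.
+
+```python
+import torch
+
+def pseudo_label_and_filter(model, tokenizer, ph_pairs, threshold=0.9):
+    """Pseudo-label unlabeled (premise, hypothesis) pairs with the P-model and
+    keep only predictions whose maximum softmax probability is high."""
+    kept = []
+    model.eval()
+    for premise, hypothesis in ph_pairs:
+        enc = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
+        with torch.no_grad():
+            probs = torch.softmax(model(**enc).logits, dim=-1)[0]
+        confidence, label = probs.max(dim=-1)
+        if confidence.item() >= threshold:      # discard low-confidence pseudo-labels
+            kept.append((premise, hypothesis, label.item()))
+    return kept
+```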
+
+# 6 Experiments
+
+# 6.1 Experimental Setup
+
+Datasets: We conduct comprehensive experiments with a diverse set of NLI datasets: SNLI (Bowman et al., 2015) (sentences derived from a single text genre), Multi-NLI (Williams et al., 2018) (sentences derived from multiple text genres), Dialogue NLI (Welleck et al., 2019) (sentences from the context of dialogues), and Breaking NLI (Glockner et al., 2018) (adversarial instances).
+
+Model: We use the BERT-BASE model (Devlin et al., 2019) with a linear layer on top of the [CLS] token representation for training the 3-class classification model. We trained models for 5 epochs with a batch size of 32 and learning rates in the range 1e-5 to 5e-5. All experiments are done with Nvidia V100 16GB GPUs.
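+
+A hedged sketch of this setup with the Hugging Face Trainer is shown below; the specific checkpoint name, sequence length, and learning-rate choice are illustrative assumptions within the ranges stated above.
+
+```python
+from transformers import (BertTokenizer, BertForSequenceClassification,
+                          Trainer, TrainingArguments)
+
+tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
+# BertForSequenceClassification places a classification head on BERT's pooled
+# [CLS] output, approximating the linear-layer-on-[CLS] setup described above.
+model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
+
+def encode(premise, hypothesis):
+    return tokenizer(premise, hypothesis, truncation=True, max_length=128)
+
+args = TrainingArguments(
+    output_dir="nli_model",
+    num_train_epochs=5,
+    per_device_train_batch_size=32,
+    learning_rate=2e-5,   # the paper sweeps 1e-5 to 5e-5
+)
+# trainer = Trainer(model=model, args=args, train_dataset=encoded_phl_triplets)
+# trainer.train()   # encoded_phl_triplets: the procedurally generated PHL data
+```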
+
+Baseline Methods: We compare our approach with Multimodal Aligned Contrastive Decoupled learning (MACD) (Cui et al., 2020), Single-modal pre-training model BERT (Devlin et al., 2019), Multi-modal pre-training model LXMERT (Tan and Bansal, 2019), and VilBert (Lu et al., 2019).
+
+# 6.2 Results
+
+NPH-Setting: We utilize three raw text sources: COCO, ROC, and Wikipedia to compile a premise pool and then generate PHL triplets from those premises. Table 2 shows the accuracy of models in this setting. We use equal number of PHL triplets (150k class-balanced) for training the NLI models. We find that the model trained on PHL triplets generated from COCO captions as premises outperforms ROC and Wikipedia models on all datasets. We attribute this superior performance
+
+| Model | SNLI | MNLI mat. | MNLI mis. | DNLI | BNLI |
| BERT* | 35.09 | - | - | - | - |
| LXMERT* | 39.03 | - | - | - | - |
| VilBert* | 43.13 | - | - | - | - |
| T(P(C)) | 64.8 | 49.01 | 50.0 | 50.26 | 74.73 |
| T(P(R)) | 58.51 | 45.44 | 45.93 | 47.4 | 67.9 |
| T(P(W)) | 55.06 | 44.15 | 44.25 | 48.48 | 62.58 |
| T(P(C+R)) | 65.39 | 46.83 | 46.92 | 47.95 | 77.37 |
| T(P(C+R+W)) | 65.09 | 46.63 | 46.83 | 44.74 | 56.11 |
+
+Table 2: Comparing accuracy of models in the NPH-setting. C, R, and W correspond to the premise sources COCO, ROC, and Wikipedia respectively. Results marked with * have been taken from (Cui et al., 2020).
+
+| Approach | SNLI | MNLI mat. | MNLI mis. | DNLI | BNLI |
| BERT* | 35.09 | - | - | - | - |
| LXMERT* | 39.03 | - | - | - | - |
| VilBert* | 43.13 | - | - | - | - |
| MACD* | 52.63 | - | - | - | - |
| T(SNLI) | 65.72 | 49.56 | 50.00 | 43.27 | 67.78 |
| +T(P(C)) | 65.36 | 49.91 | 49.24 | 46.25 | 70.07 |
| +T(P(R)) | 65.90 | 48.53 | 48.36 | 44.97 | 66.43 |
+
+to the short, simple, and diverse sentences present in COCO that resemble the premises of SNLI that were collected from Flickr30K (Plummer et al., 2015) dataset. In contrast, Wikipedia contains lengthy and compositional sentences resulting in premises that differ from those present in SNLI, MNLI, etc. Furthermore, we find that combining the PHL triplets of COCO and ROC leads to a slight improvement in performance on SNLI $(65.39\%)$ , and BNLI $(77.37\%)$ datasets.
+
+P-Setting: Cui et al. (2020) presented MACD, which performs multi-modal pretraining using COCO and Flickr30K caption data for the unsupervised NLI task. It achieves $52.63\%$ on the SNLI dataset. Our approach outperforms MACD and other single-modal and multi-modal baselines by $\sim 13\%$ on SNLI as shown in Table 3. We also experiment with adding PHL triplets generated from COCO and ROC to the training dataset, which further improves the accuracy to $65.90\%$ and establishes a new state-of-the-art performance in this setting.
+
+Table 3: Comparing accuracy of various approaches in the P-Setting. Results marked with * have been taken from (Cui et al., 2020). Note that we utilize the premises of the SNLI training dataset only but evaluate on SNLI (in-domain), and MNLI, DNLI, BNLI (out-of-domain).
+
+| Method | Data | SNLI | MNLI mat. | MNLI mis. |
| From Scratch | MaxProbFilt | 66.67 | 53.37 | 55.17 |
| From Scratch | MaxProbFilt+T(P) | 66.75 | 50.22 | 50.37 |
| Finetune P-model | MaxProbFilt | 65.60 | 52.97 | 53.44 |
+
+Table 4: Comparing accuracy of our proposed approaches in the PH-Setting. Note that the models are trained using PH pairs only from the SNLI train-set but evaluated on MNLI (out-of-domain dataset) also.
+
+PH-Setting: Here, we first pseudo-label the given unlabeled PH pairs using the P-model and then select instances based on the maximum softmax probability (Section 5.3). We refer to this set of selected instances as the MaxProbFilt dataset. This approach results in an accuracy of $66.67\%$ on the SNLI dataset, as shown in Table 4. We investigate two more approaches of training the NLI model. In the first approach, we train using MaxProbFilt and the PHL triplets generated from premises. In the second approach, we further fine-tune the P-model with the MaxProbFilt dataset. We find that the first approach slightly improves the accuracy to $66.75\%$. This also represents our best performance across all the unsupervised settings. Furthermore, we observe improvements on the out-of-domain datasets as well ($53.37\%$ and $55.17\%$ on MNLI matched and mismatched respectively).
+
+# 6.3 Low-Data Regimes
+
+We also conduct experiments in low-data regimes where a few labeled instances are provided. We select these instances from the training dataset of SNLI/MNLI using the following two strategies:
+
+Random: Here, we randomly select instances from the corresponding training dataset. Further fine-tuning our NPH model with the selected instances consistently achieves higher performance than the models fine-tuned from scratch as shown in Table 5. With just 500 SNLI instances i.e. $\sim 0.1\%$ of training dataset, our models achieve $8.4\%$ and $8.32\%$ higher accuracy on SNLI (in-domain) and MNLI (out-of-domain) respectively. Furthermore, with 500 MNLI instances, our models achieve $10.37\%$ and $18.07\%$ higher accuracy on MNLI (in-domain) and SNLI (out-of-domain) respectively.
+
+Adversarial: Here, we select those instances from the training dataset on which the NPH model makes incorrect predictions. This is similar to the adversarial
+
| Training Dataset | Method | 100 (SNLI) | 100 (MNLI) | 200 (SNLI) | 200 (MNLI) | 500 (SNLI) | 500 (MNLI) | 1000 (SNLI) | 1000 (MNLI) | 2000 (SNLI) | 2000 (MNLI) |
| SNLI | BERT | 44.62 | 37.36 | 48.97 | 34.71 | 58.54 | 44.01 | 65.36 | 37.24 | 72.51 | 45.59 |
| SNLI | NPH (Random) | 64.82 | 49.72 | 65.06 | 50.48 | 66.97 | 52.33 | 70.61 | 56.75 | 73.7 | 59.0 |
| SNLI | NPH (Adv.) | 68.21 | 51.93 | 69.23 | 56.55 | 70.85 | 58.46 | 73.62 | 59.47 | 74.31 | 60.43 |
| MNLI | BERT | 35.12 | 36.01 | 35.14 | 36.58 | 46.16 | 47.1 | 47.64 | 56.21 | 53.68 | 63.3 |
| MNLI | NPH (Random) | 63.87 | 52.85 | 63.87 | 53.61 | 64.23 | 57.47 | 65.62 | 60.42 | 66.87 | 62.89 |
+
+Table 5: Comparing performance of various methods on in-domain and out-of-domain datasets in low-data regimes (100-2000 training instances). 'BERT' method corresponds to fine-tuning BERT over the provided instances from SNLI/MNLI, 'NPH (Random)' corresponds to further fine-tuning our NPH model with the randomly sampled instances from SNLI/MNLI, 'NPH (Adv.)' corresponds to further fine-tuning our NPH model with the adversarially selected instances from SNLI/MNLI.
+
+| Approach | Δ Accuracy |
| NPH model | 64.8% |
| - CV | -5.88% |
| - CW | -3.07% |
| - SSNCV | -2.63% |
| - Neg. | -0.70% |
| - IrH | -0.50% |
| - PS | -0.00% |
+
+data collection strategy (Nie et al., 2020; Kiela et al., 2021) where instances that fool the model are collected. Here, we do not simply fine-tune our NPH model with the adversarial examples, as that would lead to catastrophic forgetting (Carpenter and Grossberg, 1988). We tackle this by including 20000 randomly sampled instances from the generated PHL triplets and fine-tuning on the combined dataset. This further takes the performance to $70.85\%$ and $58.46\%$ on SNLI and MNLI respectively with 500 instances.
+
+# 6.4 Analysis
+
+Ablation Study: We conduct ablation study to understand the contribution of individual transformations on NLI performance. Table 6 shows the performance drop observed on removing PHL triplets created using a single transformation in the NPH-Setting. We find that Contradictory Words (CW) and Contradictory Verbs (CV) lead to the maximum drop in performance, $5.88\%$ and $3.07\%$ respectively. In contrast, Pronoun Substitution (PS) transformation doesn’t impact the performance significantly. Note that this does not imply
+
+Table 6: Ablation Study of transformations in the NPH-Setting. Each row corresponds to the drop in performance on the SNLI dataset when trained without PHL triplets created using that transformation.
+
+| Setting | Metric | C | E | N |
| NPH | Precision | 0.65 | 0.71 | 0.6 |
| NPH | Recall | 0.68 | 0.77 | 0.51 |
| P | Precision | 0.66 | 0.72 | 0.58 |
| P | Recall | 0.67 | 0.78 | 0.52 |
| PH | Precision | 0.64 | 0.74 | 0.60 |
| PH | Recall | 0.73 | 0.77 | 0.50 |
+
+Table 7: Precision and Recall values achieved by our models under each unsupervised setting.
+
+| NC | RS | SNLI-RS | SNLI-NC |
| 84.22 | 50.07 | 58.59 | 75.39 |
+
+Table 8: Performance of our NPH model on Names-Changed (NC) and Roles-Switched (RS) adversarial test sets (Mitra et al., 2020).
+
+that this transformation is not effective; rather, it means that the evaluation dataset (SNLI) does not contain instances requiring this transformation.
+
+NC and RS Evaluation: We evaluate our model on NER-Changed (NC) and Roles-Switched (RS) datasets presented in (Mitra et al., 2020) that test the ability to distinguish entities and roles. Our model achieves high performance on these datasets. Specifically, $84.22\%$ on NC and $75.39\%$ on SNLI-NC as shown in Table 8.
+
+Label-Specific Analysis: Table 7 shows the precision and recall values achieved by our models. We observe that our models perform better on Entailment and Contradiction than Neutral examples. This suggests that neutral examples are relatively more difficult. We provide examples of instances where our model makes incorrect predictions and conduct error analysis in Appendix.
+
+# 7 Conclusion and Discussion
+
+We explored three different settings in unsupervised NLI and proposed a procedural data generation approach that outperformed the existing unsupervised methods by $\sim 13\%$. Then, we showed that fine-tuning our models with a few human-authored instances leads to a considerable improvement in performance. We also experimented with using adversarial instances for this fine-tuning step instead of randomly selected instances and showed that it further improves the performance. Specifically, in the presence of just 500 adversarial instances, the proposed method achieved $70.85\%$ accuracy on SNLI, $12.2\%$ higher than the model trained from scratch on the same 500 instances.
+
+This improvement in performance suggests the possibility of an alternative data collection strategy that not only results in high-quality data instances but is also resource efficient. Using a model-in-the-loop technique has been shown to be effective for adversarial data collection (Nie et al., 2020; Kiela et al., 2021; Li et al., 2021; Sheng et al., 2021; Arunkumar et al., 2020). In these techniques, a model is first trained on a large dataset and then humans are instructed to create adversarial samples that fool the model into making incorrect predictions, thus requiring the crowd-sourcing effort twice. However, in our method, a dataset designer can develop a set of simple functions (or transformations) to procedurally generate training data for the model and can directly instruct humans to create adversarial samples to fool the trained model. This is resource efficient and allows dataset designers to control the quality of their dataset.
+
+# Ethical Considerations
+
+We use existing public-domain text corpora such as Wikipedia, ROC Stories, and MS-COCO, and follow the protocol to use and adapt research data to generate our weakly-labeled dataset. We will release the code to generate our dataset. Any bias observed in NLI systems trained using our methods can be attributed to the source data and our transformation functions. However, no particular sociopolitical bias is emphasized or reduced specifically by our methods.
+
+# Acknowledgements
+
+We thank the anonymous reviewers for their insightful feedback. This research was supported by DARPA SAIL-ON and DARPA CHESS programs. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the funding agencies and employers.
+
+# References
+
+Eric Arazo, Diego Ortega, Paul Albert, Noel E O'Connor, and Kevin McGuinness. 2020. Pseudolabeling and confirmation bias in deep semi-supervised learning. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.
+Anjana Arunkumar, Swaroop Mishra, Bhavdeep Sachdeva, Chitta Baral, and Chris Bryan. 2020. Real-time visual feedback for educative benchmark creation: A human-and-metric-in-the-loop workflow.
+Pratyay Banerjee and Chitta Baral. 2020. Self-supervised knowledge triplet learning for zero-shot question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 151-162, Online. Association for Computational Linguistics.
+Pratyay Banerjee, Tejas Gokhale, Yezhou Yang, and Chitta Baral. 2021. WeaQA: Weak supervision via captions for visual question answering. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 3420-3435, Online. Association for Computational Linguistics.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
+Gail A. Carpenter and Stephen Grossberg. 1988. The art of adaptive pattern recognition by a self-organizing neural network. Computer, 21(3):77-88.
+Wanyun Cui, Guangyu Zheng, and Wei Wang. 2020. Unsupervised natural language inference via decoupled multimodal contrastive learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5511-5520, Online. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Bhuwan Dhingra, Danish Danish, and Dheeraj Rajagopal. 2018. Simple and effective semi-supervised question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 582-587, New Orleans, Louisiana. Association for Computational Linguistics.
+Alexander Fabbri, Patrick Ng, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. Template-based question generation from retrieved sentences for improved unsupervised question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4508-4513, Online. Association for Computational Linguistics.
+Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics.
+Tejas Gokhale, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2020. Vqa-lol: Visual question answering under the lens of logic. In European conference on computer vision, pages 379-396. Springer.
+Tejas Gokhale, Abhishek Chaudhary, Pratyay Banerjee, Chitta Baral, and Yezhou Yang. 2021. Semantically distributed robust optimization for vision-and-language inference.
+Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. Proceedings of International Conference on Learning Representations.
+Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrialstrength Natural Language Processing in Python.
+Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4110-4124, Online. Association for Computational Linguistics.
+Patrick Lewis, Ludovic Denoyer, and Sebastian Riedel. 2019. Unsupervised question answering by cloze translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4896-4910, Florence, Italy. Association for Computational Linguistics.
+Linjie Li, Jie Lei, Zhe Gan, and Jingjing Liu. 2021. Adversarial vqa: A new benchmark for evaluating the robustness of vqa models. In International Conference on Computer Vision (ICCV).
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and Larry Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV. European Conference on Computer Vision.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41.
+A. Mitra, Ishan Shrivastava, and Chitta Baral. 2020. Enhancing natural language inference using new and expanded training data sets and new learning models. In AAAI.
+Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguistics.
+Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.
+Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641-2649.
+
+Raul Puri, Ryan Spring, Mohammad Shoeybi, Mostofa Patwary, and Bryan Catanzaro. 2020. Training question answering models from synthetic data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5811-5826, Online. Association for Computational Linguistics.
+
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+
+Radim Rehurek and Petr Sojka. 2011. Gensim-python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic, 3(2).
+
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
+
+Sasha Sheng, Amanpreet Singh, Vedanuj Goswami, Jose Alberto Lopez Magana, Wojciech Galuba, Devi Parikh, and Douwe Kiela. 2021. Human-adversarial visual question answering.
+
+Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
+
+Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5100-5111, Hong Kong, China. Association for Computational Linguistics.
+
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. In NeurIPS.
+
+Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021. Towards zero-label language learning. arXiv preprint arXiv:2109.09193.
+
+Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731-3741, Florence, Italy. Association for Computational Linguistics.
+
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
+
+Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization.
+
+# Appendix
+
+# A Transformations
+
+In this section, we provide details about the proposed sentence transformations.
+
+# A.1 Entailment
+
+Table 9 shows examples of our transformations.
+
+Paraphrasing (PA): Paraphrasing is an effective way of creating entailment examples, since a hypothesis that is simply a paraphrased version of the premise is always entailed. Furthermore, since the Pegasus tool is trained for abstractive text summarization, it often removes some information from the original sentence while paraphrasing. For instance, a paraphrase of the sentence "A boy is playing with a red ball" could be "Boy is playing with a ball". This prevents us from using the paraphrased sentence as the premise with the original sentence as the hypothesis, because the resulting $PH$ pair does not represent an entailment scenario (it is neutral in this case). It is non-trivial to detect such instances automatically. Hence, in order to avoid noisy examples, we only use the original sentence as the premise and paraphrased sentences as hypotheses. We also explored back-translation (Sennrich et al., 2016), but it often results in noisy outputs and provides less diversity than the Pegasus tool. Hence, we use only the Pegasus tool for generating paraphrases of sentences.
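+
+This paraphrasing step could be approximated with an off-the-shelf Pegasus paraphrasing checkpoint from HuggingFace; the checkpoint name and decoding settings below are illustrative assumptions, not the exact configuration used in the paper.
+
```python
# Minimal sketch: generating paraphrase hypotheses with a Pegasus checkpoint.
# Assumption: "tuner007/pegasus_paraphrase" and beam-search settings; the paper
# does not specify these details.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

name = "tuner007/pegasus_paraphrase"
tokenizer = PegasusTokenizer.from_pretrained(name)
model = PegasusForConditionalGeneration.from_pretrained(name)

premise = "A boy is playing with a red ball"
batch = tokenizer([premise], truncation=True, padding=True, return_tensors="pt")
outputs = model.generate(**batch, num_beams=5, num_return_sequences=5, max_length=60)
hypotheses = tokenizer.batch_decode(outputs, skip_special_tokens=True)

# Each (premise, paraphrase) pair is labeled as entailment.
phl_triplets = [(premise, h, "entailment") for h in hypotheses]
print(phl_triplets)
```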
+
+Extracting Snippets (ES): Here, we provide details of the techniques used for extracting snippets from a text. Note that we use the dependency parse tree of the sentence to select or skip tokens when creating the hypothesis; a short code sketch of technique (i) is given after the list below.
+
+(i) We skip modifiers (tokens with dependency amod) that have no children in the parse tree. For example, from the sentence "The male surfer is riding a small wave", we create "The surfer is riding a small wave", "The male surfer is riding a wave", and "The surfer is riding a wave" as entailing hypotheses.
+(ii) Similar to the previous technique, we skip adverb modifiers (advmod). For example, from the sentence "A very beautiful girl is standing outside the park", we create the entailment hypothesis "A beautiful girl is standing outside the park".
+
+(iii) We skip adjectives that do not have the dependency conj and that have no children in the parse tree. For example, from the sentence "A middle-aged man in a beige vest is sleeping on a wooden bench," we create "A middle-aged man in a vest is sleeping on a bench."
+(iv) In another technique, we select the root token and all the tokens to its left. If this results in the selection of at least 3 tokens and one of them is a verb, we consider it a valid sentence and use it as an entailing hypothesis. For example, from the sentence "The male surfer is riding a small wave", we create "surfer is riding".
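+
+A minimal sketch of technique (i) using spaCy (Honnibal et al., 2020) is shown below; it drops all leaf amod modifiers at once, whereas the actual pipeline also enumerates subsets of modifiers to produce several hypotheses.
+
```python
# Sketch of technique (i): drop 'amod' modifiers that have no children in the parse tree.
import spacy

nlp = spacy.load("en_core_web_sm")

def drop_leaf_amod(sentence: str) -> str:
    doc = nlp(sentence)
    kept = [tok.text for tok in doc
            if not (tok.dep_ == "amod" and not list(tok.children))]
    return " ".join(kept)

print(drop_leaf_amod("The male surfer is riding a small wave"))
# -> "The surfer is riding a wave" (both leaf modifiers removed)
```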
+
+Hypernym Substitution (HS): Examples of hypernyms:
+
+'alcohol': ['beverage', 'drink']
+'apple': ['fruit']
+'axe': ['edge tool']
+'banana': ['fruit']
+etc.
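+
+Such hypernym lists could be collected from WordNet (Miller, 1995); the sketch below assumes the NLTK interface and simply takes the first noun synset, which is a simplification of the actual lookup.
+
```python
# Sketch: collect hypernym lemmas of a noun from WordNet via NLTK.
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def hypernym_candidates(word: str):
    synsets = wn.synsets(word, pos=wn.NOUN)
    if not synsets:
        return []
    return sorted({lemma.name().replace("_", " ")
                   for hyper in synsets[0].hypernyms()
                   for lemma in hyper.lemmas()})

print(hypernym_candidates("apple"))  # e.g. ['edible fruit', ...]
```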
+
+Pronoun Substitution (PS): For words in the list ['man', 'boy', 'guy', 'lord', 'husband', 'father', 'boyfriend', 'son', 'brother', 'grandfather', 'uncle'], we use 'he'/'someone'/'they', etc., and for words in the list ['woman', 'girl', 'lady', 'wife', 'mother', 'daughter', 'sister', 'girlfriend', 'grandmother', 'aunt'], we use 'she'/'someone'/'they', etc. In other cases, we use the pronoun 'they', 'someone', or 'somebody'.
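+
+A word-level sketch of this substitution is shown below; the real implementation may operate on parsed noun phrases and handle determiners, so treat this as illustrative only.
+
```python
# Sketch: replace gendered person nouns with pronouns to form entailed hypotheses.
import random

MALE = {"man", "boy", "guy", "lord", "husband", "father", "boyfriend",
        "son", "brother", "grandfather", "uncle"}
FEMALE = {"woman", "girl", "lady", "wife", "mother", "daughter", "sister",
          "girlfriend", "grandmother", "aunt"}

def pronoun_substitute(sentence: str) -> str:
    out = []
    for token in sentence.split():
        word = token.lower().strip(".,")
        if word in MALE:
            out.append(random.choice(["he", "someone", "they"]))
        elif word in FEMALE:
            out.append(random.choice(["she", "someone", "they"]))
        else:
            out.append(token)
    # Note: determiner handling (e.g. dropping "A" before the pronoun) is omitted here.
    return " ".join(out)

print(pronoun_substitute("A man is flying a kite on the beach."))
```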
+
+Counting (CT): We provide examples of templates we use to create counting hypotheses:
+
+"There are {count} {hypernym} present",
+{"count} {hypernym} are present",
+"Several {hypernym} present",
+
+"There are multiple {hypernym} present",
+
+"There are more than $\{\mathrm{count}\} \{\mathrm{hypernym}\}$ present",
+
+"There are at least {count} {hypernym} present",
+
+etc.
+
+We also substitute the hypernym in the original sentence directly to create hypotheses as shown in Table 9.
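+
+Filling these templates is straightforward; the snippet below is a sketch in which the count and hypernym are assumed to come from noun-phrase detection and the hypernym lists above.
+
```python
# Sketch: instantiate counting templates for a detected group of entities.
TEMPLATES = [
    "There are {count} {hypernym} present",
    "{count} {hypernym} are present",
    "There are multiple {hypernym} present",
    "There are at least {count} {hypernym} present",
]

def counting_hypotheses(count: int, hypernym: str):
    return [t.format(count=count, hypernym=hypernym) for t in TEMPLATES]

# e.g. "A man and woman setup a camera." -> count=2, hypernym="people"
print(counting_hypotheses(2, "people"))
```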
+
+# A.2 Contradiction
+
+Table 10 shows examples of our transformations.
+
+Contradictory Words (CW): For contradictory adjectives, we collect antonyms from WordNet, and for contradictory nouns, we use the 'most_similar' function from the gensim (Rehurek and Sojka, 2011) library, which returns words close (but distinct) to a given word. For instance, it returns words such as 'piano', 'flute', and 'saxophone' when given the word 'violin'. In order to filter out inflected forms of the same word or its synonyms from the list returned by the most_similar function, we remove words that have a high STS score with the given word. This step removes noisy contradictory word pairs to a large extent. Here, we provide examples of contradictory words:
+
+'stove': ['heater']
+
+'cucumber': ['onion', 'carrot', 'melon', 'turnip', 'eggplant', 'watermelon', 'radish']
+
+'motorcycle': ['truck', 'scooter', 'car']
+
+'kitchen': ['bedroom', 'bathroom', 'toilet'] etc.
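+
+The noun branch of this procedure can be sketched with gensim's pretrained word vectors; the vector set and the `sts_score` filter below are assumptions, since the exact embeddings and STS model are not specified here.
+
```python
# Sketch: contradictory-noun candidates via gensim most_similar, filtered by similarity.
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")  # assumption: any pretrained word vectors

def sts_score(a: str, b: str) -> float:
    # Placeholder (hypothetical) for the similarity model used to drop near-synonyms.
    return 1.0 if a.lower() == b.lower() else 0.0

def contradictory_nouns(word: str, threshold: float = 0.85, topn: int = 10):
    candidates = [w for w, _ in wv.most_similar(word, topn=topn)]
    # Keep words that are close in vector space but not near-duplicates/synonyms.
    return [w for w in candidates if sts_score(word, w) < threshold]

print(contradictory_nouns("violin"))  # e.g. ['cello', 'piano', ...]
```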
+
+Contradictory Verb (CV): We provide examples of contradictory verbs:
+
+'stand': ['sprint', 'cycle', 'drive', 'jump', 'sit', etc.]
+
+'play': ['sleep', 'cry', 'fight', 'drink', 'hunt', etc.]
+
+'smile': ['cry', 'anger', 'frown', etc.] etc.
+
+# A.3 Neutral
+
+Table 11 shows examples of our transformations.
+
+Adding Modifiers (AM): We provide examples of modifiers collected using our approach:
+
+'metal': ['large', 'circular', 'galvanized', 'heavy', 'dark', etc.]
+
+'vegetable': ['steamed', 'cruciferous', 'green', 'uncooked', 'raw', etc.]
+
+'park': ['quiet', 'neglected', 'vast', 'square', 'crowded', etc.]
+
+etc.
+
+ConceptNet: We use ConceptNet relations such as AtLocation and DefinedAs, and insert the node connected by these relations into the sentence, resulting in a neutral hypothesis.
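+
+A sketch of retrieving such related nodes is given below, assuming the public ConceptNet REST API and its JSON edge format; an offline dump or another client would work equally well.
+
```python
# Sketch: fetch AtLocation neighbours of a concept from the public ConceptNet API.
import requests

def at_location(concept: str, limit: int = 5):
    resp = requests.get(
        "https://api.conceptnet.io/query",
        params={"start": f"/c/en/{concept}", "rel": "/r/AtLocation", "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return [edge["end"]["label"] for edge in resp.json().get("edges", [])]

# e.g. append "... which is at the <location>" to the premise to form a neutral hypothesis
print(at_location("plate"))
```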
+
+| Category | Original Sentence (Premise) | Hypothesis |
| PA | Fruit and cheese sitting on a black plate. | There is fruit and cheese on a black plate. |
| ES | person relaxes at home while holding something. | person relaxes while holding something. |
| HS | A girl is sitting next to a blood hound. | A girl is sitting next to an animal. |
| PS | People are walking down a busy city street. | they are walking down a busy city street |
| CT | A man and woman setup a camera. | Two people setup a camera |
| Composite | A large elephant is very close to the camera. | elephant is close to the photographic equipment. |
+
+Table 9: Illustrative examples of entailment transformations.
+
+| Category | Original Sentence (Premise) | Hypothesis |
| CW-noun | A small bathroom with a sink under a cabinet. | a small kitchen with a sink under a cabinet. |
| CW-adj | A young man is doing a trick on a surfboard. | A old man is doing a trick on a surfboard. |
| CV | A couple pose for a picture while standing next to a couch. | A couple sit in a chair on laptops |
| SOS | A man is flying a kite on the beach. | a beach is flying a kite on the man |
| NS | Two green traffics lights in a European city. | nine green traffics lights in a European city |
| IrH | A flock of sheep grazing in a field. | A man having fun as he glides across the water. |
| NI | A boy with gloves on a field throwing a ball. | a boy with gloves on a field not throwing a ball |
| Composite | A woman holding a baby while a man takes a picture of them | a kid is taking a picture of a male and a baby. |
+
+Table 10: Illustrative examples of contradiction transformations.
+
+| Category | Original Sentence (Premise) | Hypothesis |
| AM | two cats are eating next to each other out of the bowl | two cats are eating next to each other out of the same bowl |
| SSNCV | A man holds an electronic device over his head. | man is taking photo with a small device |
| FCon | a food plate on a table with a glass. | a food plate on a table with a glass which is made of plastic. |
| Composite | two dogs running through the snow. | The big dogs are outside. |
+
+Table 11: Illustrative examples of neutral transformations.
+
+| Trans. | Premise | Hypothesis | Assigned Label | True Label |
| PS | Two dogs on leashes sniffing each other as people walk in a outdoor market | Two dogs on leashes sniffing each other as they walk in a market | E | N |
| CT | Adult woman eating slice of pizza while standing next to building | There are 2 humans present | E | C |
| CW | Meal with meat and vegetables served on table | There is a meal with cheese and vegetables | C | N |
| SSNCV | A person riding skis down a snowy slope | A person riding skis in a body of water | N | C |
| SSNCV | A person on a skateboard jumping up into the air | A person jumping up in the air on a snow-board | N | C |
| CV | A male surfer riding a wave on the ocean | A surfer is surfing in the ocean near some swimmers | C | N |
+
+Table 12: Examples of mis-labeled PHL triplets generated by our transformations.
+
+| Transformation T | NPH: T(P(C)) | NPH: T(P(R)) | NPH: T(P(W)) | P: T(SNLI) |
| Raw Sentences | 591 | 490 | 600 | 548 |
| PA | 5083 | 3072 | 273 | 475 |
| ES | 2365 | 196 | 87 | 516 |
| PS | 37 | 41 | 137 | 38 |
| CT | 25 | 8 | 2 | 43 |
| Neg. | 1175 | 1175 | 2053 | 990 |
| CW | 978 | 119 | 116 | 265 |
| CV | 1149 | 63 | 5 | 505 |
| NS | 73 | 16 | 224 | 91 |
| SOS | 428 | 180 | 229 | 76 |
| AM | 1048 | 125 | 535 | 327 |
| SSNCV | 1363 | 2 | 7 | 405 |
+
+Table 13: Sizes of PHL triplet datasets generated by our transformations for the unsupervised settings. All numbers are in thousands. C, R, W denote COCO, ROC Stories, and Wikipedia respectively. For P-Setting, we show stats for SNLI dataset. We do not include PH-Setting in this table because we leverage the PHL triplets generated using the P-Setting to solve it as described in Section 5.3.
+
+# B Data Validation
+
+Table 12 shows examples of mis-labeled instances generated by our transformations.
+
+# C Training NLI Model
+
+Table 13 shows sizes of the generated PHL datasets for each setting.
\ No newline at end of file
diff --git a/unsupervisednaturallanguageinferenceusingphltripletgeneration/images.zip b/unsupervisednaturallanguageinferenceusingphltripletgeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0cac50dd0268bd0940c7896a386f4d3fae3b80a8
--- /dev/null
+++ b/unsupervisednaturallanguageinferenceusingphltripletgeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71d38dec59b0d412bde51569d23b5416427d58b205e1f6e6d107ad5a315948a6
+size 601308
diff --git a/unsupervisednaturallanguageinferenceusingphltripletgeneration/layout.json b/unsupervisednaturallanguageinferenceusingphltripletgeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9f2c87786e108e03fdb806d6c25446534cf90b6f
--- /dev/null
+++ b/unsupervisednaturallanguageinferenceusingphltripletgeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:069dc08cb3572cc0c0e93f436ff172330fa77eadc635f0b3d46b8b894ae62e37
+size 435665
diff --git a/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_content_list.json b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0526c5889728d26f88510e51f2ce5becf2cad627
--- /dev/null
+++ b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61291092ddc3d1587535f3768eb218af7dda279a46b90db248eea703e8f33c7d
+size 42398
diff --git a/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_model.json b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..72ad2209651fd15cf9e0f63373c62769a276623b
--- /dev/null
+++ b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29e5a79ef09522364c92a2676dc82faab173e933ba8e8ed8a68cca64c3f42685
+size 52867
diff --git a/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_origin.pdf b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c5dfe73032cca0ce837b083d20f702e73c4e2379
--- /dev/null
+++ b/unsupervisedpreferenceawarelanguageidentification/63217194-2d95-47cc-9b34-7637fa3c2268_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c0140a5df3ed31c1c5647e3a77c5204e45dd6a9f6b5a3b4f0136406c6901f42
+size 487522
diff --git a/unsupervisedpreferenceawarelanguageidentification/full.md b/unsupervisedpreferenceawarelanguageidentification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a5b4ef10c53a16c8b0ecc6c0eeda4ade35fbf208
--- /dev/null
+++ b/unsupervisedpreferenceawarelanguageidentification/full.md
@@ -0,0 +1,166 @@
+# Unsupervised Preference-Aware Language Identification
+
+Xingzhang Ren Baosong Yang* Dayiheng Liu Haibo Zhang Xiaoyu Lv Liang Yao Jun Xie DAMO Academy, Alibaba Group
+
+{xingzhang.rxz, yangbaosong.ybs, liudayiheng.1dyh, zhanhui.zhb,anzhi.lxy, yaoliang.yl, qingjing.xj}@alibaba-inc.com
+
+# Abstract
+
+Recognizing the language of ambiguous texts is a main challenge in language identification (LID). When using multilingual applications, users have their own language preferences, which can be regarded as external knowledge for LID. Nevertheless, current studies do not consider such inter-personal variation, owing to the lack of user-annotated training data. To fill this gap, we introduce preference-aware LID and propose a novel unsupervised learning strategy. Concretely, we construct a pseudo training set for each user by extracting training samples from a standard LID corpus according to his/her historical language distribution. In addition, we contribute the first user-labeled LID test set, called "U-LID". Experimental results reveal that our model can capture user traits and significantly outperforms existing LID systems on handling ambiguous texts. Our code and benchmark have been released.
+
+# 1 Introduction
+
+Language identification (LID) is widely applied in a range of web services where a multitude of languages may be present, such as translation systems, search engines, and social media (Yao et al., 2020a; Sun et al., 2020; Li et al., 2020; Bi et al., 2020; Xu et al., 2021). It predicts the natural language that a text is written in and decides which language-specific model to invoke in downstream natural language processing (NLP) tasks (Lui et al., 2014; Yao et al., 2020b; Tambi et al., 2020).
+
+Several recent studies have tackled LID by designing a feature set for a traditional or neural classifier (Kocmi and Bojar, 2017; Vo and Khoury, 2020; Jauhiainen et al., 2021). However, these studies exploit only textual information and disregard external knowledge about the user. In a real-world scenario, there are many
+
+| User Input Text | Label | Prefer. | Baseline | Ours |
| velo | es (veil) | es | en | es |
| velo | fr (bike) | fr | en | fr |
| fundas huawei y7 | es (huawei y7 cases) | es | en | es |
| kello kitty | en (hello kitty) | de | it | en |
+
+Table 1: Examples of ambiguous text that are difficult to be accurately recognized. "Label" shows the language label that is annotated by a user and conforms to his/her input intention. "Prefer." denotes the language most frequently used by the corresponding user. "Baseline" and "Ours" indicate the predictions of baseline LID system and the proposed model, respectively.
+
+ambiguous user inputs, such as texts with false friends, code-switching, and misspellings, as shown in Table 1. On the one hand, the languages of these texts are difficult (even impossible) to identify explicitly without external knowledge. On the other hand, for different users, a good LID system should flexibly give different results for the same ambiguous input, thus conforming to users' intentions (Lin et al., 2021). Classifying ambiguous user inputs therefore remains a main challenge in LID (Xia et al., 2010; Stiller et al., 2010).
+
+When using a multilingual NLP application, every person has his/her own accustomed languages. This historical behavior implicitly mirrors the user's language preference and can be exploited for LID. To this end, we propose a task named preference-aware LID, where the historical language distribution of a user is leveraged to disambiguate error-prone texts and guides LID to predict different languages for different users.
+
+A major bottleneck for this task lies in the lack of well-labeled training data. In particular, it is infeasible to obtain large amounts of ambiguous texts labeled with different languages by different users. To overcome this issue, we propose a novel unsupervised strategy that builds synthetic data for each user by sampling natural training examples according to his/her historical language distribution.
+
+We build our model upon Transformer (Vaswani et al., 2017) and introduce two kinds of extensions. One is directly revising the predicted probability of LID using the user language preference. In order to maintain the robustness, the other encodes the user traits into inductive bias.
+
+Our models are trained using a publicly available dataset extracted from Wikipedia. To evaluate their effectiveness, we construct a user-driven LID test set, "U-LID". The benchmark consists of 21 languages, each of which contains 500 examples collected from a real-world translation system and labeled by users. Extensive analyses demonstrate the superiority and robustness of our approach on recognizing error-prone cases.
+
+# 2 Preliminary
+
+Problem Formulation Given an input text $X$, a vanilla LID model with parameters $\theta$ predicts the probability of language $y$ as $P(y|X;\theta)$. As an extension of conventional LID, preference-aware LID considers the traits of each user, thus facilitating the classification of ambiguous texts. In this paper, we treat the language preference of a user as external knowledge, which is implicitly embodied in the historical language distribution $D^{(u)}$ of user $u$. Consequently, our task aims to model $P(y^{(u)}|X,D^{(u)};\theta)$, as illustrated in Figure 1.
+
+User Annotated Test Set In order to assess the effectiveness of the proposed method, we construct a preference-aware LID test set called "U-LID". Each instance is represented as a triplet $\langle X, D^{(u)}, y^{(u)} \rangle$. The samples are collected from a real-world translation system, Alibaba Translate. We mine user-annotated data as follows: given a user input, the translation system first returns a predicted language label and the associated translation results. When the user is dissatisfied with the result, he/she may change the predicted language label. We argue that this operation not only reflects the user's intention concerning the language, but also implies that the classification of the current input is error-prone. Accordingly, we collect texts whose predicted labels were revised by users. The test set is further manually checked and carefully desensitized by linguistic experts to maintain data quality. Finally, the benchmark consists of 21 languages and 11,031 samples.
+
+
+Figure 1: Illustration of the preference-aware LID task. The input text "basket" is a false-friend in English and French. Our model considers user language preference $D^{(u)}$ , thus being able to identify ambiguous text and generate distinct results for different users.
+
+The average word count per sample is 2.08, and the average character count is 13.27.
+
+# 3 Methodology
+
+# 3.1 Preference-Aware Model
+
+Our model is built upon the advanced neural-based model - Transformer (Vaswani et al., 2017). Given an input query $X$ , the output token representations can be formally expressed as: $Z = \operatorname{Transformer}(X)$ .
+
+The final probability distribution is calculated by assigning an output layer:
+
+$$
+Y = \operatorname{softmax}\left(W_{o}\bar{Z} + b_{o}\right), \tag{1}
+$$
+
+where $\overline{Z}$ denotes the mean of the token representations $Z$ . $W_{o} \in \mathbb{R}^{L \times H}$ , $b_{o} \in \mathbb{R}^{L}$ are trainable parameters with $H$ being the hidden size and $L$ being the number of languages. $\mathrm{softmax}(\cdot)$ represents a non-linear function that is used to normalize the probability distribution of labels.
+
+We propose two types of preference-aware models to incorporate user language preference into LID:
+
+Revision-Based Model Intuitively, we can multiply the output $Y$ and the user language preference $D^{(u)}$ directly. The final distribution is revised as:
+
+$$
+Y^{(u)} = \operatorname{softmax}\left(Y D^{(u)}\right). \tag{2}
+$$
+
+In this paradigm, we regard $D^{(u)}$ as a reviser at model training time. Note that the revision-based model can also be used in a plug-and-play fashion without any model training.
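+
+A minimal PyTorch sketch of this revision step (Equation 2) is shown below; the toy probabilities are purely illustrative.
+
```python
# Sketch of the revision-based model (Eq. 2): rescale the LID distribution by the
# user's historical language distribution and renormalize with softmax.
import torch
import torch.nn.functional as F

def revise(y: torch.Tensor, d_user: torch.Tensor) -> torch.Tensor:
    return F.softmax(y * d_user, dim=-1)

y = torch.tensor([0.45, 0.40, 0.15])       # base model output, e.g. over (en, fr, es)
d_user = torch.tensor([0.05, 0.90, 0.05])  # this user mostly writes French
print(revise(y, d_user))                    # probability mass shifts toward fr
```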
+
+
+Figure 2: Illustration of the construction of synthetic data. We use smoothed language preference of a user to sample examples from the standard training corpus.
+
+Representation-Based Model A natural alternative is to encode the language preference into a representation, which then serves as an inductive bias in the output layer. Here, we assign $L$ trainable language embeddings $W_{e}\in \mathbb{R}^{L\times L}$. The user representation is the weighted sum of the language embeddings according to the user language distribution: $W_{e}D^{(u)}$. We modify Equation 1 as follows:
+
+$$
+Y^{(u)} = \operatorname{softmax}\left(W_{o}\bar{Z} + W_{e}D^{(u)} + b_{o}\right). \tag{3}
+$$
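+
+The output layer of the representation-based model (Equation 3) can be sketched as follows; shapes and initialization are illustrative assumptions.
+
```python
# Sketch of the representation-based output layer (Eq. 3).
import torch
import torch.nn as nn

class PreferenceAwareHead(nn.Module):
    def __init__(self, hidden_size: int, num_langs: int):
        super().__init__()
        self.out = nn.Linear(hidden_size, num_langs)                      # W_o, b_o
        self.lang_emb = nn.Parameter(torch.zeros(num_langs, num_langs))   # W_e

    def forward(self, z_mean: torch.Tensor, d_user: torch.Tensor) -> torch.Tensor:
        # z_mean: (batch, hidden_size) mean token representation Z-bar
        # d_user: (batch, num_langs) historical language distribution D^(u)
        user_bias = d_user @ self.lang_emb.t()                            # W_e D^(u)
        return torch.softmax(self.out(z_mean) + user_bias, dim=-1)
```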
+
+# 3.2 Unsupervised Training
+
+The main challenge of our task lies in the lack of user-annotated training data. It is hard to construct a large number of training examples in the triplet form $\langle X, D^{(u)}, y^{(u)} \rangle$. Although we construct a test set by mining user operations on switching languages, this kind of approach requires expensive manual review due to the massive amount of noise.
+
+To tackle this problem, we propose a novel unsupervised training strategy, as illustrated in Figure 2. In an existing LID training corpus $T$, each text is labeled with a language. Given the user historical language distribution $D^{(u)}$, we sample a subset $T^{(u)}$ from $T$ and guarantee that the language distribution of $T^{(u)}$ is consistent with $D^{(u)}$. Nevertheless, most people use only one or two languages, making their historical distributions concentrated on a few languages. Directly utilizing $D^{(u)}$ to sample training examples may therefore cause an overconfidence problem. Firstly, the model may tend to overlook either the user information or the input text. Secondly, texts whose language frequency is relatively low in $D^{(u)}$ may fail to be correctly classified, especially for languages that do not appear in the user's historical inputs. Accordingly, we borrow the idea of up-sampling (Pereyra et al.,
+
+2017; Wan et al., 2022) into our approach. The final sampling distribution can be calculated as:
+
+$$
+S^{(u)} = \operatorname{softmax}\left((1 - \alpha) D^{(u)} + \alpha / L\right). \tag{4}
+$$
+
+Here, we set $\alpha = 0.01$ and collect 100 examples for each user by default. In addition, to maintain robustness and to cope with the situation where a user's historical input is empty or inaccessible, we treat the uniform distribution as $D^{(u)}$ and supplement the synthetic corpus with the same number of standard training examples.
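+
+The smoothed sampling procedure (Equation 4) can be sketched as below; `corpus_by_lang` is a hypothetical mapping from language labels to pools of labeled texts, used only for illustration.
+
```python
# Sketch of per-user synthetic data construction with the smoothed distribution (Eq. 4).
import numpy as np

def sampling_distribution(d_user: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    num_langs = len(d_user)
    logits = (1 - alpha) * d_user + alpha / num_langs
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def build_user_subset(corpus_by_lang: dict, d_user: dict, n: int = 100, seed: int = 0):
    rng = np.random.default_rng(seed)
    langs = sorted(corpus_by_lang)
    s_u = sampling_distribution(np.array([d_user.get(l, 0.0) for l in langs]))
    sampled_langs = rng.choice(langs, size=n, p=s_u)
    # Pair each sampled language with a randomly drawn text of that language.
    return [(rng.choice(corpus_by_lang[lang]), lang) for lang in sampled_langs]
```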
+
+# 4 Experiments
+
+# 4.1 Experimental Setting
+
+Data Setting We collect 100 thousand (K) users from the log of the real-world translation system Alibaba Translate. For the standard LID corpus $T$, we follow Vo and Khoury (2020) and extract natural training data from the released datasets: the W2C corpus (Majlis and Zabokrtsky, 2012), the Common Crawl corpus (Schafer, 2016), and Tatoeba (Tiedemann and Thottingal, 2020). $T$ consists of 21 languages, each of which contains 5 million (M) samples. We evaluate models on the U-LID test set. Moreover, in order to investigate the robustness of our methods on the conventional LID task, we further adopt a publicly available test set, KB-21, from Kocmi and Bojar (2017), using a subset of 21 languages. KB-21 consists of 2,100 samples; the average numbers of words and characters per sample are 4.47 and 34.90, respectively.
+
+Implementation Details We follow the Base model setting of Vaswani et al. (2017), except that the number of layers is set to 1 for computational efficiency. To avoid out-of-vocabulary problems, we follow existing LID approaches and use character-based embeddings (Jauhiainen et al., 2019), with a vocabulary size of 15K.
+
+In this study, a 1-layer TRANSFORMER model serves as the baseline. We reimplement the widely used text classification models FASTTEXT (Joulin et al., 2017) and TEXTCNN (Kim, 2014), as well as the recent LID approach ATTENTIONCNN (Vo and Khoury, 2020), as listed in Table 2. In addition, we reproduce NAIVE BAYES (Jauhiainen et al., 2021), a state-of-the-art model in the VarDial 2021 task (Chakravarthi et al., 2021). Moreover, we also examine popular LID systems on our LID tasks,
+
+| Model | U-LID | KB-21 |
| Existing LID Systems |
| Langid.py (Lui and Baldwin, 2012) | 63.52 | 91.33 |
| LanideNN (Kocmi and Bojar, 2017) | 67.23 | 92.71 |
| Reimplemented Models |
| NAIVE BAYES (Jauhiainen et al., 2021) | 60.53 | 89.91 |
| FASTTEXT (Joulin et al., 2017) | 59.25 | 88.69 |
| TEXTCNN (Kim, 2014) | 61.58 | 91.24 |
| ATTENTIONCNN (Vo and Khoury, 2020) | 62.16 | 91.41 |
| Ours |
| TRANSFORMER (Baseline) | 67.35 | 92.81 |
| +Revision-Based Model | 89.23†† | 91.19 |
| +without training | 84.79†† | 92.81 |
| +Representation-Based Model | 88.74†† | 93.09† |
+
+Table 2: Classification accuracy (ACC) on test sets. For reference, when the user's preferred language is taken directly as the predicted result, the ACC on U-LID is 66.42. The proposed preference-aware LID models show significant improvements on the U-LID task. Results of neural-based models are averaged over 5 independent runs: "†" and "††" indicate that the improvement over TRANSFORMER is statistically significant ($p < 0.05$ and $p < 0.01$, respectively), estimated by bootstrap sampling (Koehn, 2004).
+
+including Langid.py (Lui and Baldwin, 2012) and LanideNN (Kocmi and Bojar, 2017).
+
+For training, we use the Adam optimizer (Kingma and Ba, 2015) with the same learning rate schedule as Vaswani et al. (2017) and 8k warmup steps. Each batch consists of 1,024 examples, and the dropout rate is set to a constant 0.1. Models are trained on a single Tesla P100 GPU.
+
+For the compared models, we use character and word 1-3 grams for FASTTEXT (Joulin et al., 2017). For TEXTCNN (Kim, 2014), we apply six filters with sizes 3, 3, 4, 4, 5, 5 and a hidden size of 512. For computational efficiency, a 1-layer network is used by default unless stated otherwise. Other configurations of our implementations follow the common settings described in the corresponding literature or the released source code.
+
+# 4.2 Results
+
+The results are summarized in Table 2. Our models significantly outperform the compared methods by $17\%-22\%$ accuracy on the U-LID task, indicating the effectiveness of utilizing user information. Specifically, treating the user's language preference as a reviser performs best on U-LID, while
+
+
+Figure 3: Effects of the number of historical inputs on U-LID. Representation-based model is more robust.
+
+degrading quality on KB-21. We attribute this to the overconfidence of the revision-based model in the user's historical language distribution, which weakens the LID model's learning of the original text classification task. It is encouraging that the revision-based model without training yields a considerable result on U-LID while not affecting quality on KB-21 when fed the uniform historical distribution. By contrast, the representation-based model alleviates the overconfidence problem and achieves good performance on both U-LID and KB-21. Accordingly, we use the representation-based model as the default setting in subsequent analyses.
+
+# 4.3 Analysis
+
+Robustness Analysis A user's language preference greatly affects our model. The fewer historical inputs a user has, the higher the uncertainty of the estimated preference. Accordingly, it is necessary to assess the robustness of our model. Figure 3 shows the effect of the number of historical inputs. The revision-based model yields lower accuracy when user historical information is sparse, verifying our hypothesis that it suffers from overconfidence in the historical language distribution. In contrast, the representation-based model draws a smoother curve, which demonstrates its robustness.
+
+Qualitative Analysis Table 1 shows several identification results. In the first two cases, "velo" is a Spanish-French false friend. The third example is code-switching, in which "huawei y7" is a mobile phone model preceded by a Spanish word meaning "case". In the last case, "kello" is a misspelling of the English word "hello". The results indicate that the vanilla LID model fails to correctly identify these cases, while our model predicts distinct results that conform to the users' intentions.
+
+# 5 Conclusion
+
+We explore preference-aware LID. The major contributions of our work are four-fold: 1) We introduce the preference-aware LID task, which leverages user language preference to improve LID. We hope our work attracts more attention to this topic; 2) We propose a novel unsupervised strategy that guides the model to take the user's historical language distribution into account; 3) We collect U-LID and make it publicly available, which may contribute to subsequent research on LID; and 4) Extensive analyses indicate the effectiveness and robustness of our method, verifying that LID can profit from personalized information to make its results conform to user intentions.
+
+# Acknowledgments
+
+We thank anonymous reviewers for valuable comments. This research was supported by National Key R&D Program of China under Grant No.2018YFB1403202.
+
+# References
+
+Tianchi Bi, Liang Yao, Baosong Yang, Haibo Zhang, Weihua Luo, and Boxing Chen. 2020. Constraint translation candidates: A bridge between neural query translation and cross-lingual information retrieval. CoRR, abs/2010.13658.
+Andrea Ceolin. 2021. Comparing the performance of cnns and shallow models for language identification. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 102-112.
+Bharathi Raja Chakravarthi, Gaman Mihaela, Radu Tudor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lindén, Nikola Ljubesic, Niko Partanen, Ruba Priyadharshini, Christoph Purschke, et al. 2021. Findings of the vardial evaluation campaign 2021. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 1-11.
+Tommi Jauhiainen, Heidi Jauhiainen, and Krister Lindén. 2021. Naive bayes-based experiments in Romanian dialect identification. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 76-83.
+Tommi Jauhiainen, Marco Lui, Marcos Zampieri, Timothy Baldwin, and Krister Lindén. 2019. Automatic language identification in texts: A survey. volume 65, pages 675-782.
+Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomás Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 427-431. Association for Computational Linguistics.
+Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar; A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746-1751. ACL.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Tom Kocmi and Ondrej Bojar. 2017. Lanidenn: Multilingual language identification on character window. CoRR, abs/1701.03338.
+Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In EMNLP.
+Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong Liu, Dongyan Zhao, and Rui Yan. 2020. Cross-lingual low-resource set-to-description retrieval for global e-commerce. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8212-8219. AAAI Press.
+Huan Lin, Liang Yao, Baosong Yang, Dayiheng Liu, Haibo Zhang, Weihua Luo, Degen Huang, and Jinsong Su. 2021. Towards user-driven neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4008-4018.
+Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In *The 50th Annual Meeting of the Association for Computational Linguistics*, Proceedings of the System Demonstrations, July 10, 2012, Jeju Island, Korea, pages 25-30. The Association for Computer Linguistics.
+Marco Lui, Joy Han Lau, and Timothy Baldwin. 2014. Automatic detection and language identification of multilingual documents. Trans. Assoc. Comput. Linguistics, 2:27-40.
+Martin Majlis and Zdenek Zabokrtsky. 2012. Language richness of the web. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012, Istanbul, Turkey, May 23-25, 2012, pages 2927-2934. European Language Resources Association (ELRA).
+Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net.
+Roland Schäfer. 2016. Commoncow: Massively huge web corpora from commoncrawl data and a method to distribute them freely under restrictive EU copyright laws. In Proceedings of the Tenth International Conference on Language Resources and Evaluation LREC 2016, Portooroz, Slovenia, May 23-28, 2016. European Language Resources Association (ELRA).
+Juliane Stiller, Maria Gäde, and Vivien Petras. 2010. Ambiguity of queries and the challenges for query language detection. In CLEF 2010 LABs and Workshops, Notebook Papers, 22-23 September 2010, Padua, Italy, volume 1176 of CEUR Workshop Proceedings. CEUR-WS.org.
+Shuo Sun, Suzanne Sia, and Kevin Duh. 2020. Clireval: Evaluating machine translation as a cross-lingual information retrieval task. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 134-141. Association for Computational Linguistics.
+Ritiz Tambi, Ajinkya Kale, and Tracy Holloway King. 2020. Search query language identification using weak labeling. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 3520-3527. European Language Resources Association.
+Jörg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT - building open translation services for the world. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, EAMT 2020, Lisboa, Portugal, November 3-5, 2020, pages 479-480. European Association for Machine Translation.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Duy-Tin Vo and Richard Khoury. 2020. Language identification on massive datasets of short messages using an attention mechanism CNN. In IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2020, The Hague, Netherlands, December 7-10, 2020, pages 16-23. IEEE.
+
+Yu Wan, Baosong Yang, Derek Fai Wong, Lidia Sam Chao, Liang Yao, Haibo Zhang, and Boxing Chen. 2022. Challenges of Neural Machine Translation for Short Texts. Computational Linguistics, pages 1-21.
+Fei Xia, Carrie Lewis, and William D. Lewis. 2010. The problems of language identification within hugely multilingual data sets. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2010, 17-23 May 2010, Valletta, Malta. European Language Resources Association.
+Linlong Xu, Baosong Yang, Xiaoyu Lv, Tianchi Bi, Dayiheng Liu, and Haibo Zhang. 2021. Leveraging advantages of interactive and non-interactive models for vector-based cross-lingual information retrieval. CoRR, abs/2111.01992.
+Liang Yao, Baosong Yang, Haibo Zhang, Boxing Chen, and Weihua Luo. 2020a. Domain transfer based data augmentation for neural query translation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4521-4533, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Liang Yao, Baosong Yang, Haibo Zhang, Weihua Luo, and Boxing Chen. 2020b. Exploiting neural query translation into cross lingual information retrieval. CoRR, abs/2010.13659.
\ No newline at end of file
diff --git a/unsupervisedpreferenceawarelanguageidentification/images.zip b/unsupervisedpreferenceawarelanguageidentification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..faf17715fa5cfafcf1d8fb20a08519eeb829e773
--- /dev/null
+++ b/unsupervisedpreferenceawarelanguageidentification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:030679b25a0a9fb880bdf021317035c94cdf0f577fa645dc0955688688383e88
+size 186365
diff --git a/unsupervisedpreferenceawarelanguageidentification/layout.json b/unsupervisedpreferenceawarelanguageidentification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f38120428ce6581fe621693d63a2556d8d1bbc62
--- /dev/null
+++ b/unsupervisedpreferenceawarelanguageidentification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2822fe03ea1b1594fc2940cc8581ded790e4c3ad9369ed0501b0e90071d66431
+size 203869
diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_content_list.json b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5daa0fa36c097d768ccc5ae646267f124ed33f7a
--- /dev/null
+++ b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b421d7074f03099632420a148cb3303fc19d71103235f52a2e9fa81d477936b
+size 82699
diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_model.json b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ad7a64238eb17e752e1986904dd18d50788a01f5
--- /dev/null
+++ b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:baef046b6ae08f15d464534d4cba9052b36bc3492d1e0da06a705afcb86228f3
+size 98164
diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_origin.pdf b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cdc74890123de19d7fb80c9064334cd14413ae3b
--- /dev/null
+++ b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/c466d470-f7d8-427f-91ca-dd8d761f94e5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8b9e4b9f694cc4598f15a67f2570fd07bfb35fcf254065eb51b5b1291bd707f
+size 1510862
diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/full.md b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c4c306a33dd4dbfd8c84b84e22684141fac43e1c
--- /dev/null
+++ b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/full.md
@@ -0,0 +1,326 @@
+# Using Interactive Feedback to Improve the Accuracy and Explainability of Question Answering Systems Post-Deployment
+
+Zichao Li $^{1}$ , Prakhar Sharma $^{2}$ , Xing Han Lu $^{1}$ , Jackie C.K. Cheung $^{1}$ , Siva Reddy $^{1}$
+
+$^{1}$ Mila, McGill University
+
+$^{2}$ University of California, Los Angeles
+
+zichao.li@mila.quebec
+
+# Abstract
+
+Most research on question answering focuses on the pre-deployment stage; i.e., building an accurate model for deployment. In this paper, we ask the question: Can we improve QA systems further post-deployment based on user interactions? We focus on two kinds of improvements: 1) improving the QA system's performance itself, and 2) providing the model with the ability to explain the correctness or incorrectness of an answer. We collect a retrieval-based QA dataset, FEEDBACKQA, which contains interactive feedback from users. We collect this dataset by deploying a base QA system to crowdworkers who then engage with the system and provide feedback on the quality of its answers. The feedback contains both structured ratings and unstructured natural language explanations. We train a neural model with this feedback data that can generate explanations and re-score answer candidates. We show that feedback data not only improves the accuracy of the deployed QA system but also other stronger non-deployed systems. The generated explanations also help users make informed decisions about the correctness of answers.1
+
+# 1 Introduction
+
+Much of the recent excitement in question answering (QA) is in building high-performing models with carefully curated training datasets. Datasets like SQuAD (Rajpurkar et al., 2016), NaturalQuestions (Kwiatkowski et al., 2019) and CoQA (Reddy et al., 2019) have enabled rapid progress in this area. Most existing work focuses on the pre-deployment stage; i.e., training the best QA model before it is released to users. However, this stage is only one stage in the potential lifecycle of a QA system.
+
+In particular, an untapped resource is the large amounts of user interaction data produced after the initial deployment of the system. Gathering this
+
+data should in practice be relatively cheap, since users genuinely engage with QA systems (such as Google) for information needs and may provide feedback to improve their results.2
+
+Exploiting this kind of user interaction data presents new research challenges, since they typically consist of a variety of weak signals. For example, user clicks could indicate answer usefulness (Joachims, 2002), users could give structured feedback in the form of ratings to indicate the usefulness (Stiennon et al., 2020), or they could give unstructured feedback in natural language explanations on why an answer is correct or incorrect. User clicks have been widely studied in the field of information retrieval (Joachims, 2002). Here we study the usefulness of interactive feedback in the form of ratings and natural language explanations.
+
+Whilst there are different variants of QA tasks, this paper focuses primarily on retrieval-based QA (RQA; Chen et al. 2017; Lee et al. 2019). Given a question and a set of candidate answer passages, a model is trained to rank the correct answer passage the highest. In practice, when such a system is deployed, a user may engage with the system and provide feedback about the quality of the answers. Such feedback is called interactive feedback. Due to the lack of a dataset containing interactive feedback for RQA, we create FEEDBACKQA.
+
+FEEDBACKQA is a large-scale English QA dataset containing interactive feedback in two forms: user ratings (structured) and natural language explanations (unstructured) about the correctness of an answer. Figure 1 shows an example from FEEDBACKQA. The dataset construction has two stages: We first train a RQA model on the questions and passages, then deploy it on a crowdsourcing platform. Next, crowdworkers engage with this system and provide interactive feedback. To make our dataset practically useful, we focus on
+
+
+Figure 1: Users interact with the deployed QA model and give feedback. Feedback contains a rating (bad, acceptable, could be improved, excellent) and a natural language explanation.
+
+question answering about the Covid-19 pandemic, based on content from public health agencies. The base model for FEEDBACKQA is built on 28k questions and 3k passages from various agencies. We collect 9k interactive feedback samples for the base model.
+
+We investigate the usefulness of the feedback for improving the RQA system in terms of two aspects: answer accuracy and explainability. Specifically, we are motivated by two questions: 1) Can we improve the answer accuracy of RQA models by learning from the interactive feedback? and 2) Can we learn to generate explanations that help humans to discern correct and incorrect answers?
+
+To address these questions, we use the feedback data to train models that rerank the original answers as well as provide an explanation for the answers. Our experiments show that this approach improves the accuracy not only of the base QA model for which feedback is collected but also of other strong models for which feedback data is not collected. Moreover, we conduct human evaluations to verify the usefulness of explanations and find that the generated natural language explanations help users make informed and accurate decisions on accepting or rejecting answer candidates.
+
+Our contributions are as follows:
+
+1. We create the first retrieval-based QA dataset containing interactive feedback.
+2. We demonstrate a simple method of using the feedback data to increase the accuracy and explainability of RQA systems.
+3. We show that the feedback data improve not only the deployed model but also a stronger non-deployed model.
+
+# 2 FEEDBACKQA Dataset
+
+Recently, there have been efforts to collect feedback data in the form of explanations for natural language understanding tasks (Camburu et al. 2018; Rajani et al. 2019, inter alia). These contain explanations only for ground-truth predictions for a given input sampled from the training data, without any user-system interaction. Instead, we collect user feedback after deploying a RQA system, thereby obtaining feedback for both correct and incorrect predictions. Table 1 presents a comprehensive comparison of FEEDBACKQA and existing natural language understanding (NLU) datasets with explanation data.
+
+# 2.1 Dataset collection
+
+In order to collect post-deployment feedback as in a real-world setting, we divide the data collection into two stages: pre-deployment (of a RQA model) and post-deployment.
+
+Stage 1: Pre-deployment of a QA system We scrape Covid-19-related content from the official websites of the WHO and of the US, UK, Canadian,$^{3}$ and Australian governments. We extract the questions and answer passages in the FAQ sections. To scale up the dataset, we additionally clean the scraped pages and extract further passages, for which we crowdsource corresponding questions written as if users were asking them. We present details on this annotation process in Appendix A. We use this dataset to train a base RQA model for each source separately and deploy them. For the base model, we use a BERT-based dense retriever (Karpukhin
+
+| Datasets | Task | Feedback Type | Interactive Feedback | Feedback for incorrect predictions |
| --- | --- | --- | --- | --- |
| e-SNLI (Camburu et al., 2018) | NLI | Free-form | X | X |
| CoS-E (Rajani et al., 2019) | Commonsense QA | Free-form | X | X |
| LIAR-PLUS (Alhindi et al., 2018) | Fact checking | Free-form | X | X |
| QED (Lamm et al., 2021) | Reading comprehension | Structured | X | X |
| NExT (Wang et al., 2019) | Text classification | Structured | X | X |
| FEEDBACKQA | Retrieval-based QA | Structured & Free-form | ✓ | ✓ |
+
+Table 1: Comparison of FEEDBACKQA with existing NLU datasets containing feedback in the form of structured representations (according to a schema) or natural language explanations (free-form).
+
+| Domain | #Passages | #Questions | #Feedback |
| --- | --- | --- | --- |
| Australia | 584 | 1783 | 2264 |
| Canada | 587 | 8844 | – |
| UK | 956 | 2874 | 3668 |
| US | 598 | 13533 | 2628 |
| WHO | 226 | 688 | 874 |
| Overall | 2951 | 27722 | 9434 |
+
+Table 2: Number of samples in different domains of FEEDBACKQA. We split the data into train/validation/test sets in the ratio of $0.7:0.1:0.2$ .
+
+et al., 2020) combined with a poly-encoder (Humeau et al., 2020) (more details are in Section 3.1).
+
+Stage 2: Post-deployment of a QA system Since each domain has several hundred passages (Table 2), it is hard for a crowdworker to ask questions that cover a range of topics in each source. We thus collect questions for individual passages beforehand, similarly to Stage 1, and use these as interactive questions. The question and the top-2 predictions of the model are shown to the user, who gives feedback for each question-answer pair. The collected feedback consists of a rating, selected from excellent, acceptable, could be improved, and bad, and a natural language explanation elaborating on the strengths and/or weaknesses of the answer. For each QA pair, we elicit feedback from three different workers. We adopted additional strategies to ensure the quality of the feedback data, the details of which are available in Appendix B. The resulting dataset statistics are shown in Table 2. In order to test whether interactive feedback also helps in out-of-distribution settings, we did not collect feedback for one of the domains (Canada).
+
+# 2.2 FEEDBACKQA analysis
+
+Table 3 shows examples of the feedback data, including both ratings and explanations. We find that explanations typically contain review-style text indicating the quality of the answer, or statements summarizing which parts are correct and why. Therefore, we analyze a sample of explanations using the following schema:
+
+Review Several explanations start with a generic review such as This directly answers the question or It is irrelevant to the question. Sometimes users also highlight aspects of the answer that are good or can be improved. For instance, ... could improve grammatically... suggests that the answer could be improved in terms of writing.
+
+Summary of useful content refers to the part of the answer that actually answers the question;
+
+Summary of irrelevant content points to information that is not useful for answering the question, such as off-topic content or content addressing incorrect aspects;
+
+Summary of missing content points to the information the answer fails to cover.
+
+We randomly sample 100 explanations and annotate them. Figure 2 shows the distribution of the component types present in explanations for each rating label. Explanations of all ratings usually contain some review-type information. Explanations for answers labeled excellent or acceptable predominantly indicate the parts of the answer that are useful, while explanations for answers that could be improved indicate parts that are useful, wrong or missing. As expected, bad answers often receive explanations that highlight parts that are incorrect or missing.
+
+# 3 Experimental Setup
+
+FEEDBACKQA contains two types of data. One is pre-deployment data $\mathcal{D}_{\mathrm{pre}} = (Q,A^{+},\mathcal{A})$, where $Q$ is a question paired with its gold-standard answer passage $A^{+}$ from the domain corpus $\mathcal{A}$. The other is post-deployment feedback data $\mathcal{D}_{\mathrm{feed}} = (Q,A,Y,E)$, where $Q$ is a question paired with a candidate answer $A\in \mathcal{A}$ and corresponding feedback for the answer. The feedback consists of a rating $Y$ and an explanation $E$. We build
+
+| Rating label | Explanation |
| --- | --- |
| Excellent | This answers the question directly. This answer provides information and recommendation on how people and adolescent can protect themselves when going online during the Covid-19 pandemic. |
| Acceptable | This answer, while adequate, could give more information as this is a sparse answer for a bigger question of what one can do for elderly people during the pandemic. |
| Could be improved | The answer relates and answers the question, but could improve grammatically and omit the "yes" |
| Could be improved | The answer is about some of the online risks but not about how to protect against them. |
| Bad | This does not answer the question. This information is about applying visa to work in critical sector. It does not provide any information on applying for Covid-19 pandemic visa event as asked in the question. |
+
+Table 3: Examples of explanations and their associated rating labels. Spans are colored by component type: generic and aspect review; summary of useful content; summary of irrelevant content; summary of missing content.
+
+
+Figure 2: Distribution of the number of components in 100 natural language feedback explanations for each rating label.
+
+two kinds of models on pre- and post-deployment data: RQA models on the pre-deployment data that retrieve candidate answers for a given question, and feedback-enhanced RQA models on the post-deployment data that rate an answer for a given question as well as generate an explanation for the answer. We use this rating to rerank the answer candidates; therefore, in our setting, a feedback-enhanced RQA model is essentially a reranker. Since real-world QA systems evolve quickly, we decouple the reranker from the RQA model by giving it its own parameters and train it on the feedback data. This allows the reranker to be reused across many RQA models. We leave other ways of enhancing RQA models with feedback data for future work. Below, we describe the architectures for the RQA models and feedback-based rerankers.
+
+# 3.1 RQA Models (Pre-deployment)
+
+We use dense passage retrievers (Karpukhin et al., 2020) to build the RQA models, where the similarity between the question embedding and the passage embedding is used to rank candidates. We use two variants of pre-trained models to obtain the embeddings: 1) BERT (Devlin et al., 2019), a pretrained Transformer encoder; and 2) BART (Lewis et al., 2020), a pretrained Transformer encoder-decoder. For BERT, we use average pooling of token representations as the embedding, whereas for BART we use the decoder's final state. While Karpukhin et al. use question-agnostic passage representations, we use a poly-encoder (Humeau et al., 2020) to build question-sensitive passage representations. In a poly-encoder, each passage is represented as multiple encodings, initially independent of the question; a simple attention between the question and these passage encodings is then used to compute a question-sensitive passage representation, which is used to compute the relevance of the passage for a given query. Humeau et al. show that the poly-encoder architecture is superior to alternatives like the bi-encoder (Karpukhin et al., 2020) without much sacrifice in computational efficiency.$^4$
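+
+For illustration, a minimal sketch of such question-sensitive scoring (tensor shapes and names are illustrative, not our exact implementation):
+
+```python
+import torch
+
+def poly_encoder_score(q_emb: torch.Tensor, passage_codes: torch.Tensor) -> torch.Tensor:
+    """Question-sensitive passage relevance in the spirit of the poly-encoder.
+
+    q_emb:         (d,)   question embedding
+    passage_codes: (m, d) multiple encodings representing one passage
+    """
+    # Attend over the passage codes with the question embedding as the query
+    attn = torch.softmax(passage_codes @ q_emb, dim=0)          # (m,)
+    passage_emb = (attn.unsqueeze(-1) * passage_codes).sum(0)   # (d,) question-sensitive representation
+    # S(Q, A): dot-product similarity used to rank candidates
+    return q_emb @ passage_emb
+```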
+
+Given pre-deployment training data $\mathcal{D}_{\mathrm{pre}} = (Q, A^{+}, \mathcal{A})$ , the RQA model parameterized by $\theta$ is trained to maximize the log-likelihood of the correct answer:
+
+$$
+\mathcal{J}_{\theta} = \log P_{\theta}(A^{+} \mid Q, \mathcal{A})
+$$
+
+$$
+P_{\theta}(A^{i} \mid Q, \mathcal{A}) = \frac{\exp\left(S(Q, A^{i})\right)}{\sum_{A \in \mathcal{A}} \exp\left(S(Q, A)\right)} \tag{1}
+$$
+
+Here $S(Q, A)$ denotes the dot product similarity between the question and passage embedding. As it is inefficient to compute the denominator over all passages during training, we adopt an in-batch negative sampling technique (Humeau et al., 2020), merging all of the $A^{+}$ in the same minibatch into a set of candidates.
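+
+A minimal sketch of the in-batch negative objective, assuming question and gold-answer embeddings have already been computed by the encoders (not our exact implementation):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def in_batch_negative_loss(q_emb: torch.Tensor, a_emb: torch.Tensor) -> torch.Tensor:
+    """Treat every other gold answer in the minibatch as a negative candidate.
+
+    q_emb: (B, d) question embeddings
+    a_emb: (B, d) gold answer passage embeddings, aligned row-wise with q_emb
+    """
+    scores = q_emb @ a_emb.T                    # (B, B) dot-product similarities S(Q, A)
+    targets = torch.arange(q_emb.size(0), device=q_emb.device)
+    # Maximising log P(A+ | Q, A) over the in-batch candidates is cross-entropy on the diagonal
+    return F.cross_entropy(scores, targets)
+```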
+
+# 3.2 Feedback-enhanced RQA models (Post-deployment)
+
+On the post-deployment data $\mathcal{D}_{\mathrm{feed}} = (Q, A, Y, E)$, we train a reranker that assigns a rating to an answer and also generates an explanation. We use BART parameterized by $\phi$ as the base of EXPLAINRATE because it is easy to adapt to both explanation generation and rating classification. The encoder of the BART model takes as input the concatenation $[Q; \mathrm{SEP}; A]$, and the decoder generates an explanation $E$; after that, an additional fully-connected network predicts the rating $Y$ given the last hidden states of the decoder. The rating is used to score QA pairs, whereas the generated explanation is passed to humans to help them make an informed decision about accepting the answer. We also implement a variant that directly produces a rating without generating an explanation. Since each candidate answer is annotated by different annotators, an answer could have multiple rating labels. To account for this, we minimize the KL-divergence between the target label distribution and the predicted distribution:
+
+$$
+\mathcal{J}_{\phi} = - D_{\mathrm{KL}}\left(P(Y \mid Q, A) \;\middle\|\; P_{\phi}(Y \mid Q, A)\right), \qquad
+P(Y_{i} = y \mid Q_{i}, A_{i}) = \frac{C_{y,i}}{\sum_{y'} C_{y',i}} \tag{2}
+$$
+
+where $C_{y,i}$ is the count of the rating label $y$ for the $i$ -th feedback.
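+
+A minimal sketch of this objective, assuming the annotator counts $C_{y,i}$ are available as a tensor (the four-way rating head and the variable names are illustrative):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def rating_loss(logits: torch.Tensor, label_counts: torch.Tensor) -> torch.Tensor:
+    """KL divergence between the empirical rating distribution and the model prediction.
+
+    logits:       (B, 4) unnormalised scores from the rating head
+    label_counts: (B, 4) how many annotators chose each rating label, i.e. C_{y,i}
+    """
+    target = label_counts / label_counts.sum(dim=-1, keepdim=True)  # empirical P(Y | Q, A)
+    log_pred = F.log_softmax(logits, dim=-1)                        # model log P_phi(Y | Q, A)
+    # Minimising this KL term corresponds to maximising J_phi in Eq. (2)
+    return F.kl_div(log_pred, target, reduction="batchmean")
+```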
+
+In order to enhance an RQA model with the reranker, we first select the top-$k$ candidates according to the RQA model (in practice we set $k = 5$). The reranker takes as input the concatenation of the question and each candidate and predicts a rating for each answer. We simply sum the scores from the RQA model and the reranker. In practice, we found that using the reranker's probability of the excellent label worked better than the normalized expectation of the rating score (from 0 for the label bad to 3 for excellent). So, we score the candidate answers as follows:
+
+$$
+S(A \mid \mathcal{A}, Q) = P_{\theta}(A = A^{+} \mid \mathcal{A}, Q) + P_{\phi}(y = \textit{excellent} \mid A, Q) \tag{3}
+$$
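+
+A minimal sketch of this reranking step, assuming the two probabilities have already been computed for each of the top-$k$ candidates:
+
+```python
+def rerank(candidates, rqa_probs, excellent_probs):
+    """Score top-k candidates with Eq. (3) and return them best-first.
+
+    candidates:      list of k answer passages retrieved by the RQA model
+    rqa_probs:       list of k floats, P_theta(A = A+ | A, Q)
+    excellent_probs: list of k floats, P_phi(y = excellent | A, Q) from the reranker
+    """
+    scored = [
+        (answer, p_rqa + p_exc)
+        for answer, p_rqa, p_exc in zip(candidates, rqa_probs, excellent_probs)
+    ]
+    # Highest combined score first
+    return sorted(scored, key=lambda pair: pair[1], reverse=True)
+```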
+
+# 4 Experiments and Results
+
+We organize the experiments based on the following research questions:
+
+- RQ1: Does feedback data improve the base RQA model accuracy?
+
+- RQ2: Does feedback data improve the accuracy of RQA models that are stronger than the base model?
+- RQ3: Do explanations aid humans in discerning between correct and incorrect answers?
+
+We answer these questions by comparing the RQA models with the feedback-enhanced RQA models. The implementation and hyper-parameter details of each model are included in Appendix D.
+
+# 4.1 RQ1: Does feedback data improve the base RQA model?
+
+Model details. Our base model is the BERT RQA model that we deployed to collect the feedback data used to train the other models (Section 3.1).
+
+For the feedback-enhanced RQA model, we use the BART-based reranker described in Section 3.2. We train one single model for all domains. We call this FEEDBACKRERANKER. We compare two variants of FEEDBACKRERANKER on the validation set: one directly predicts the rating, while the other first generates an explanation and then the rating. We found that the former performs slightly better (Appendix Table 10). We conjecture that learning an explanation-based rating model from the limited feedback data is a harder problem than directly learning a rating model. Therefore, for this experiment, we only use the rating prediction model (but note that the explanation-based rating model is already superior to the base RQA model).
+
+To eliminate the confounding factor of the larger number of model parameters introduced by the reranker, we train another reranker, VANILLARERANKER, on the pre-deployment data and compare it against the reranker trained on the feedback data. To convert the pre-deployment data into the reranker's expected format, we consider a correct answer's rating label to be excellent, and the randomly sampled answer candidates$^{5}$ to be bad. Note that this dataset is much larger than the feedback data.
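+
+A minimal sketch of this conversion (the field names and the negative sampling details are illustrative):
+
+```python
+import random
+
+def to_reranker_examples(question, gold_answer, corpus, n_negatives=1):
+    """Turn one (Q, A+, corpus) example into reranker training pairs:
+    the gold answer is labelled 'excellent', sampled candidates are labelled 'bad'."""
+    examples = [{"question": question, "answer": gold_answer, "rating": "excellent"}]
+    negatives = random.sample([a for a in corpus if a != gold_answer], n_negatives)
+    examples += [{"question": question, "answer": a, "rating": "bad"} for a in negatives]
+    return examples
+```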
+
+Finally, we combine the training data of FEEDBACKRERANKER and VANILLARERANKER and train the third reranker called COMBINEDRERANKER.
+
+To measure retrieval accuracy, we adopt Precision@1 (P@1) as our main metric.
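+
+For reference, P@1 is simply the fraction of questions whose top-ranked passage is the gold answer:
+
+```python
+def precision_at_1(ranked_lists, gold_answers):
+    """ranked_lists: per-question candidate passages in ranked order;
+    gold_answers: the gold passage for each question."""
+    hits = sum(1 for ranked, gold in zip(ranked_lists, gold_answers) if ranked[0] == gold)
+    return hits / len(gold_answers)
+```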
+
+| Methods | Australia | US | Canada | UK | WHO | All | Beats |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BERT RQA model ◆ | 47.25 | 65.30 | 81.49 | 48.50 | 81.19 | 64.75 | None |
| + FEEDBACKRERANKER * | 55.13 | 65.97 | 83.74 | 51.07 | 77.05 | 66.59 | ◆※ |
| + VANILLARERANKER ◆ | 54.29 | 64.80 | 83.20 | 49.63 | 77.96 | 65.98 | ◆ |
| + COMBINEDRERANKER ◆ | 55.63 | 67.54 | 84.99 | 53.21 | 78.51 | 67.97 | ◆※※ |
+
+Table 4: Accuracy of the BERT RQA model, i.e., the deployed model, and its enhanced variants on the test set. FEEDBACKRERANKER is trained on the post-deployment feedback data, VANILLARERANKER is trained on the pre-deployment data, and COMBINEDRERANKER is trained on both. The column Beats indicates the competing methods that the model significantly outperforms ($p$-value $< 0.05$). All of the results are averaged across 3 runs.
+
+| Methods | Australia | US | Canada | UK | WHO | All | Beats |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BART RQA model Y | 52.88 | 68.47 | 82.49 | 51.29 | 81.97 | 67.42 | None |
| + FEEDBACKRERANKER Y | 54.78 | 70.45 | 84.38 | 53.47 | 82.51 | 69.12 | Y II |
| + VANILLARERANKER II | 53.09 | 70.40 | 82.76 | 53.08 | 82.33 | 68.33 | Y |
| + COMBINEDRERANKER | 55.27 | 71.45 | 85.35 | 54.83 | 83.61 | 70.10 | Y Y II |
+
+Table 5: Accuracy of the BART RQA model and its enhanced variants on the test set. Results are averaged across 3 runs.
+
+Results. As shown in Table 4, the feedback-enhanced RQA model is significantly better than the base RQA model, by 1.84 points. Although VANILLARERANKER improves upon the base model, it is weaker than FEEDBACKRERANKER, and COMBINEDRERANKER is a much stronger model than any of the other models, indicating that the learning signals present in the feedback data and the pre-deployment data are complementary. Moreover, we also see improved performance on the Canada domain, although feedback data was not collected for that domain.
+
+From these experiments, we conclude that feedback data can improve the accuracy of the base RQA model, not only for the domains for which feedback data is available but also for unseen domains (Canada).
+
+# 4.2 RQ2: Does feedback data improve the accuracy of RQA models that are stronger than the base model?
+
+If feedback data were only useful for the base RQA model, then its usefulness would be questionable, since the RQA development cycle is continuous and the base RQA model will eventually be replaced with a better model. For example, we find that the BART-based dense retriever is superior to the BERT RQA model: Table 9 in Appendix E shows results on the validation set, which indicate that the BART RQA model's overall performance is nearly 4 points better than that of the BERT RQA model.
+
+To answer RQ2, we use the same FEEDBACKRERANKER and VANILLARERANKER to rescore the BART RQA predictions, even though feedback data was not collected for this model. As Table 5 shows, the resulting model outperforms the BART RQA model, indicating that the feedback data is still useful. Again, FEEDBACKRERANKER is superior to VANILLARERANKER, although the feedback data has fewer samples than the pre-deployment data, and COMBINEDRERANKER has the best performance.
+
+These results suggest that the feedback data is useful not only for the base RQA model but also other stronger RQA models.
+
+# 4.3 RQ3: Do explanations aid humans in discerning between correct and incorrect answers?
+
+We conduct a human evaluation to investigate whether explanations are useful from the perspective of users. Unfortunately, rigorous definitions and automatic metrics of explainability remain open research problems. In this work, we simulate a real-world scenario in which the user is presented with an answer returned by the system as well as an explanation for the answer, and is asked to determine whether the answer is acceptable or not. Jacovi and Goldberg (2020) advocate utility metrics as proxies for measuring the usefulness of explanations instead of directly evaluating an explanation, since plausible explanations do not necessarily increase the utility of the resulting system. Inspired by their findings, we measure whether explanations
+
| Explanation | Accuracy | Agreement |
| --- | --- | --- |
| Blank | 69.17 | 0.31 |
| Human-written | 88.33 | 0.80 |
| BART feedback model | 81.67 | 0.71 |
| BART summarization model | 74.17 | 0.30 |
+
+Table 6: Human evaluation results of the usefulness of explanations. Accuracy measures the utility of explanations in selecting the correct rating label for an answer, whereas agreement measures whether explanations invoke the same behaviour pattern across users.
+
+can: 1) help users to make accurate decisions when judging an answer (with respect to a ground truth) and 2) improve the agreement among users in accepting/rejecting an answer candidate. The former measures the utility of an explanation, and the latter measures whether the explanations invoke the same behavioral pattern across different users irrespective of the utility of the explanation. Note that agreement and utility are not tightly coupled. For example, agreement can be high even when the utility of an explanation is low, such as when the explanation misleads end users into consistently selecting a wrong answer (González et al., 2021; Bansal et al., 2021).
+
+We sample 60 feedback examples from the hidden split of the feedback data $\mathcal{D}_{\mathrm{feed}} = (Q, A, Y, E)$ for evaluation purposes. We evaluate four experimental setups on these samples, which vary in the type of explanation shown to the end users: 1) no explanation; 2) human-written explanations; 3) explanations generated by the BART model trained on the feedback data (Section 3.2); and 4) a summary of the answer candidate generated by a strong fine-tuned BART-based summarization model. The last setting is inspired by the observation in Section 2.2 that a large portion of explanations contain summaries of the question/answer content. We investigate whether a conventional summary of an answer is as useful as an explanation. For each of these setups, two crowdworkers assign a rating label to each answer candidate indicating the quality of the answer. Each setup has its own set of workers in order to avoid information leakage across setups (this simulates the A/B testing often used by production systems).
+
+We measure the workers' accuracy (average of the two workers) in determining the correctness of an answer with respect to the original annotation
+
+Question: What are the restrictions to travelers landing in the US?
+
+Answer: ... Under current federal regulations, pilots must report all illnesses and deaths to CDC before arriving to a US destination. According to CDC disease protocols, if a sick traveler is considered to be a public health risk, CDC works with local and state health departments and international public health agencies to contact passengers and crew exposed to that sick traveler. Be sure to give the airline your current contact information when booking your ticket so you can be notified if you are exposed to a sick traveler on a flight. ...
+
+Generated explanation: This answer is irrelevant to the question. The question is asking about restrictions on travelers landing in the US and this answer provides information on what to do if there is a sick passenger on an international or domestic flight.
+
+Human explanation: This information does not answer the question. The question is asking for information on the restrictions to travelers landing in the US and this answer contains information on pilots reports on illnesses and deaths before arriving to a US destinations and airline notification when passengers are exposed to sick travelers on a flight.
+
+Generated Summary: The Centers for Disease Control and Prevention (CDC) is working with airlines to help prevent illness and death on flights.
+
+Question: Should I wear gloves when I go out in public to prevent Covid-19?
+
+Answer: No, WHO does not recommend the regular use of gloves by people in the community. The wearing of gloves may increase risks, since it can lead to self-infection or transmission to others through touching contaminated surfaces which is not followed by removal of gloves and hand hygiene. Therefore, in public places such as supermarkets, in addition to physical distancing...
+
+Generated Explanation: This answer is excellent because it unequivocally states that gloves are not recommended. Therefore, it answers the question directly. It goes on to explain why gloves aren't recommended. This is a great answer.
+
+Human explanation: The question is answered directly. The answer states not to wear gloves in public.
+
+Generated Summary: The World Health Organization (WHO) has issued a guidance on the use of gloves in public places.
+
+Question: Can students work in aged care?
+
+Answer:...The Australian Government has announced that residential aged care facilities should take extra precautions to keep residents safe from COVID-19. Both individuals and management need to take responsibility for the health of visitors and staff at facilities to protect our most vulnerable community members. Aged care facilities will: closely monitor the health of staff...
+
+Generated Explanation: This answer does not address whether or not students are allowed to work in aged care. This answer only provides information on precautions to be taken by aged care facilities to keep residents safe.
+
+Human Explanation: The information here give explanation on guideline that aged care facility staffs should put in place and did not say anything about student working in aged care facility.
+
+Generated Summary: Residents in aged care facilities across Australia are being urged to take extra precautions to prevent the spread of a deadly virus.
+
+Table 7: Examples of different explanation types: model-generated and human-written explanations, and model-generated summaries.
+
+in FEEDBACKQA, as well as compute the agreement of workers with each other using Spearman correlation. Table 6 presents the results. All explanation types improve accuracy compared to the setup with no explanations. This could be because any explanation forces the worker to think more about an answer. The human-written explanations have the highest utility and also lead to the highest agreement. Both the human-written explanations and the explanations generated by the BART feedback model have more utility and higher agreement than the BART summarization model. In fact, the summarization model leads to slightly lower agreement than showing no explanation at all.
+
+These results indicate that explanations based on feedback data are useful for end users in discerning correct and incorrect answers, and they also improve the agreement across users.
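+
+A minimal sketch of how the two measures are computed from two workers' binary accept/reject decisions, using Spearman correlation for agreement (the bookkeeping details are illustrative):
+
+```python
+from scipy.stats import spearmanr
+
+def utility_and_agreement(worker_a, worker_b, gold):
+    """worker_a, worker_b, gold: equal-length lists of 0/1 accept decisions per QA pair."""
+    # Utility: accuracy against the gold label, averaged over the two workers
+    acc_a = sum(int(a == g) for a, g in zip(worker_a, gold)) / len(gold)
+    acc_b = sum(int(b == g) for b, g in zip(worker_b, gold)) / len(gold)
+    accuracy = (acc_a + acc_b) / 2
+    # Agreement: Spearman correlation between the two workers' decisions
+    agreement, _ = spearmanr(worker_a, worker_b)
+    return accuracy, agreement
+```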
+
+Table 7 shows some examples of explanations that help users make more informed and accurate decisions. In the first example, the model-generated explanation points out the gap between the question and the answer candidate, even though the two share a large number of keywords. Meanwhile, human explanations are generally more abstractive and shorter (e.g., see the second example).
+
+# 5 Related work
+
+Retrieval-based question answering has been widely studied, from early work on rule-based systems (Kwok et al., 2001) to recently proposed neural models (Yang et al., 2019; Karpukhin et al., 2020). Most existing work focuses on improving accuracy and efficiency through modifications of the neural architecture (Karpukhin et al., 2020; Humeau et al., 2020), incorporation of external knowledge (Ferrucci et al., 2010), and retrieval strategies (Kratzwald and Feuerriegel, 2018). These methods focus on the pre-deployment stage of RQA models.
+
+By contrast, we investigate methods to improve a RQA model post-deployment with interactive feedback. The proposed methods are agnostic to the architecture design and training methods of the base RQA model.
+
+Learning from user feedback has been a long-standing problem in natural language processing. Whilst earlier work proposes methods for using implicit feedback—for instance, using click-through data for document ranking (Joachims, 2002)—recent work has explored explicit feedback such as explanations of incorrect responses by chatbots (Li et al., 2016; Weston, 2016) and correctness labels in conversational question answering and text classification (Campos et al., 2020). However, the feedback in these studies is automatically generated using heuristics, whereas our feedback data is collected from human users. Hancock et al. (2019) collect suggested responses from users to improve a chatbot, while we investigate the effect of natural feedback for RQA models.
+
+Explainability and interpretability have received increasing attention in the NLP community recently. This paper can be aligned with recent efforts in collecting and harnessing explanation data for language understanding and reasoning tasks, such as natural language inference (Camburu et al., 2018; Kumar and Talukdar, 2020), commonsense question answering (Rajani et al., 2019), document classification (Srivastava et al., 2017), relation classification (Murty et al., 2020), reading comprehension (Lamm et al., 2021), and fact checking (Alhindi et al., 2018). The type of feedback in FEEDBACKQA differs from existing work in several aspects: 1) FEEDBACKQA has feedback data for both positive and negative examples, while most other datasets only contain explanations for positive ones; 2) FEEDBACKQA has both structured and unstructured feedback, while previous work mainly focuses on one of them; 3) the feedback in FEEDBACKQA is collected post-deployment; 4) while previous work aims to help users interpret model decisions, we investigate whether feedback-based explanations increase the utility of the deployed system.
+
+# 6 Conclusion
+
+In this work, we investigate the usefulness of feedback data in retrieval-based question answering. We collect a new dataset, FEEDBACKQA, which contains interactive feedback in the form of ratings and natural language explanations. We propose a method to improve the RQA model with the feedback data, training a reranker to select an answer candidate as well as generate an explanation. We find that this approach increases the accuracy not only of the deployed model but also of other stronger models for which feedback data is not collected. Moreover, our human evaluation results show that both human-written and model-generated explanations help users to make informed and accurate decisions about whether to accept an answer.
+
+# 7 Limitations and Ethical consideration
+
+The training and inference of a reranker with feedback data increases the usage of computational resources. We note that our feedback collection setup is a simulation of a deployed model. The feedback in real-world systems may contain sensitive information that should be handled with care. Moreover, real-world feedback could be noisy and is prone to adversarial attacks.
+
+# 8 Acknowledgements
+
+We would like to thank Andreas Madsen, Nathan Schucher, Nick Meade and Makesh Narsimhan for their discussion and feedback on our manuscript. We would also like to thank the Mila Applied Research team, especially Joumana Ghosn, Mirko Bronzi, Jeremy Pinto, and Cem Subakan whose initial work on the Covid-19 chatbot inspired this work. This work is funded by Samsung Electronics. JC and SR acknowledge the support of the NSERC Discovery Grant program and the Canada CIFAR AI Chair program. The computational resource for this project is partly supported by Compute Canada.
+
+# References
+
+Tariq Alhindi, Savvas Petridis, and Smaranda Muresan. 2018. Where is your evidence: Improving fact-checking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERIFICATION (FEVER), pages 85-90. Association for Computational Linguistics.
+Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? the effect of ai explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-16.
+Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995-1005.
+Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural Language Inference with Natural Language Explanations. In Advances in Neural Information Processing Systems 31, pages 9539-9549.
+Jon Ander Campos, Kyunghyun Cho, Arantxa Otegi, Aitor Soroa, Eneko Agirre, and Gorka Azkune. 2020. Improving conversational question answering systems after deployment using feedback-weighted learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2561-2571.
+Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
+David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building watson: An overview of the deepqa project. AI magazine, 31(3):59-79.
+Ana Valeria González, Gagan Bansal, Angela Fan, Yashar Mehdad, Robin Jia, and Srinivasan Iyer. 2021. Do explanations help users detect errors in open-domain QA? an evaluation of spoken vs. visual explanations. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1103-1116.
+Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot! In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3667-3684.
+Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969.
+Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198-4205. Association for Computational Linguistics.
+Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In SIGKDD. Association for Computing Machinery.
+Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781.
+Bernhard Kratzwald and Stefan Feuerriegel. 2018. Adaptive document retrieval for deep question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 576-581.
+Sawan Kumar and Partha Talukdar. 2020. Nile: Natural language inference with faithful natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8730-8742.
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.
+Cody CT Kwok, Oren Etzioni, and Daniel S Weld. 2001. Scaling question answering to the web. In Proceedings of the 10th international conference on World Wide Web, pages 150-161.
+Matthew Lamm, Jennimaria Palomaki, Chris Alberti, Daniel Andor, Eunsol Choi, Livio Baldini Soares, and Michael Collins. 2021. Qed: A framework and dataset for explanations in question answering. Transactions of the Association for Computational Linguistics, 9:790-806.
+Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.
+Jiwei Li, Alexander H Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. 2016. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823.
+Alexander H Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. Parlai: A dialog research software platform. In EMNLP (System Demonstrations).
+Shikhar Murty, Pang Wei Koh, and Percy Liang. 2020. Expbert: Representation engineering with natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2106-2113.
+
+Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.
+Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266.
+Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 1527-1536.
+Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. In Advances in Neural Information Processing Systems, volume 33, pages 3008-3021.
+Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, and Xiang Ren. 2019. Learning from explanations with neural execution tree. In International Conference on Learning Representations.
+Jason E Weston. 2016. Dialog-based language learning. Advances in Neural Information Processing Systems, 29:829-837.
+Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 72-77.
+
+# A Details of Data Collection
+
+Passage curating After scraping the websites, we collect the questions and answers from the Frequently-Asked-Questions pages directly. For pages without explicit questions and answers, we extract the text content as passages and proceed to question collection.
+
+Question collection We hire crowdworkers from English-speaking countries on the Amazon MTurk platform to write questions conditioned on the extracted passages. The workers are instructed not to ask overly generic questions or to copy and paste directly from the passages.
+
+A qualification test with two sections is used to select the best-performing workers. In the first section, workers are asked to distinguish good questions from bad ones for given passages. The good and bad questions were carefully designed to reflect the various kinds of low-quality submissions we had received in the demo run. In the second section, workers write a question for a given passage. We manually review and score these questions. We paid \$0.20 to workers for each question.
+
+# B Details of Feedback Collection
+
+We asked the workers to provide a rating and natural language feedback for question-answer pairs. For the qualification test, we labeled the ratings for multiple question-answer pairs ourselves, and workers were selected based on the accuracy of their rating labels. We paid \$0.40 to workers for each piece of feedback.
+
+# C Details of Human Evaluation
+
+Worker assignment is done so as to make sure that a worker rates the same question-answer pair only once. Otherwise there is a risk that a worker blindly gives the same judgement for a given QA pair.
+
+We adopt a qualification test similar to the one used for feedback collection. We also include some dummy QA pairs, whose answer candidates were randomly sampled from the corpora, and filter out the workers who fail to recognize them. We paid \$0.30 to workers for each QA pair.
+
+# D Implementation Details
+
+Throughout the experiments, we used four 32-GB Nvidia Tesla V100 GPUs. Hyperparameter (learning rate, dropout rate) optimisation is performed
+
+| Model | lr | Dropout |
| --- | --- | --- |
| BERT (Bi-encoder) | 5.0e-05 | 0.1 |
| BERT (Poly-encoder) | 5.0e-05 | 0.1 |
| BART (Bi-encoder) | 9.53e-05 | 0.01026 |
| BART (Poly-encoder) | 4.34e-05 | 0.1859 |
| FEEDBACKRERANKER | 5.0e-05 | 0.1 |
+
+Table 8: Hyper-parameter settings of the different variants of the QA models as well as EXPLAINRATE and RATEONLY. There is no pooling operation in the latter two models.
+
+for the RQA models only; standard fine-tuning hyperparameters of BART are used for building the FEEDBACKRERANKER model. We set the batch size to 16. We truncate the questions and passages to 50 and 512 tokens, respectively. The models are trained for 40 epochs. For our hyperparameter search, we used 5 trials, and for the final results the best hyperparameter variant's performance was averaged across 3 different runs. All experiment runs finished within 20 hours.
+
+# E Validation performance
+
+In addition to the poly-encoder, we also explore a bi-encoder and find that its performance is consistently worse. Table 9 presents the performance of base QA models with different pretrained Transformer models and encoding methods on the validation set.
+
+| Methods | Australia | US | Canada | UK | WHO | All |
| --- | --- | --- | --- | --- | --- | --- |
| BERT (Bi-encoder) | 44.57 | 64.24 | 81.12 | 50.55 | 81.85 | 64.47 |
| BERT (Poly-encoder) | 47.25 | 65.30 | 81.49 | 48.50 | 81.19 | 64.75 |
| BART (Bi-encoder) | 47.13 | 67.62 | 86.01 | 55.06 | 85.48 | 68.26 |
| BART (Poly-encoder) | 49.17 | 66.98 | 85.75 | 54.27 | 87.46 | 68.73 |
+
+Table 9: The accuracy of different RQA models on the validation set. All of the results are averaged across 3 runs.
+
+| Methods | Australia | US | Canada | UK | WHO | All |
| --- | --- | --- | --- | --- | --- | --- |
| BART RQA model |
| BART RQA model | 49.17 | 66.98 | 85.75 | 54.27 | 87.46 | 68.73 |
| + FEEDBACKRERANKER with explanation-based rating | 51.34 | 69.09 | 84.20 | 56.87 | 87.79 | 69.86 |
| + FEEDBACKRERANKER with rating only | 51.09 | 68.57 | 86.84 | 58.21 | 88.78 | 70.70 |
| BERT RQA model |
| BERT RQA model | 47.25 | 65.30 | 81.49 | 48.50 | 81.19 | 64.75 |
| + FEEDBACKRERANKER with explanation-based rating | 51.34 | 70.15 | 83.72 | 53.71 | 84.49 | 68.68 |
| + FEEDBACKRERANKER with rating only | 51.09 | 68.46 | 84.18 | 55.69 | 85.15 | 68.91 |
+
+Table 10: Accuracy on the validation set of pipeline models using different variants of the feedback-trained re-ranker. All of the results are averaged across 3 runs.
\ No newline at end of file
diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/images.zip b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f04b8c3ffa6a620e0dba51f885d518dc02322d92
--- /dev/null
+++ b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ba35c84defadb21c81b75d388045ab8374babb0a3f32daa663798716c99e9dc
+size 500073
diff --git a/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/layout.json b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2882a2912ad6675c819c527f854dbeabb1add3ef
--- /dev/null
+++ b/usinginteractivefeedbacktoimprovetheaccuracyandexplainabilityofquestionansweringsystemspostdeployment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b785562fbadc7aa81e73218ba0e00ce0f4b1ff9be0fcaaa167c8874dedcef72c
+size 333017
diff --git a/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/f0fe5b64-f3f2-4c57-9ee9-f8b2e0d259aa_content_list.json b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/f0fe5b64-f3f2-4c57-9ee9-f8b2e0d259aa_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..50e51a03462425233555db8926cc77db27cd7775
--- /dev/null
+++ b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/f0fe5b64-f3f2-4c57-9ee9-f8b2e0d259aa_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f531ad44ad7e400e52c13b63cfbd2fa8b306c95a466383e826d72b0b74a1aa4c
+size 68351
diff --git a/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/f0fe5b64-f3f2-4c57-9ee9-f8b2e0d259aa_model.json b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/f0fe5b64-f3f2-4c57-9ee9-f8b2e0d259aa_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8659f59af5334d4f77a6b2aeb734d72d3fa2327f
--- /dev/null
+++ b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/f0fe5b64-f3f2-4c57-9ee9-f8b2e0d259aa_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9932df6245dc4fdb057f83f1d726d7ac4ff0ad42a9eb7ac5d2d8a3f89bc5188
+size 85437
diff --git a/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/f0fe5b64-f3f2-4c57-9ee9-f8b2e0d259aa_origin.pdf b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/f0fe5b64-f3f2-4c57-9ee9-f8b2e0d259aa_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fa2ee7c781f056fed08162ce794a3fa0f7fddb94
--- /dev/null
+++ b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/f0fe5b64-f3f2-4c57-9ee9-f8b2e0d259aa_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a2bdce2669056068b2bdf5c09cf4a8e1f222597888d7c1b20c74a6da4147a713
+size 3842090
diff --git a/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/full.md b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f05d3ca7e8f218cac217e29e9ca6a784d9bece86
--- /dev/null
+++ b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/full.md
@@ -0,0 +1,269 @@
+# Using NLP to quantify the environmental cost and diversity benefits of in-person NLP conferences
+
+Piotr Przybyla
+
+Institute of Computer Science,
+
+Polish Academy of Sciences
+
+Warsaw, Poland
+
+piotr.przybyla@ipipan.waw.pl
+
+Matthew Shardlow
+
+Department of Computing and Mathematics,
+
+Manchester Metropolitan University
+
+Manchester, UK
+
+m.shardlow@mmu.ac.uk
+
+# Abstract
+
+The environmental costs of research are progressively important to the NLP community and their associated challenges are increasingly debated. In this work, we analyse the carbon cost (measured as CO2-equivalent) associated with journeys made by researchers attending in-person NLP conferences. We obtain the necessary data by text-mining all publications from the ACL anthology available at the time of the study $(n = 60,572)$ and extracting information about an author's affiliation, including their address. This allows us to estimate the corresponding carbon cost and compare it to previously known values for training large models. Further, we look at the benefits of in-person conferences, demonstrating that they can increase participation diversity by encouraging attendance from the region surrounding the host country. We show how the trade-off between carbon cost and diversity of an event depends on its location and type. Our aim is to foster further discussion on the best way to address the joint issue of emissions and diversity in the future.
+
+# 1 Introduction
+
+Figure 1 shows the increase in travel to the ACL annual meeting over the past 40 years. Whereas conferences used to be the privilege of a few academics, they are now attended by participants from companies, research institutes and universities across the world. This comes with an increase in the total volume of work published, and with it an increase in the carbon emissions attributed to travelling to in-person events.
+
+In this study we seek to quantify the impact of conferences that are increasingly diverse in terms of participation and location (undoubtedly beneficial) on the increased carbon emissions (undoubtedly detrimental). We base our analysis on publications spanning 55 years (1965-2020), taken from
+
+
+(a) ACL 1979: La Jolla, California, USA
+
+
+(b) ACL 1999: College Park, Maryland, USA
+
+
+(c) ACL 2019: Florence, Italy
+Figure 1: Visualisation of estimated journeys to the ACL annual meetings over 40 years. Maps for all major NLP conferences are included in the supplementary material.
+
+the ACL Anthology1. We use NLP tools to parse each document and identify the locations of the conference venues and lead researcher's institution. We answer the following questions:
+
+1. Where is NLP research performed and presented?
+2. What are the environmental costs?
+3. Do conferences increase local participation?
+4. Which events attract a diverse audience and how do they compare to non-physical venues?
+
+To the best of our knowledge, our work is the first to quantitatively explore the relationship between the location of conferences in a research field and the diversity of participation. We make our dataset and code available2 to enable further discussion on the costs and benefits of in-person meetings.
+
+# 2 Related work
+
+Environmental cost of travel and conferences: It is a well established fact that conferences come with a climate cost (Ciers et al., 2019), which has recently become greater (Pierce et al., 2020). This has led to calls to reduce or cancel the physical academic conference calendar (Johnson et al., 2020; Reay, 2003; Achakulvisut et al., 2020; Jackle, 2019; Dwyer, 2013).
+
+The scientific discourse has included measuring and quantifying the emissions costs of conferences and the travel associated with them, from specific events (Astudillo and AzariJafari, 2018), to conference series (Neugebauer et al., 2020), or indeed looking at the total emissions of an entire discipline (Waring et al., 2014; Poom et al., 2017).
+
+Travel is not the only cost associated with academic conferences, or research in general, with one PhD accounting for 21.5 tonnes of CO2-equivalent emissions (Achten et al., 2013), of which $35\%$ was attributed to conferences. Recent work shows that in France a typical research lab might attribute $64\%$ of its carbon output to conference travel, with the remaining $36\%$ made up mostly of commuting and energy usage (Mariette et al., 2021).
+
+In response to the pandemic, many conferences have moved temporarily online. A meta-analysis of these online conferences showed that a major result of online delivery was a reduction in the registration fee, promoting access (Mubin et al., 2021). Further, online delivery may allay fears of high travel costs (Raby and Madden, 2021) — as is often the case with top-tier conferences. The main barrier to online participation is a perception of reduced social (rather than academic) opportunities (Raby and Madden, 2021), although this may be overcome through facilitating interpersonal meetings, and social discussion (Achakulvisut et al., 2020). It should be noted that whilst travel is unnecessary in virtual conferences, there is still a quantifiable carbon cost due to the infrastructure required (Ong et al., 2012, 2014; Faber, 2021).
+
+Academic conferences are not without their benefits, and a clear advantage of in-person conferences over online ones is the perceived value of social interaction (Raby and Madden, 2021). This argument is strengthened by the observation that citation rates are higher for work presented across longer distances (Chalvatzis and Ormosi, 2020). An important benefit of conferences is providing an opportunity for researchers to interact with peers from diverse cultural, linguistic, demographic and academic backgrounds. This goal is also recognised within the NLP field.$^3$
+
+The high climate cost of academic conferences has led to policy considerations (Bossdorf et al., 2010), including the adoption of carbon offsetting programmes for participants (Holden et al., 2017), wise choices of locations to reduce the average journey distance (Wenner et al., 2019) and mandated reporting of climate costs for conferences (Cugniere et al., 2020). Moving towards the adoption of any of these policies would help to begin the mitigation of the environmental impact of academic travel. Similar discussion has already started in computer science conference communities, e.g. ACM (Pierce et al., 2020).
+
+Environmental cost of ML and NLP research: In the field of ML and NLP, there has been an increasing trend towards openness in reporting of the emissions associated with AI research (Schwartz et al., 2020), especially that using deep learning (Henderson et al., 2020). Work has also been undertaken to estimate the overall cost of training machine learning (ML) models — taking into account not only the training time, but also the age of the hardware and server location (García-Martínez et al., 2019; Lacoste et al., 2019).
+
+There have been a few efforts within our own field of NLP to better understand the impact that modern techniques are having on the environment and specifically to quantify the emissions costs of training ever larger neural networks (Strubell et al., 2019). Benchmarking of NLP systems in terms of their energy consumption is a viable way to better understand the carbon cost of training such a model (Zhou et al., 2020). Taking into account factors such as resource utilisation can give a more accurate picture of the energy consumption of NLP models (Cao et al., 2020).
+
+A recent trend in NLP is to create low-resource models that provide sufficient performance. For example, light transformer models are quicker to train and consequently have a lower carbon footprint (Sanh et al., 2019; Anderson and Gomez-Rodriguez, 2020). Transfer learning presents an opportunity for massive carbon savings. If a model can be trained that requires only minimal retraining for various other subtasks, then this prevents further carbon expenditure down the line. Maximising model reusability is a good strategy for reducing carbon emissions (Kocmi and Bojar, 2020).
+
+# 3 Methods
+
+To be able to answer the questions that motivate this work, we need certain data about the research process, in particular regarding the location of researchers' affiliations and conference venues. Since no such single source of information existed, we decided to combine publicly available resources to create a new dataset containing the information we required. The process we used to create this resource is detailed below:
+
+Data structure: A publication is an independent piece of research presented to the community as a journal article or a presentation at a conference. For the purposes of this work, each publication is described by: (1) an identifier; (2) the first author's affiliation (identified by the domain name in their e-mail address); (3) the location of the first author's affiliation and (4) an event, to which the publication is assigned.
+
+An event could be a track at a conference, a co-located meeting (e.g. a workshop) or a volume of a journal. It is described by: (1) an identifier; (2) a name and (3) a location, i.e. a physical place name in the case of in-person events or a special tag (@) in the case of journals and virtual conferences.
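+
+To make the schema concrete, the following is a minimal sketch of how such records could be represented in Python; the class and field names are illustrative only and do not correspond to an actual released schema.
+
+```python
+from dataclasses import dataclass
+from typing import Optional
+
+@dataclass
+class Event:
+    identifier: str
+    name: str
+    location: str  # place name, or the special tag "@" for journals and virtual conferences
+
+@dataclass
+class Publication:
+    identifier: str
+    affiliation: Optional[str]  # e-mail domain of the first author
+    location: Optional[str]     # place name of the first author's affiliation
+    event: Event                # event to which the publication is assigned
+```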
+
+Note that in this model, we always take into account the first author, while in fact one person may attend a conference to present several publications (resulting in fewer trips) or more than one author may attend to present a single publication (resulting in more trips). Resolving this issue would require conference registration data, which are not publicly available. Further, the address of the primary affiliation does not necessarily match the researcher's starting location when travelling to a conference.
+
+Text mining: In the process of gathering the data we rely on the XML version of the ACL Anthology available on GitHub$^4$ (we used the version from 17.02.2021). From there we obtain the publications and their associated events, with titles and locations, from the corresponding XML elements.
+
+The crucial information missing from the XML structure is the author's affiliation and their location. This information is mined from the publication text: we download the publication PDF and use PyMuPDF$^5$ to convert it to plain text. Next, we extract the first e-mail domain occurring in the text through regular expressions (allowing for the curly brackets notation for account usernames) and treat it as the affiliation identifier. Then, we use spaCy (Honnibal et al., 2020) to process the text with the en_core_web_trf pipeline, based on RoBERTa (Liu et al., 2019). Among the text spans recognised by the named entity recogniser as belonging to the category GPE (geopolitical entity), the one occurring first after the first author's last name is considered their location. Entities occurring close to each other are grouped, so that multipart names, such as Cambridge, Massachusetts (USA), are located correctly.
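+
+A rough sketch of this mining step is shown below, assuming a plain-text version of the PDF is already available. The regular expression, the truncation length and the surname-based heuristic are simplified stand-ins, not the exact implementation.
+
+```python
+import re
+import spacy
+
+# Illustrative pattern: matches "user@example.org" as well as the curly-bracket
+# form "{alice,bob}@example.org"; the exact expression used here may differ.
+EMAIL_DOMAIN = re.compile(r"(?:\{[^}]*\}|[\w.+-]+)@([\w.-]+\.\w+)")
+
+nlp = spacy.load("en_core_web_trf")  # RoBERTa-based pipeline with NER
+
+def affiliation_and_location(text, first_author_surname):
+    """Return (e-mail domain, first GPE entity after the author's surname)."""
+    match = EMAIL_DOMAIN.search(text)
+    domain = match.group(1).lower() if match else None
+
+    header = text[:5000]               # the header region is usually enough
+    doc = nlp(header)
+    surname_pos = header.find(first_author_surname)  # -1 if not found
+    gpes = [ent for ent in doc.ents
+            if ent.label_ == "GPE" and ent.start_char > surname_pos]
+    location = gpes[0].text if gpes else None
+    return domain, location
+```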
+
+Finally, to interpret the location names for affiliations and events, we use the Geocoding API of the Google Maps API. This allows us to obtain geographical coordinates (longitude and latitude) and country name for each location. We obtain continent information using the pycountry-convert Python package.
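+
+In code, the lookup could look roughly like the following; the googlemaps client and pycountry-convert calls are the libraries' public APIs, but the API key is a placeholder and error handling is omitted.
+
+```python
+import googlemaps
+import pycountry_convert as pc
+
+gmaps = googlemaps.Client(key="YOUR_API_KEY")  # placeholder credentials
+
+def resolve_place(place_name):
+    """Return (latitude, longitude, country code, continent name) for a place name."""
+    result = gmaps.geocode(place_name)[0]
+    lat = result["geometry"]["location"]["lat"]
+    lng = result["geometry"]["location"]["lng"]
+    country = next(c["short_name"] for c in result["address_components"]
+                   if "country" in c["types"])
+    continent = pc.convert_continent_code_to_continent_name(
+        pc.country_alpha2_to_continent_code(country))
+    return lat, lng, country, continent
+```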
+
+Missing data: The process described above may leave some of the data fields empty. This may be caused by information being omitted in the XML (year or location for events) or PDF files (affiliation address not provided) or imperfect named entity recognition.
+
+In the case of events, we fill the missing data based on co-located events and manual investigation. We also check which of the conferences in 2020 took place as in-person events in the locations advertised. In the case of affiliations, we look at all other publications with the same affiliation and identify the most common location. We assume this location may also be used for the publication in question. Note that some of the PDF files of the oldest publications are based on scanned typescripts. Extracting information from these would require OCR techniques, but this was not attempted within the described work, resulting in a lower coverage of the earliest publications.
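+
+A sketch of the affiliation-location fill, reusing the Publication records sketched earlier, is given below; it is one straightforward way to implement the "most common location per domain" rule described above.
+
+```python
+from collections import Counter
+
+def fill_missing_locations(publications):
+    """Assign the most common known location of each e-mail domain to
+    publications from that domain whose location could not be extracted."""
+    seen = {}
+    for pub in publications:
+        if pub.affiliation and pub.location:
+            seen.setdefault(pub.affiliation, Counter())[pub.location] += 1
+    for pub in publications:
+        if pub.affiliation and not pub.location and pub.affiliation in seen:
+            pub.location = seen[pub.affiliation].most_common(1)[0][0]
+```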
+
+Diversity computation: To quantify the participation diversity, we use the Gini coefficient $G$.
+
+
+Figure 2: Distribution of NLP publications between affiliation locations (countries) in each year with the diversity index (white line, right axis).
+
+
+Figure 3: Distribution of NLP publications between event locations (countries, light grey=non-physical venues) with the diversity index (white line, right axis).
+
+While it was originally proposed for assessing income inequality (Gini, 1912), it is widely used as a diversity measure, e.g. of ecosystems (Lexerød and Eid, 2006), research databases (Weidlich and Filippov, 2016) or citation patterns (Leydesdorff et al., 2019). Since $G$ measures concentration, we define the diversity coefficient as $D = 1.0 - G$. $D$ takes values between 0.0 (least uniform distribution, i.e. all conferences happening in the same country) and 1.0 (perfectly uniform distribution, i.e. each country hosting the same number of events).
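+
+Both coefficients can be computed from a vector of per-country counts; the sketch below uses a standard formulation of the Gini coefficient and is not the exact code used in this work.
+
+```python
+import numpy as np
+
+def gini(counts):
+    """Gini coefficient of a vector of non-negative counts (0 = perfectly uniform)."""
+    x = np.sort(np.asarray(counts, dtype=float))
+    n = len(x)
+    cumulative = np.cumsum(x)
+    return (n + 1 - 2 * (cumulative / cumulative[-1]).sum()) / n
+
+def diversity(counts):
+    """Diversity coefficient D = 1 - G."""
+    return 1.0 - gini(counts)
+
+# e.g. publications per affiliation country in one year:
+print(diversity([120, 80, 40, 10, 5]))
+```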
+
+# 4 Results
+
+The process described above results in a dataset of 60,572 publications associated with 1,991 events. In the following subsections we analyse them to answer some of the important questions about the costs and benefits of the NLP conference system.
+
+Where is NLP research done? Regarding affiliations (e-mail domains), we see 5,501 different values in our dataset. Unsurprisingly for literature dating back to 1965, no domain could be found in a significant portion $(22\%)$ of the publications. For the known affiliations, the research output is unequally distributed between them, with the top 207 domains $(3.76\%)$ responsible for $50\%$ of the publications. Our diversity index $D$ takes the value 0.2303.
+
+Regarding addresses, they are associated with 135 countries. Following the refining procedure described in the previous section, only $0.8\%$ unknown values remain. The concentration here is even larger than in the case of affiliations: half of the output is generated by just 3 countries (US, China and Germany) and the $D$ coefficient equals 0.1087, indicating an even lower diversity among the countries publishing in NLP venues.
+
+The variation of these contributions across years is shown in Figure 2. Coloured bars show the fraction of publications from a given year associated with each country, sorted by their global contribution (US=blue, China=orange, Germany=gold, UK=green, Japan=grey, France=light blue). Additionally, we show the diversity coefficient for the years (white line, right axis). We can see that the diversity was rising through most of the considered period, but since 2013 the trend has reversed.
+
+Where is NLP research presented? In total, the 1,991 events were held in 48 different countries. The distribution of publications presented in each country is more uniform than that between affiliation countries, with a diversity index of 0.3838.
+
+Figure 3 shows how this distribution changed across the years. The bars correspond to the number of papers presented in each country in a given year, with the same colour coding as in Figure 2. We can see that the distribution changes drastically every year due to major conferences moving around the world. As before, the rising $D$ coefficient indicates increasing diversity. Moreover, while the number of articles presented in the most common country (US) was consistently high throughout the studied period, its relative contribution to the overall publication volume was falling for many years. Similarly to the previous plot, a new trend of falling diversity is visible from 2015. Finally, we can observe the changing role of non-physical venues (light grey bars): the share generated by online journals falls over the years, followed by the sudden change in 2020, when $96\%$ of work was presented online.
+
+
+Figure 4: Average emissions per publication at local, regional and international conferences between 1965 and 2019
+
+
+Figure 5: The average emission per publication (over 5-year periods) and total emission (yearly) between 1970 and 2019.
+
+What are the environmental costs? Our dataset includes 51,116 publications for which both the location of the research centre and the conference venue are known. The average journey distance was $4,988\mathrm{km}$ and the longest distance travelled was $19,888\mathrm{km}$, from New Zealand to Spain.$^7$
+
+To convert from the number of kilometres travelled (to the conference and back) to the carbon emissions costs, we turned to data published by the UK Government to enable companies to report their emissions. This resource provided us with 5 years of historic emissions data (2016-2020) for short-haul and long-haul flights, giving the CO2 per passenger per kilometre for each given year. We trained a linear regression model to estimate the carbon cost of air travel beyond this time span. Gains in flight efficiency have led to the reduction of carbon emissions, resulting in higher costs for historic journeys. We used values for CO2-equivalent with Radiative Forcing, which give an estimate of the overall climate change impact of travel. We considered international flights as those longer than $3700\mathrm{km}$, in accordance with the guidelines associated with the data source. Journeys under this were considered short haul, except for those less than $500\mathrm{km}$, where we assumed that another lower-carbon means of travel would be more likely (in our case we used figures from the same data for train journeys). The data used to create the univariate linear regressions for predicting historic emissions are included in Appendix A.
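+
+The following sketch shows one way to combine the distance bands with factors extrapolated by linear regression, using the per-passenger-kilometre values from Table 2. Whether the thresholds are applied to one-way or round-trip distances, and the per-mode regression, are assumptions made here for illustration.
+
+```python
+import numpy as np
+
+# kg CO2-equivalent per passenger-km (Table 2), ordered 2016..2020.
+YEARS      = np.array([2016, 2017, 2018, 2019, 2020])
+LONG_HAUL  = np.array([0.10035, 0.10340, 0.11131, 0.10244, 0.09994])
+SHORT_HAUL = np.array([0.08821, 0.08432, 0.08503, 0.08291, 0.08145])
+TRAIN      = np.array([np.nan, 0.04636, 0.04383, 0.04077, 0.03659])
+
+def factor(series, year):
+    """Per-km factor for a year; a univariate linear fit extrapolates outside 2016-2020."""
+    known = ~np.isnan(series)
+    slope, intercept = np.polyfit(YEARS[known], series[known], 1)
+    return slope * year + intercept
+
+def trip_emissions_kg(one_way_km, year):
+    """Round-trip emissions for one attendee, banded by journey length."""
+    total_km = 2 * one_way_km
+    if one_way_km < 500:
+        return total_km * factor(TRAIN, year)      # assume rail for short journeys
+    if one_way_km > 3700:
+        return total_km * factor(LONG_HAUL, year)  # long-haul / international flight
+    return total_km * factor(SHORT_HAUL, year)     # short-haul flight
+```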
+
+Each event could be simply represented through its total emissions, but there are several issues with this approach. Firstly, the size of a conference (number of attendees) dictates its overall emissions cost. Therefore, we use the mean carbon cost of a publication at each event instead. Secondly, we compared events according to their geographic reach. International conferences are those that can be hosted anywhere in the world. Regional conferences are those that are restricted to a specific region (we included LREC, which typically happens around the Mediterranean) and local conferences are those that happen in a single country (or a very narrow geographical region). The conferences included in each band are shown in Appendix B.
+
+Figure 4 shows that international and regional conferences are the main emitters of greenhouse gases in the NLP field. Local conferences emit around a quarter of the CO2-equivalent (per publication) compared to international or regional conferences. Whilst regional conferences have traditionally tracked below the average emissions of international conferences, the gap between them is narrowing, as these conferences are increasingly treated as international events.
+
+Figure 5 shows the discrepancy between the total CO2 emissions (in red, right axis) and the average CO2 emissions (in blue, left axis) over the same period across our entire dataset. We can see that whilst the average emissions fluctuate, they are generally stable around 0.8-1.2 tonnes of CO2 emitted per publication. This stability is possibly due to the fact that the increasing distances travelled are offset by increasing flight efficiency. In contrast, the total amount of CO2-equivalent emitted by conferences has risen exponentially, hitting 1 million kg in 1998, 2 million kg in 2006, 3 million kg in 2016 and then jumping to over 6 million kg in 2018.
+
+6,000 Tonnes of CO2-Equivalent equates to...
+
+| 1,304 | cars driven for a year |
| 722 | homes powered for a year |
| 13,892 | barrels of oil (energy production) |
| 99,212 | new trees planted (CO2 capture) |
| 339,172 | NLP pipelines trained |
| 168 | NLP pipelines optimised |
| 68,894 | Generic Transformers trained |
| 22 | Generic Transformers optimised |
| 71 | Instances of GPT-3 trained |
+
+Table 1: Comparisons of recent annual conference emissions to familiar scenarios both within and outside of NLP.
+
+
+Figure 6: Comparison of the number of trips of a given distance (X axis, in km) made in two scenarios: observed in the data and expected in the case of a random choice of events.
+
+To put the value of around 6,000 tonnes of CO2-equivalent (total emissions of NLP conferences in 2018) into context, we can compare it to emissions for other activities. These are shown in Table 1 and were calculated using data from the website of the US Environmental Protection Agency$^9$. Data estimating the emissions generated by training NLP models (Strubell et al., 2019; Lasse et al., 2020) are also included.
+
+What are the diversity benefits? We hypothesise that series of events occurring in different locations have the benefit of encouraging local researchers to attend, increasing the diversity of participation. In this section we seek to quantify this effect.
+
+Firstly, we verify this hypothesis by comparing the distances researchers travelled for conferences (blue bars) to the distances they would need to travel if they were choosing venues randomly (orange bars) in Figure 6. The results clearly confirm our assumptions: the number of observed short trips, especially of a few hundred kilometres, is much higher than expected in a random choice scenario. The number of long trips, especially around 10,000 km, is greatly reduced. Using the data from the previous section, we can also estimate that thanks to these choices, the carbon cost of all travel was $27.21\%$ lower (a total saving of 19,104 tons of CO2 according to emission rates of 2020).
+
+Next, we can ask whether the priority given to local conferences depends on what country a researcher comes from. To that end, we compute the relative travel length by dividing the observed mean travel distance by the travel distance in a 'random choice' scenario. Figure 7 shows all countries with at least 15 publications according to their relative travel length and GDP per capita in 2018 (Bolt and van Zanden, 2020). We can see that the longest travels are made from countries in the Middle East, most of them considerably wealthy. Most countries that prefer nearby conferences have relatively low income, e.g. Serbia, the Philippines or Bulgaria.
+
+Knowing that each event generates diversity by encouraging researchers from the nearby countries to participate, we can now measure how well this effect works for different conferences. It might be expected that achieving high diversity comes at a cost of longer journeys. We verify this by plotting the diversity of in-person events against travel distance (average per publication) in Figure 8. Most events are indeed arranged along an upward trend, but some do not follow it. For example, we can see that EACL conferences deliver more diversity than others for the same travel distance. Some ACL meetings$^{10}$, on the contrary, are associated with very long travel and not so much diversity. LREC events are clear outliers here, since they have by far the highest diversity for low distances. The dashed line corresponding to the diversity index of journals indicates that the diversity observed in many in-person events is much higher. Note that the online conferences are not included in this analysis, since their format was often unclear to authors at the moment of submission.
+
+In Figure 9 we compare the mean participation diversity of events organised in a given continent across the years. Consistently with Figure 2, we see an increasing diversity throughout most of the considered period for most continents.
+
+
+Figure 7: Relative travel length (mean distance of travels made divided by mean distance of travels expected in random venue choice) for countries with at least 15 publications with respect to their continent and GDP per capita.
+
+
+Figure 8: NLP events plotted with respect to the diversity of participation (Y axis), mean travel distance (X axis) and number of publications (disc size).
+
+Europe is the location of very diverse events, but the Asian ones appear to be catching up. The journals have seen relatively slow growth and remain much less diverse than in-person events, except for South America or Australia and Oceania, where too few conferences took place for our analysis.
+
+# 5 Discussion
+
+Our work covers the carbon cost and diversity gain associated with conferences in the ACL Anthology. We consider that it is timely to perform this analysis, given the shutdown in physical meetings brought on by the global COVID-19 pandemic and have focussed our analysis on conferences from before the pandemic began.
+
+We have made a number of assumptions in our work.
+
+
+Figure 9: Diversity of events held on each continent between 1965 and 2019. '@' refers to journals. Africa is not represented due to the lack of events there in the ACL Anthology.
+
+Most notably, we have assumed that only first authors travel from the location of their institution to the location of the conference (and back), without detour, via the easiest means of transport available to them. Our assumptions are consistent between events and, as such, our methodology gives a useful tool for comparing potential climate impact in the field of NLP and beyond.
+
+Figure 2 shows that whilst the diversity index grew consistently from 1970 to 2014, it has dropped since then, with 2020 having the lowest diversity index since 2008. We cannot give an explanation for the drop over this period without speculating; however, tracking this index will allow us to measure the change in diversity over the coming years.
+
+Whereas previous work has claimed that non-physical venues promote diversity (Raby and Madden, 2021), our research broadens the picture, with Figure 8 demonstrating that whilst some events are below the mean diversity index of online journals, many are above; in particular, LREC and RANLP attract an audience from many countries. We chose not to make a direct comparison between in-person events and the pandemic-era online conferences of 2020 and 2021, since some events of the latter type were (at the point of submission) advertised as physical meetings, while others were in a hybrid format. However, extending our analysis to purely online and hybrid events is a clear direction for future work.
+
+We were also able to quantify the carbon cost of travelling to physical events in terms of CO2-equivalent. Whilst this has unsurprisingly grown with the growth of the NLP field, the average carbon cost per paper has remained stable, indicating that gains in efficiency from better modes of transport are offset by an increased travel distance. The total emissions in recent years have been as high as 6,000 tonnes of CO2-equivalent. It must also be noted that other activities of NLP research contribute to the total carbon cost generated by the NLP field. For example, the carbon cost of all travel in a single year of NLP research equates to about 22 fully optimised transformer models trained from scratch (see Table 1). We must therefore address the carbon cost of the research itself, as well as the cost of flying to conferences.
+
+Measuring the diversity impact contributed by a conference happening in a certain place is not possible directly, since we cannot know who would participate if the event took place elsewhere. However, our data indicate a preference for local events, which is the highest in low-income countries. Holding conferences across the globe allows researchers from diverse locations to attend an event without flying as far as in a scenario where all conferences were located in one region (as was the case in the early days of the ACL conferences). However, there is a cautionary tale to tell in our data relating to the year 2018. In Figure 5, a large spike on the right hand side corresponds to 2018, when a total of over 6 thousand tonnes of CO2-equivalent was attributed to conference travel. In this year the ACL annual meeting was held in Melbourne, Australia and LREC was held in Miyazaki, Japan. The effect of this is clear: researchers from Europe and North America, who usually attend these conferences, needed to travel further, increasing the emissions. Holding conferences in different locations will only lead to increased diversity if these events are advertised to and attended by a majority of people from the region they are held in.
+
+Our definition of diversity index only takes into account the countries from which authors have attended, and does not measure other important factors of diversity (gender, race, economic status, native language, etc.). Whilst some of this information may be discernible from our data, most of it would only be possible to discover by author disclosure, which was not possible in our context. Reporting on the country-based diversity allows us to better understand the diversity of NLP research across the last 50 years.
+
+Our work is designed as a focussed study on the ACL Anthology, and a similar analysis of a broader scope (e.g., all computer science, all science publications) would yield results allowing comparisons between disciplines. We were able to perform this analysis due to the provision of the ACL Anthology, which only covers papers in our field. Whilst other resources indexing AI and wider computer science, or even generic scientific literature, do exist (e.g., DBLP, Google Scholar, repositories such as OpenAire, event websites etc.), these each have their own limitations, such as not including PDF links (only DOIs which point to journal websites), lack of a public API or covering only a subset of the literature. Event websites are a fruitful source for data mining, but each event has its own bespoke format and extracting data this way is slow.
+
+We have attempted to give a view of the data that allows policy makers to make informed decisions on where the next NLP conference should be held. We have also made our data available to facilitate future research. Policy makers may wish to consider the high emissions impact of locating a conference in an area far away from the typical attendance base, and also weigh this against the potential diversity gain of locating a conference in a lower-wealth area. We expect that conference organisers will make different decisions based on the relative importance of the above factors to their communities.
+
+# Acknowledgements
+
+This work was supported by the Polish National Agency for Academic Exchange through a Polish Returns grant number PPN/PPO/2018/1/00006.
+
+# References
+
+Titipat Achakulvisut, Tulakan Ruangrong, Isil Bilgin, Sofie Van Den Bossche, Brad Wyble, Dan FM Goodman, and Konrad P Kording. 2020. Point of view: Improving on legacy conferences by moving online. *Elife*, 9.
+Wouter MJ Achten, Joana Almeida, and Bart Muys. 2013. Carbon footprint of science: More than flying. Ecological indicators, 34:352-355.
+Mark Anderson and Carlos Gómez-Rodríguez. 2020. Distilling neural networks for greener and faster dependency parsing. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 2-13.
+Miguel F Astudillo and Hessam AzariJafari. 2018. Estimating the global warming emissions of the LCA XVII conference: connecting flights matter. The International Journal of Life Cycle Assessment, 23(7):1512-1516.
+Jutta Bolt and Jan Luiten van Zanden. 2020. Maddison style estimates of the evolution of the world economy. A new 2020 update. Technical report, Maddison Project.
+Oliver Bossdorf, Madalin Parepa, and Markus Fischer. 2010. Climate-neutral ecology conferences: just do it! Trends in ecology & evolution, 25(2):61.
+Qingqing Cao, Aruna Balasubramanian, and Niranjan Balasubramanian. 2020. Towards accurate and reliable energy measurement of NLP models. In Proceedings of SustainNLP: Workshop on Simple and Efficient Natural Language Processing, pages 141-148.
+Konstantinos Chalvatzis and Peter L Ormosi. 2020. The carbon impact of flying to economics conferences: is flying more associated with more citations? Journal of Sustainable Tourism, 29(1):40-67.
+
+Joachim Ciers, Aleksandra Mandic, Laszlo Daniel Toth, and Giel Op't Veld. 2019. Carbon footprint of academic air travel: A case study in Switzerland. Sustainability, 11(1):80.
+Laure Cugniere, Diogo Veríssimo, Angeles Branas, and Guy Bigwood. 2020. From call to action: a roadmap to sustainable conferences. SocArXiv.
+James Dwyer. 2013. On flying to ethics conferences: Climate change and moral responsiveness. *IJFAB: International Journal of Feminist Approaches to Bioethics*, 6(1):1-18.
+Grant Faber. 2021. A framework to estimate emissions from virtual conferences. International Journal of Environmental Studies, 78(4):608-623.
+Eva García-Martín, Crefeda Faviola Rodrigues, Graham Riley, and Håkan Grahn. 2019. Estimation of energy consumption in machine learning. Journal of Parallel and Distributed Computing, 134:75-88.
+Corrado Gini. 1912. Variabilità e mutabilità. Rome: Libreria Eredi Virgilio Veschi.
+Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. 2020. Towards the systematic reporting of the energy and carbon footprints of machine learning. Journal of Machine Learning Research, 21(248):1-43.
+Matthew H Holden, Nathalie Butt, Alienor Chauvenet, Michaela Plein, Martin Stringer, and Iadine Chades. 2017. Academic conferences urgently need environmental policies. Nature ecology & evolution, 1(9):1211-1212.
+Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.
+Sebastian Jäckle. 2019. We have to change! The carbon footprint of ECPR general conferences and ways to reduce it. European Political Science, 18(4):630-650.
+Ruth Johnson, Andrada Fiscutean, and Serghei Mangul. 2020. Refining the conference experience for junior scientists in the wake of climate change. arXiv preprint arXiv:2002.12268.
+Tom Kocmi and Ondrej Bojar. 2020. Efficiently reusing old models across languages via transfer learning. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 19-28, Lisboa, Portugal. European Association for Machine Translation.
+Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700.
+
+F. Wolff Anthony Lasse, Benjamin Kanding, and Raghavendra Selvan. 2020. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. In Proceedings of the ICML Workshop on Challenges in Deploying and Monitoring Machine Learning Systems. ICML.
+Nils L. Lexerød and Tron Eid. 2006. An evaluation of different diameter diversity indices based on criteria related to forest management planning. Forest Ecology and Management, 222(1-3):17-28.
+Loet Leydesdorff, Caroline S. Wagner, and Lutz Bornmann. 2019. Interdisciplinarity as diversity in citation patterns among journals: Rao-Stirling diversity, relative variety, and the Gini coefficient. Journal of Informetrics, 13(1):255-269.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
+Jérôme Mariette, Odile Blanchard, Olivier Berné, and Tamara Ben Ari. 2021. An open-source tool to assess the carbon footprint of research. arXiv preprint arXiv:2101.10124.
+Omar Mubin, Fady Alnajjar, Abdullah Shamail, Suleman Shahid, and Simeon Simoff. 2021. The new norm: Computer science conferences respond to COVID-19. Scientometrics, 126(2):1813-1827.
+Sabrina Neugebauer, Maren Bolz, Rose Mankaa, and Marzia Traverso. 2020. How sustainable are sustainability conferences?—comprehensive life cycle assessment of an international conference series in europe. Journal of cleaner production, 242:118516.
+Dennis Ong, Tim Moors, and Vijay Sivaraman. 2012. Complete life-cycle assessment of the energy/co2 costs of videoconferencing vs face-to-face meetings. In 2012 IEEE Online Conference on Green Communications (GreenCom), pages 50-55. IEEE.
+Dennis Ong, Tim Moors, and Vijay Sivaraman. 2014. Comparison of the energy, carbon and time costs of videoconferencing and in-person meetings. Computer communications, 50:86-94.
+Benjamin C Pierce, Michael Hicks, Crista Lopes, and Jens Palsberg. 2020. Conferences in an era of expensive carbon. *Communications of the ACM*, 63(3):35-37.
+Age Poom, Kati Orru, and Rein Ahas. 2017. The carbon footprint of business travel in the knowledge-intensive service sector. Transportation Research Part D: Transport and Environment, 50:292-304.
+Cassandra L Raby and Joan R Madden. 2021. Moving academic conferences online: Aids and barriers to delegate participation. *Ecology and Evolution*, 11(8):3646-3655.
+
+David S Reay. 2003. Virtual solution to carbon cost of conferences. Nature, 424(6946):251-251.
+Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
+Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. Green AI. *Communications of the ACM*, 63(12):54-63.
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650.
+Timothy Waring, Mario Teisl, Eva Manandhar, and Mark Anderson. 2014. On the travel emissions of sustainability science research. *Sustainability*, 6(5):2718-2735.
+Iwona E. Weidlich and Igor V. Filippov. 2016. Using the Gini coefficient to measure the chemical diversity of small-molecule libraries. Journal of Computational Chemistry, 37(22):2091-2097.
+Fabian Wenner, Freke Caset, and Bart De Wit. 2019. Conference locations and sustainability aspirations: Towards an integrative framework? disP-The Planning Review, 55(1):34-51.
+Xiyou Zhou, Zhiyu Chen, Xiaoyong Jin, and William Yang Wang. 2020. Hulk: An energy efficiency benchmark platform for responsible natural language processing. arXiv preprint arXiv:2002.05829.
+
+# A Values used in Calculations of Emissions per Passenger
+
+Table 2 shows the kg of CO2-equivalent per passenger-kilometre used in our calculations to train a univariate linear regression model for historic prediction.
+
+# B Conferences Analysed
+
+To produce Figure 4, we selected specific conferences that we denoted as either local, regional or international. Conferences were selected if they had a specific identifier in the ACL Anthology. The Python regular expressions used to match the identifiers and the categorisation of each conference are provided in Table 3. We also used these identifiers to produce the table of travel maps in the supplementary material.
+
+| Mode of Transport | 2020 | 2019 | 2018 | 2017 | 2016 |
| Long-Haul Flight | 0.09994 | 0.10244 | 0.11131 | 0.1034 | 0.10035 |
| Short-Haul Flight | 0.08145 | 0.08291 | 0.08503 | 0.08432 | 0.08821 |
| Train Journey | 0.03659 | 0.04077 | 0.04383 | 0.04636 | — |
+
+Table 2: Carbon cost (kg of CO2-equivalent per passenger-kilometre) with respect to mode of transport and year.
+
+| Event Name | ACL Anthology Identifiers | Categorisation |
| ACL | r"P\d\d\.d", r"2020\.ac1\.main" | International |
| EMNLP | r"D\d\d\.[123]", r"2020\.emnlp\.main" | International |
| COLING | r"C\d\d\.d", r"2020\.coling\.main" | International |
| CoNLL | r"K\d\d\.d", r"2020\.conl1\.1" | International |
| NAACL | r"N\d\d\.d" | Regional |
| LREC | r"L\d\d\.d", r"2020\.lrec\.1" | Regional |
| EACL | r"E\d\d\.d" | Regional |
| IJCNLP | r"I\d\d\.d", "P15", "D19" | Regional |
| TALN | r"F\d\d\.d", "\\d\d\d\d\.jeptalnrecital...\*” | Local |
| RANLP | r"R\d\d\.d" | Local |
| ALTA | r"U\d\d\.d" | Local |
| PACLIC | r"Y\d\d\.d" | Local |
| ROCLING | r"O\d\d\.d" | Local |
| NoDaLiDa | r"W11\.46", r"W13\.56", r"W15\.18", r"W17\.2\$\", r"W19\.61" | Local |
+
+Table 3: Regular expressions used to match conferences.
\ No newline at end of file
diff --git a/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/images.zip b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6b80e9b46d479ed360d8a7725e968cd01fed5d5e
--- /dev/null
+++ b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21c5d7736d1f5fb845b596546fd90621f610512b3e9c6d88e08b2869c89b5e04
+size 572796
diff --git a/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/layout.json b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a7cb1bb14c7f16f7f5211211a3f627df178606e7
--- /dev/null
+++ b/usingnlptoquantifytheenvironmentalcostanddiversitybenefitsofinpersonnlpconferences/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff19c887bcee27ab9ce2ff5a7b82e7bf1036b1611f87eedbbbc8b7f8bc15d152
+size 301788
diff --git a/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/3eb51916-e682-4bac-b7ed-5d95eb4d446e_content_list.json b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/3eb51916-e682-4bac-b7ed-5d95eb4d446e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1862bbd1c551b4b3792e1844c55137c3c04dac01
--- /dev/null
+++ b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/3eb51916-e682-4bac-b7ed-5d95eb4d446e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8e12690d93a0e2b3e4e8b1d68115f4bf30272b8796c0a5c25b128d368256960
+size 115376
diff --git a/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/3eb51916-e682-4bac-b7ed-5d95eb4d446e_model.json b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/3eb51916-e682-4bac-b7ed-5d95eb4d446e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d990114e3568c274101494a6de17313544bb1ec1
--- /dev/null
+++ b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/3eb51916-e682-4bac-b7ed-5d95eb4d446e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:89df92391aa32efc3a5c5aedebc29aa21e7d82236e1e200101a511b118199c71
+size 138006
diff --git a/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/3eb51916-e682-4bac-b7ed-5d95eb4d446e_origin.pdf b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/3eb51916-e682-4bac-b7ed-5d95eb4d446e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2977556c888a44463d45bd41b75bc51610b8c236
--- /dev/null
+++ b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/3eb51916-e682-4bac-b7ed-5d95eb4d446e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:928569062c5fc6f305a657ee11df889636358a66808ce6a9af84c7694354c1d3
+size 466506
diff --git a/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/full.md b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b09b2ce03ee16dde410d2f8809d99fea5d810f53
--- /dev/null
+++ b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/full.md
@@ -0,0 +1,420 @@
+# Using Pre-Trained Language Models for Producing Counter Narratives Against Hate Speech: a Comparative Study
+
+Serra Sinem Tekiroglu$^{2}$, Helena Bonaldi$^{1,2}$, Margherita Fanton$^{1,2*}$, Marco Guerini$^{2}$
+
+$^{1}$ University of Trento, Italy; $^{2}$ Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento, Italy
+
+tekiroglu@fbk.eu, hbonaldi@fbk.eu, margherita.fanton@ims.uni-stuttgart.de, guerini@fbk.eu
+
+# Abstract
+
+In this work, we present an extensive study on the use of pre-trained language models for the task of automatic Counter Narrative (CN) generation to fight online hate speech in English. We first present a comparative study to determine whether there is a particular Language Model (or class of LMs) and a particular decoding mechanism that are the most appropriate to generate CNs. Findings show that autoregressive models combined with stochastic decodings are the most promising. We then investigate how an LM performs in generating a CN with regard to an unseen target of hate. We find out that a key element for successful 'out of target' experiments is not an overall similarity with the training data but the presence of a specific subset of training data, i.e. a target that shares some commonalities with the test target that can be defined $a$ -priori. We finally introduce the idea of a pipeline based on the addition of an automatic post-editing step to refine generated CNs.
+
+# 1 Introduction
+
+Hate Speech (HS) has found fertile ground in Social Media Platforms. Actions undertaken by such platforms to tackle online hatred consist of identifying possible sources of hate and removing them by means of content deletion, account suspension or shadow-banning. However, these actions are often interpreted and denounced as censorship by the affected users and political groups (Myers West, 2018). For this reason, such restrictions can have the opposite effect of exacerbating the hostility of the haters (Munger, 2017). An alternative strategy, that is looming on the horizon, is based on the use of Counter Narratives. CNs are "all communicative actions aimed at refuting hate speech through thoughtful and cogent reasons, and true and fact-bound arguments" (Schieb and Preuss, 2016). As a de-escalating measure, CNs have been proven to be successful in diminishing hate, while preserving the freedom of speech (Benesch, 2014; Gagliardone et al., 2015). An example of an $<HS, CN>$ pair is shown below:
+
+HS: Women are basically childlike, they remain this way most of their lives. Soft and emotional. It has devastated our once great patriarchal civilizations.
+
+CN: Without softness and emotions there would be just brutality and cruelty. Not all women are soft and emotional and many men have these characteristics. To perpetuate these socially constructed gender profiles maintains norms which oppress anybody.
+
+Based on their effectiveness, CNs have started being employed by NGOs to counter online hate. Since for NGO operators it is impossible to manually write responses to all instances of hate, a line of NLP research has recently emerged, focusing on designing systems to automatically generate CN suggestions (Qian et al., 2019; Tekiroğlu et al., 2020; Fanton et al., 2021; Chung et al., 2021a; Zhu and Bhat, 2021). In this study, our main goal is to compare pre-trained language models (LMs) and decoding mechanisms in order to understand their pros and cons in generating CNs. Thus, we use various automatic metrics and manual evaluations with expert judgments to assess several LMs, representing the main categories of the model architectures, and decoding methods. We further test the robustness of the fine-tuned LMs in generating CNs for an unseen target. Results show that autoregressive models are in general more suited for the task, and while stochastic decoding mechanisms can generate more novel, diverse, and informative outputs, the deterministic decoding is useful in scenarios where more generic and less novel (yet 'safer') CNs are needed. Furthermore, in out-of-target experiments we find that the similarity of targets (e.g. JEWS and MUSLIMS as religious groups) plays a crucial role for the effectiveness of portability to new targets. We finally show a promising research direction of leveraging gold human edits for building an additional automatic post-editing step to correct errors made by LMs during generation. To the best of our knowledge, this is the first study systematically analysing state of the art pre-trained LMs in CN generation.
+
+# 2 Related Work
+
+In this section we first discuss standard approaches to hate countering and studies on CN effectiveness on Social Media Platforms, then the existing CN data collection and generation strategies.
+
+Hate countering. NLP has started addressing the phenomenon of the proliferation of HS by creating datasets for automatic detection (Mathew et al., 2021; Cao et al., 2020; Kumar et al., 2018; Hosseinmardi et al., 2015; Waseem, 2016; Burnap and Williams, 2016). Several surveys provide a review on the existing approaches on the topic (Poletto et al., 2020; Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018), also addressing the ethical challenges of the task (Kiritchenko et al., 2021). Still, automatic detection of HS presents some drawbacks (Vidgen and Derczynski, 2020). First of all, the datasets might include biases, and the models tend to replicate such biases (Binns et al., 2017; Davidson et al., 2019; Sap et al., 2019; Tsvetkov, 2020). Moreover, the end goals for which HS detection is employed are often charged with censorship of the freedom of speech by concerned users (Munger, 2017; Myers West, 2018). In this scenario, NGOs have started employing CNs to counter online hate. CNs have been shown to be effective in reducing linguistic violence (Benesch, 2014; Gagliardone et al., 2015; Schieb and Preuss, 2016; Silverman et al., 2016; Mathew et al., 2019); moreover, even if they might not influence the view of extremists, they are still effective in presenting alternative and non-hateful viewpoints to bystanders (Allison and Bussey, 2016; Anderson et al., 2014).
+
+CN data collection. The existing studies for collecting CN datasets employ four main approaches. Crawling consists in automatically scraping websites, starting from an HS content and searching for possible CNs among the responses (Mathew et al., 2018, 2019). With crowdsourcing CNs are
+
+written by non-expert paid workers as responses to provided hate content (Qian et al., 2019). Nichesourcing relies on a niche group of experts for data collection (De Boer et al., 2012), and it was employed by Chung et al. (2019) for CN collection using NGO's operators. Hybrid approaches use a combination of LMs and humans to collect data (Wallace et al., 2019; Dinan et al., 2019; Vidgen et al., 2020). Studies on CN collection are presented in more detail by Tekiroglu et al. (2020); Fanton et al. (2021).
+
+CN generation. Neural approaches to automatically generate CNs are beginning to be investigated. Fanton et al. (2021); Tekiroğlu et al. (2020); Qian et al. (2019) employ a mix of automatic and human intervention to generate CNs. Zhu and Bhat (2021) propose an entirely automated pipeline of candidate CN generation and filtering. Other lines of work include CN generation for under-resourced languages such as for Italian (Chung et al., 2020), and the generation of knowledge-bound CNs, which allows the production of CNs based on grounded and up-to-date facts and plausible arguments, avoiding the hallucination phenomena (Chung et al., 2021a). Instead, in our work we take a more foundational perspective, which is relevant for all the LM-based pipelines described above. Therefore, we compare and assess various state of the art pre-trained LMs in an end-to-end setting, which is developed as a downstream task for CN generation.
+
+# 3 Methodology
+
+In this section, we present the CN dataset, the language models, and the decoding mechanisms employed for our experiments.
+
+# 3.1 Dataset for fine-tuning
+
+For this study we rely on the dataset proposed by Fanton et al. (2021), which is the only available dataset that grants both the target diversity and the CN quality we aim for. The dataset was collected with a human-in-the-loop approach, by employing an autoregressive LM (GPT-2) paired with three expert human reviewers. It features 5003 $<HS, CN>$ pairs, covering several targets of hate including DISABLED, JEWS, LGBT+, MIGRANTS, MUSLIMS, POC, WOMEN. The residual categories are collapsed to the label OTHER. We partitioned the dataset into training, validation, and test sets with the ratio $8:1:1$ (i.e. 4003, 500 and 500 pairs), ensuring that all sets share the same target distribution, and no repetition of HS across the sets is allowed.
+
+# 3.2 Models
+
+We experiment with 5 Transformer based LMs (Vaswani et al., 2017) representing the main categories of the model mechanisms: autoregressive, autoencoder, and seq2seq.
+
+BERT. The Bidirectional Encoder Representations from Transformers was introduced by Devlin et al. (2019). It is a bidirectional autoencoder that can be adapted to text generation (Wang and Cho, 2019).
+
+GPT-2. The Generative Pre-trained Transformer 2 is an autoregressive model built for text generation (Radford et al., 2019).
+
+DiaLoGPT. The Dialogue Generative Pretrained Transformer is the extension of GPT-2 specifically created for conversational response generation (Zhang et al., 2020).
+
+BART. BART is a denoising autoencoder for pretraining seq2seq models (Lewis et al., 2020). The encoder-decoder architecture of BART is composed of a bidirectional encoder and an autoregressive decoder.
+
+T5. The Text-to-Text Transfer Transformer proposed by Raffel et al. (2020) is a seq2seq model with an encoder-decoder Transformer architecture.
+
+While all the other models could be fine-tuned directly for the generation task, for BERT we warm-started an encoder-decoder model using BERT checkpoints, similar to the BERT2BERT model defined by Rothe et al. (2020). The fine-tuning details and hyperparameter settings can be found in Appendix A.1.
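+
+As a concrete illustration, warm-starting an encoder-decoder from BERT checkpoints can be done with the Hugging Face transformers API roughly as below; the checkpoint name and token settings are illustrative defaults rather than the exact configuration used in this study.
+
+```python
+from transformers import AutoTokenizer, EncoderDecoderModel
+
+tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
+model = EncoderDecoderModel.from_encoder_decoder_pretrained(
+    "bert-base-uncased", "bert-base-uncased")  # BERT2BERT-style warm start
+
+# Minimal settings needed before seq2seq fine-tuning and generation.
+model.config.decoder_start_token_id = tokenizer.cls_token_id
+model.config.pad_token_id = tokenizer.pad_token_id
+# ... fine-tune on <HS, CN> pairs with a standard seq2seq training loop ...
+```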
+
+# 3.3 Decoding mechanisms
+
+We utilize 4 decoding mechanisms: a deterministic (Beam Search) and three stochastic (Top- $k$ , Top- $p$ , and a combination of the two).
+
+Beam Search (BS). The Beam Search algorithm is designed to pick the most-likely sequence (Li et al., 2016; Wiseman et al., 2017).
+
+Top- $\pmb{k}$ ( $\mathbf{Top}_k$ ). The sampling procedure proposed by Fan et al. (2018) selects a random word from the $k$ most probable ones, at each time step.
+
+Top- $p$ ( $\mathbf{Top}_p$ ). Also known as Nucleus Sampling, the parameter $p$ indicates the total probability for the pooled candidates, at each time step (Holtzman et al., 2020).
+
+Combining Top-$p$ and Top-$k$ ($\mathbf{Top}_{pk}$). At the decoding stage, it is possible to combine the parameters $p$ and $k$. This is Nucleus Sampling constrained to the Top-$k$ most probable words.
+
+In our experiments we used the following parameters as defaults: Beam Search with 5 beams and repetition penalty $= 2$; Top-$k$ with $k = 40$; Top-$p$ with $p = .92$; $\mathbf{Top}_{pk}$ with $k = 40$ and $p = .92$.
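+
+With Hugging Face transformers, these four configurations map directly onto arguments of generate(); the sketch below uses the stock gpt2 checkpoint and a dummy prompt purely for illustration, not the fine-tuned models of this study.
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained("gpt2")   # illustrative checkpoint
+model = AutoModelForCausalLM.from_pretrained("gpt2")
+inputs = tokenizer("Hate speech: ... Counter narrative:", return_tensors="pt")
+
+# Deterministic: Beam Search with 5 beams and repetition penalty 2.
+bs = model.generate(**inputs, num_beams=5, repetition_penalty=2.0, max_new_tokens=80)
+# Stochastic: Top-k, Top-p and their combination, with the defaults above.
+topk  = model.generate(**inputs, do_sample=True, top_k=40, max_new_tokens=80)
+topp  = model.generate(**inputs, do_sample=True, top_p=0.92, top_k=0, max_new_tokens=80)
+toppk = model.generate(**inputs, do_sample=True, top_k=40, top_p=0.92, max_new_tokens=80)
+
+print(tokenizer.decode(bs[0], skip_special_tokens=True))
+```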
+
+# 4 Evaluation metrics
+
+We use several metrics to evaluate various aspects of the CN generation.
+
+Overlap Metrics. These metrics depend on the $n$ -gram similarity of the generated outputs to a set of reference texts in order to assess the quality. We used our gold CNs as references and the CNs generated by the different models, as candidates. In particular, we employed three BLEU variants: BLEU-1 (B-1), BLEU-3 (B-3) and BLEU-4 (B-4) (Papineni et al., 2002), and ROUGE-L (ROU) (Lin, 2004).
+
+Diversity metrics. They are used to measure how diverse and novel the produced CNs are. In particular, we utilized Repetition Rate (RR) to measure the repetitiveness across generated CNs, in terms of the average ratios of non-singleton $n$ -grams present in the corpus (Bertoldi et al., 2013). It should be noted that RR is calculated as a corpus-based repetition score, i.e. inter-CN, instead of calculating intra-CN repetition of $n$ -grams only. We also used Novelty (NOV) (Wang and Wan, 2018), based on Jaccard similarity, to compute the amount of novel content that is present in the generated CNs as compared to the training data.
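+
+For instance, a token-level novelty score based on Jaccard similarity can be sketched as follows; this is one plausible reading of the metric (unigram sets, maximum similarity against the training CNs), not necessarily the exact formulation of Wang and Wan (2018).
+
+```python
+def jaccard(a, b):
+    """Jaccard similarity between the unigram sets of two strings."""
+    sa, sb = set(a.lower().split()), set(b.lower().split())
+    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0
+
+def novelty(generated_cn, training_cns):
+    """1 minus the maximum Jaccard similarity to any training counter narrative."""
+    return 1.0 - max(jaccard(generated_cn, cn) for cn in training_cns)
+```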
+
+Human evaluation metrics. Albeit more difficult to attain, human judgments provide a more reliable evaluation and a deeper understanding than automatic metrics (Belz and Reiter, 2006; Novikova et al., 2017). To this end, we specified the following dimensions for the evaluation of CNs. Suitableness (SUI): measures how suitable a CN is to the HS in terms of semantic relatedness and in terms of adherence to CN guidelines$^1$; Grammaticality (GRM): how grammatically correct a generated CN is; Specificity (SPE): how specific are the arguments brought by the CN in response to the HS; Choose-or-not (CHO): determines whether the annotators would select that CN to post-edit and use it in a real case scenario as in the set up presented by Chung et al. (2021b); Is-best (BEST): whether the CN is the absolute best among the ones generated for an HS (i.e. whether the annotators would pick up exactly that CN if they had to use it in a real case scenario).
+
+The first three dimensions are rated with a 5-points Likert scale and follow the evaluation procedure described by Chung et al. (2020), whereas both choose-or-not and is-best are binary ratings (0, 1). Choose-or-not allows for multiple CNs to be selected for the same HS, while only one CN can be selected for is-best for each HS.
+
+Toxicity.$^2$ It determines how "rude, disrespectful, or unreasonable" a text is. Toxicity has been employed both to detect the bias present in LMs (Gehman et al., 2020) and as a solution to mitigate such bias (Gehman et al., 2020; Xu et al., 2020).
+
+Syntactic metrics. A high syntactic complexity can be used as a proxy for an LM's ability to generate complex arguments. We used the syntactic dependency parser of spaCy$^3$ for the task, focusing on the following measures: Maximum Syntactic Depth (MSD): the maximum depth among the dependency trees calculated over each sentence composing a CN. Average Syntactic Depth (ASD): the average depth of the sentences in each CN. Number of Sentences (NST): the number of sentences composing a CN.
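+
+One way to operationalise these measures with spaCy is sketched below; the pipeline name is illustrative, and sentence depth is taken here as the maximum token depth within the sentence.
+
+```python
+import spacy
+
+nlp = spacy.load("en_core_web_sm")  # any parser-equipped pipeline works
+
+def token_depth(token):
+    """Number of head links between a token and the root of its sentence."""
+    depth = 0
+    while token.head is not token:
+        token = token.head
+        depth += 1
+    return depth
+
+def syntactic_metrics(cn):
+    doc = nlp(cn)
+    depths = [max((token_depth(t) for t in sent), default=0) for sent in doc.sents] or [0]
+    return {
+        "MSD": max(depths),                # Maximum Syntactic Depth
+        "ASD": sum(depths) / len(depths),  # Average Syntactic Depth
+        "NST": len(depths),                # Number of Sentences
+    }
+```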
+
+# 5 Experiments
+
+We performed two sets of experiments: first, we assessed how LMs perform in the task of generating CNs with different decoding mechanisms. Then, we selected the best model from the first round of experiments and tested its generalization capabilities when confronted with an unseen target of hate.
+
+# 5.1 LMs and decoding experiments
+
+For the first round of experiments, in order to avoid possible unfair assessments given by the open nature of the generative task (i.e. a highly suitable CN candidate could be scored low due to its difference from the single reference/gold CN), at test time we allowed the generation of several candidates for each HS+LM+decoding mechanism combination. We loosely drew inspiration from the Rank-$N$ Accuracy procedure and the 'generate, prune, select' procedure (Zhu and Bhat, 2021). In particular, given an LM and a decoding mechanism, we generated 5 CNs for each HS in the test set.
+
+Automated evaluation and selection We set up the automatic evaluation strategy as displayed in Figure 1. First, we scored each CN with the overlap metrics presented in Section 4, using the gold CN as a reference. Next, we ranked the candidate CNs with respect to the overlap scores and computed the mean of the rankings. Then, we selected the best ones according to the following criteria:
+
+BestLM selects, for each model, the single best CN for an HS among the 20 generated with the 4 decoding mechanisms.
+
+$\mathbf{Best}_{\mathbf{D}}$ selects, for each decoding mechanism, the single best CN for an HS among the 25 generated by the 5 models.
+
+$\mathbf{Best}_{\mathbf{LM} + \mathbf{D}}$ selects the single best CN among the 5 generated with each model-decoding combination.
+
+Moreover, we assessed the overall corpus-wise quality of the generated CNs with respect to the models, to the decoding mechanisms, and to the model-decoding combinations via the diversity metrics.
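+
+The rank-then-average selection described above can be sketched as follows; tie handling and the assumption that higher metric values are better are illustrative choices.
+
+```python
+import numpy as np
+
+def select_best(candidates, metric_scores):
+    """candidates: list of CN strings; metric_scores: dict mapping a metric name
+    to one score per candidate (higher = better). Returns the candidate with
+    the best (lowest) mean rank across metrics."""
+    per_metric_ranks = []
+    for values in metric_scores.values():
+        ranks = np.argsort(np.argsort(-np.asarray(values)))  # 0 = best
+        per_metric_ranks.append(ranks)
+    mean_rank = np.mean(per_metric_ranks, axis=0)
+    return candidates[int(np.argmin(mean_rank))]
+```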
+
+
+Figure 1: Given an HS, 5 CNs are generated for each model-decoding combination. $\odot$ indicates the best CN per model $(\in \mathrm{Best}_{\mathrm{LM}})$ . $\triangle$ indicates the best CN per decoding $(\in \mathrm{Best}_{\mathrm{D}})$ . $\square$ indicates the best CN per model-decoding combination $(\in \mathrm{Best}_{\mathrm{LM} + \mathrm{D}})$ .
+
+Human evaluation on a sample To perform the human evaluation we referred to the BestLM generations and sampled 200 instances from it. Each instance comprises an HS and 5 relevant CNs, each generated by a different model. We recruited 2 annotators who were trained extensively for the task following the procedure used by Fanton et al. (2021). The expert annotators were asked to evaluate the 5 CNs corresponding to the HS, according to the dimensions described in Section 4. We enriched the evaluation of this subset with the toxicity and the syntactic metrics.
+
+# 5.2 Results of the first set of experiments
+
+The results of the experiments on the LMs and the decoding mechanisms are reported in this section$^4$.
+
+Best Model The results of the comparison of the models on the BestLM generations can be found in Table 1. Regarding the overlap and diversity metrics, DialoGPT records the best or the second best score in all the metrics, apart from novelty where it still achieves a high score (0.643) close to the best performance (0.655). T5 also achieves high scores, especially on ROUGE, BLEU-1 and novelty.
+
+BART, instead, is the best model according to human evaluation metrics, apart from specificity. On the other hand, it shows poor performances in terms of diversity metrics, indicating that it tends to produce grammatical and suitable but very generic responses.
+
+BERT records the worst scores for all the overlap and diversity metrics apart from novelty. However, it also achieves the best syntactic metric results. Therefore, it is evident that BERT's output is more complex, but very repetitive. The combination of these aspects eventually affects the clarity of BERT's output such that it yields poor results in the human evaluation, in particular for grammaticality (4.2, while other models are above 4.6). This poor grammaticality can also explain the syntactic scores, since the spaCy dependency parser was not trained to handle ungrammatical text and this could actually inflate the ASD and MSD scores.
+
+GPT-2 overall yields very competitive results for several groups of metrics. It obtains the second-highest novelty score (0.653) and the best RR (7.736). It also achieves the second best results on BLEU-3, maximum syntactic depth and number of sentences, and the best results on toxicity and specificity (2.880) indicating the ability to produce complex, suitable, focused and diverse CNs.
+
+After the human evaluation we ran a qualitative interview with the annotators, whose feedback on the data strengthened the results we observed and the conclusions we drew. For instance, they reported the repetition of simple, yet catch-them-all, expressions (e.g. "they are our brothers and sisters") regardless of the target. Further inspections found that those CNs were mainly produced by BERT, which is in line with BERT's RR score.
+
+Best Decoding mechanism. The results calculated on the $\mathrm{Best_D}$ output are presented in Table 2. $\mathrm{Top}_k$ is the best-performing decoding mechanism, achieving the best results on the diversity metrics, BLEU-3 and BLEU-4. It is also the best performing for specificity, maximum syntactic depth and number of sentences, and the second best for average syntactic depth and toxicity.
+
+The other stochastic decoding mechanisms perform well too. $\mathrm{Top}_p$ yields competitive results on both diversity and overlap metrics; it is the second best for specificity, and achieves good results on the syntactic metrics. $\mathrm{Top}_{pk}$ has a good performance on the overlap metrics. It obtains the second-highest scores in most of the human evaluation metrics and the lowest in toxicity, and it reaches a reasonable specificity score.
+
+On the other hand, BS does not achieve particularly good results, except for the ROUGE score. Even if it is the best decoding with respect to the human evaluation, this comes at the cost of specificity and diversity. Through a post-hoc manual analysis we observed that it was due to the deterministic nature of BS, that tends to choose the most probable sequences, i.e. the "safest", thus resulting in vague and repetitive outputs.
+
+Best Model-Decoding combination Here we briefly discuss the results of the evaluation obtained on the $\mathrm{Best}_{\mathrm{LM} + \mathrm{D}}$ generations. In particular, the autoregressive models GPT-2 and DialoGPT behave similarly with similar decoding mechanisms, such that BS outputs the best results for almost all the overlap metrics, and the worst for the diversity metrics. On the contrary, for the other models, the results achieved with stochastic decoding mechanisms are the best for the overlap metrics. In almost all the cases, we observe that the stochastic decoding mechanisms perform better on syntactic and diversity metrics and on toxicity, while for the human evaluation metrics BS tends to be the best, except for specificity. A detailed discussion can be found in Appendix A.2.
+
+Discussion. In this set of experiments, we found that the autoregressive models perform the best according to a combination of several metrics that we deem particularly relevant (e.g. more novel, diverse, and informative outputs).
+
+| Model | ROU | B-1 | B-3 | B-4 | RR | NOV | TOX | ASD | MSD | NST | SUI | SPE | GRM | CHO | BEST |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BART | 0.268 | 0.277 | 0.085 | 0.051 | 20.722 | 0.560 | 0.420 | 4.311 | 4.965 | 1.740 | 3.790 | 2.552 | 4.937 | 0.840 | 0.272 |
| BERT | 0.237 | 0.277 | 0.073 | 0.037 | 24.747 | 0.605 | 0.406 | 5.008 | 6.160 | 2.280 | 3.135 | 2.647 | 4.247 | 0.717 | 0.122 |
| T5 | 0.274 | 0.302 | 0.083 | 0.042 | 8.548 | 0.655 | 0.359 | 4.692 | 5.325 | 1.715 | 2.872 | 2.402 | 4.680 | 0.642 | 0.090 |
| DialoGPT | 0.273 | 0.304 | 0.093 | 0.052 | 8.248 | 0.643 | 0.343 | 4.677 | 5.575 | 1.895 | 3.392 | 2.755 | 4.880 | 0.767 | 0.245 |
| GPT-2 | 0.264 | 0.297 | 0.088 | 0.050 | 7.736 | 0.653 | 0.342 | 4.584 | 5.595 | 2.240 | 3.555 | 2.880 | 4.867 | 0.795 | 0.270 |
+
+Table 1: The overlap and diversity metrics are calculated on the $\mathrm{Best}_{\mathrm{LM}}$ generations, while the toxicity, the syntactic metrics and the human evaluation are calculated on the corresponding subset. ROU and B-n denote the overlap metrics, RR and NOV the diversity metrics, TOX the toxicity, ASD/MSD/NST the syntactic metrics, and SUI/SPE/GRM/CHO/BEST the human evaluation metrics.
+
+| Decoding | ROU | B-1 | B-3 | B-4 | RR | NOV | TOX | ASD | MSD | NST | SUI | SPE | GRM | CHO | BEST | n |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BS | 0.287 | 0.299 | 0.096 | 0.059 | 21.579 | 0.561 | 0.398 | 4.415 | 5.048 | 1.684 | 3.936 | 2.497 | 4.925 | 0.826 | 0.222 | 18.7% |
| Toppk | 0.287 | 0.320 | 0.106 | 0.059 | 11.404 | 0.639 | 0.352 | 4.676 | 5.488 | 1.932 | 3.324 | 2.647 | 4.688 | 0.764 | 0.212 | 29.3% |
| Topk | 0.282 | 0.314 | 0.106 | 0.060 | 10.076 | 0.652 | 0.374 | 4.704 | 5.756 | 2.133 | 3.155 | 2.716 | 4.659 | 0.716 | 0.183 | 27.1% |
| Topp | 0.285 | 0.319 | 0.105 | 0.060 | 11.270 | 0.640 | 0.381 | 4.753 | 5.671 | 2.068 | 3.149 | 2.687 | 4.681 | 0.723 | 0.189 | 24.9% |
+
+Table 2: The results for the overlap and diversity metrics are calculated on the $\mathrm{Best}_{\mathrm{D}}$ generations: for each decoding mechanism, there are 2500 CNs. The remaining metrics are calculated on a subset of 1000 CNs, whose distribution is shown in the column "n". Column abbreviations are as in Table 1.
+
+Still, as far as the autoregressive models are concerned, it could be argued that the good performance of the GPT-2 model we fine-tuned is due to the fact that the generated CNs and the gold CNs derive from a similar distribution (GPT-2 was employed in the human-in-the-loop process used to create the reference dataset from Fanton et al. (2021)). While we recognize that this could partially explain the performance of our GPT-2 model, it does not explain the performance of DialoGPT, which is pre-trained on a completely different dataset. Therefore, we can reasonably conclude that autoregressive models are particularly suited for the task, regardless of the pre-training data.
+
+With respect to the decoding mechanisms, we record high repetitiveness and low novelty for the deterministic decoding BS. Even if it reaches high scores in most of the human evaluation metrics, it fails to produce specific CNs, ending up generating suitable yet generic responses. On the contrary, stochastic decoding mechanisms produce more novel and specific responses.
+
+Example CNs generated in this session of experiments, along with some qualitative analysis, can be found in Appendix A.3.
+
+# 5.3 Leave One Target Out experiments
+
+In the second stage, we built a set of cross-domain experiments to capture the generalization capabilities of the best LM determined in the previous experiments. Specifically, we concentrate on assessing how much a pre-trained language model fine-tuned on a pool of hate targets can generalize to an unseen target.
+
+Thus, for the out-of-target experiment we selected the LM that we deem the most prominent, in order to reduce the number of LM configurations to compare. In particular, since we want to examine the generalization capability of the LM, the generation of novel CNs with respect to the training data is given primary importance. Secondly, specificity is also crucial, since it signals the ability of the LM/decoding mechanism to generate accurate CNs and to avoid vague yet suitable, catch-all CNs. In contrast, repetitiveness is an undesirable feature of CNs, as it signals the tendency of a model to produce less flexible content. Given these considerations, we chose to employ GPT-2 with $\mathrm{Top}_k$ decoding for the Leave One Target Out (LOTO) experiments, since it is the configuration achieving the best trade-off among all the others.
+
+This set of experiments is structured in 3 steps, replicated for each of the selected targets. We selected the targets with the highest number of examples (MUSLIMS, MIGRANTS, WOMEN, LGBT+ and JEWS) to have a sufficiently sized test set for each configuration.
+
+First, we sampled from the Fanton et al. (2021) dataset 600 pairs for each LOTO target, in order to have a balanced setting. Additionally, POC and DISABLED were always kept in the training set, and we removed multi-target cases from OTHER. The resulting dataset consists of 3729 instances (further details are provided in Appendix A.4). Secondly, we fine-tuned 5 different configurations of the LM, where in each configuration one of the 5 LOTO targets is not present in the training data: LM-JEWS, LM-LGBT+, LM-MIGRANTS, LM-MUSLIMS and LM-WOMEN. Finally, we tested each LOTO model on the 600 HSs in the test set made of "left out" target examples. For instance, the model LM-JEWS is used for generating the CNs for the target JEWS, after being trained on data without any instances labeled JEWS. We generated 5 CNs for each HS and selected the best CN according to the procedure described in Section 5.1.
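+A minimal sketch of how such leave-one-target-out splits could be assembled is given below, assuming the dataset is a list of records with `hs`, `cn`, and `target` fields (the field names and the sampling routine are assumptions for illustration).
+
+```python
+# Hypothetical sketch of the LOTO split construction described above.
+import random
+
+LOTO_TARGETS = ["JEWS", "LGBT+", "MIGRANTS", "MUSLIMS", "WOMEN"]
+
+def build_loto_split(dataset, left_out, n_per_target=600, seed=0):
+    rng = random.Random(seed)
+    train, test = [], []
+    for target in LOTO_TARGETS:
+        examples = [ex for ex in dataset if ex["target"] == target]
+        sampled = rng.sample(examples, min(n_per_target, len(examples)))
+        if target == left_out:
+            test.extend(sampled)       # the unseen target forms the test set
+        else:
+            train.extend(sampled)
+    # POC and DISABLED are always kept in the training data.
+    train.extend(ex for ex in dataset if ex["target"] in ("POC", "DISABLED"))
+    return train, test
+
+# e.g. the LM-MUSLIMS configuration:
+# train, test = build_loto_split(dataset, left_out="MUSLIMS")
+```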
+
+# Results of LOTO experiments
+
+We analyse the CNs generated with the LOTO models through overlap and diversity metrics (Table 3). We refer to Appendix A.4 for the comparison between RR calculated on the candidate CNs and the reference CNs of the Fanton et al. (2021) dataset.
+
+For all the targets we record higher novelty scores compared to the previous experiments. These higher novelty values indicate that conditioning on new material (i.e. HSs for the unseen targets) induces GPT-2 to produce new arguments. On the other hand, as expected, the overlap scores for LOTO are remarkably lower than those from the previous experiments (Table 3). Therefore, we can infer that generalizing to an unseen target is harder than generalizing to an unseen HS.
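+As an illustration of the novelty metric, the sketch below assumes the Jaccard-based formulation of Wang and Wan (2018), i.e. one minus the maximum Jaccard similarity between a generated CN and any training CN; the exact metric used in our evaluation may differ in detail.
+
+```python
+# Sketch of a Jaccard-based novelty score (an assumption about the metric definition).
+def jaccard(a, b):
+    a, b = set(a.lower().split()), set(b.lower().split())
+    return len(a & b) / len(a | b) if a | b else 0.0
+
+def novelty(generated_cns, training_cns):
+    scores = [1.0 - max(jaccard(g, t) for t in training_cns) for g in generated_cns]
+    return sum(scores) / len(scores)
+
+print(novelty(["Migrants contribute to our economy."],
+              ["Migrants are our brothers and sisters.", "Islam is a religion of peace."]))
+```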
+
+| LOTO Target | ROU | B-1 | B-3 | B-4 | RR | NOV |
| --- | --- | --- | --- | --- | --- | --- |
| JEWS | 0.1609 | 0.1842 | 0.0134 | 0.0035 | 4.796 | 0.718 |
| LGBT+ | 0.1599 | 0.1828 | 0.0149 | 0.0055 | 4.620 | 0.718 |
| MIGRANTS | 0.1659 | 0.1915 | 0.0163 | 0.0038 | 4.707 | 0.720 |
| MUSLIMS | 0.1743 | 0.1934 | 0.0197 | 0.0059 | 5.314 | 0.712 |
| WOMEN | 0.1755 | 0.1988 | 0.0195 | 0.0068 | 4.632 | 0.729 |
+
+Table 3: The overlap and diversity metrics scores for the various LOTO configurations.
+
+We also found that the CNs generated in the LM-MUSLIMS and LM-WOMEN configurations obtain the highest overlap scores (Table 3). We hypothesize that these high scores can be explained by the presence of a target in the LOTO training data that is highly similar to the left-out one. To this end, we computed the novelty between each target subset of the training data and the LOTO test data for that configuration (see Appendix A.4 for details). The reference CNs for LM-MUSLIMS record the lowest novelty scores with respect to the JEWS subset of the training set (i.e. 0.761).
+
+Figure 2: The correlation between the novelty of the reference CNs and overlap metrics: in each plot, the dots and the darker line correspond to the most influential target; the triangles and the lighter line correspond to the results calculated without it.
+
+
+
+Thus, the JEWS subset can be interpreted as the most influential portion of training data for the target MUSLIMS. On the other hand, for LM-WOMEN the highest influence is recorded with the LGBT+ subset of the training data (i.e. 0.763). These results can be explained by the semantic similarity of the target MUSLIMS to JEWS, both being religious groups, and of WOMEN to LGBT+, both being related to gender issues.
+
+As a complementary analysis, we consider two different computations of the reference CN novelty: with respect to the most influential target for each LOTO configuration, and with respect to the LOTO training data without the most influential target. We computed the Pearson correlation between the overlap metrics and each of the two novelty computations. In Figure 2, we observe that removing the influential target from the training data strongly weakens the (negative) correlation with the overlap metrics (from an average of -0.889 to -0.416). Consequently, we can conclude that, to obtain high overlap results in the LOTO experiments, the training data must contain a target strongly connected to the left-out one. Most importantly, this connection is not arbitrary, but is based on an a-priori semantic similarity of the targets, as exemplified before.
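+The correlation computation itself is straightforward; a sketch with SciPy is shown below (the array values are placeholders, not the numbers reported in this paper).
+
+```python
+# Sketch: Pearson correlation between reference-CN novelty and an overlap metric
+# across the five LOTO targets. Values are placeholders, not results from the paper.
+from scipy.stats import pearsonr
+
+reference_novelty = [0.761, 0.763, 0.764, 0.765, 0.770]   # placeholder values
+rouge_scores      = [0.174, 0.176, 0.166, 0.160, 0.161]   # placeholder values
+
+r, p_value = pearsonr(reference_novelty, rouge_scores)
+print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
+```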
+
+Finally, we also generated with the BS decoding mechanism, to use it as a baseline and compare it to the stochastic decoding mechanism ($\mathrm{Top}_k$). In particular, we computed the Pearson correlation between the novelty of the reference CNs and the novelty of the candidate CNs with respect to the corresponding training data (Figure 3). We can observe that for the BS generations the novelty of the candidate CNs is lower than for $\mathrm{Top}_k$ (0.67-0.74 vs. 0.75-0.77) and the correlation with the novelty of the reference is weaker (0.53 vs. 0.59). This confirms the lower generalization ability of the deterministic decoding mechanism (as compared to the stochastic one), which tends to produce generic and repetitive responses regardless of the semantic distances of the LOTO targets from the training data.
+
+
+Figure 3: Reference and candidate CNs novelty, for Top- $k$ and BS LOTO generations.
+
+
+
+# 6 Automatic Post-Editing
+
+In the previous experiments we fine-tuned our models relying on $\langle HS, CN\rangle$ pairs alone. Still, the Fanton et al. (2021) dataset contains additional information that can be useful for our task, i.e. the original GPT-2 generations before undergoing human post-editing.
+
+Thus, as a final experiment, we propose to further improve CN generation by moving from an end-to-end framework to a two-stage pipeline, decoupling CN generation from its 'final refinement'. In particular, we propose the adoption of an Automatic Post-Editing (APE) stage in order to capture and utilize the nuances between the machine-generated CNs and their human post-edited versions. APE, which is used for automatically correcting errors made by machine translation (MT) systems before the actual human post-editing, has been an important tool for MT (Knight and Chander, 1994; do Carmo et al., 2021). Considering its effectiveness in MT, we hypothesize that building a pipeline with CN generation and APE could alleviate the need for final manual post-editing (Allen and Hogan, 2000; Chatterjee et al., 2019) and achieve better constructed CNs.
+
+To this end, we fine-tuned another instance of the GPT-2 medium model specifically for the post-editing task, using the $\langle HS, CN_{or}, CN_{pe}\rangle$ triplets $^5$ , where $CN_{or}$ and $CN_{pe}$ denote the CNs originally generated by an LM and their human post-edited versions, respectively. The triplets were then filtered by removing those for which $CN_{or} = CN_{pe}$ . More details about the experiment settings can be found in Appendix A.5.
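+A plausible way to serialize such triplets for a GPT-2-based APE model is sketched below; the separator strings and field names are assumptions for illustration, not the exact input format used here.
+
+```python
+# Hypothetical formatting and filtering of APE training examples for a causal LM.
+def make_ape_examples(triplets):
+    """triplets: iterable of dicts with keys 'hs', 'cn_or', 'cn_pe' (assumed field names)."""
+    examples = []
+    for t in triplets:
+        # Skip triplets where post-editing changed nothing, as described above.
+        if t["cn_or"].strip() == t["cn_pe"].strip():
+            continue
+        examples.append(
+            f"Hate speech: {t['hs']}\n"
+            f"Original counter narrative: {t['cn_or']}\n"
+            f"Post-edited counter narrative: {t['cn_pe']}"
+        )
+    return examples
+```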
+
+| Data | $CN_{ape}$ | $CN_{or}$ | N/A |
| --- | --- | --- | --- |
| Fanton et al. (2021) | 26 | 14 | 60 |
| GPT-2 Topk | 37 | 19 | 44 |
+
+Table 4: The human annotation results for the APE experiments in terms of average preference percentages.
+
+We conducted two human evaluations over the subsets of: i) the $CN_{or}$ of the Fanton et al. (2021) test samples, and ii) the CN outputs of the best model-decoding combination resulting from the first set of experiments, that yielded the top 50 Translation Error Rate (TER) (Snover et al., 2006) scores with respect to the $CN_{or}$ . The two expert annotators were asked to state their preference between the 2 randomly sorted CNs, $CN_{or}$ and $CN_{ape}$ (the automatically post-edited output), for a given HS. The annotators were also allowed to decide on a tie. The results, shown in Table 4, indicate that, although there are often ties and only a subset of the $CN_{or}$ is actually modified, when there is a preference it is predominantly in favour of the automatically post-edited versions over the GPT-2 generated CNs (26% vs. 14% for the test set, and 37% vs. 19% for the GPT-2 Topk generations, on average). In light of these results, we believe that APE is a highly promising direction to increase the efficacy of CN generation models, where generation quality and diversity are crucial, especially considering that obtaining or enlarging expert datasets to train better models is not simple.
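+Selecting the evaluation subsets by TER could look as follows; sacrebleu's TER implementation is used here as a stand-in, since the toolkit is not specified above.
+
+```python
+# Sketch: pick the pairs with the highest TER between the automatically
+# post-edited CN and the original CN as candidates for human evaluation.
+# sacrebleu is an assumed stand-in for the TER implementation.
+from sacrebleu.metrics import TER
+
+ter = TER()
+
+def top_k_by_ter(pairs, k=50):
+    """pairs: list of (cn_ape, cn_or) string tuples."""
+    scored = [(ter.sentence_score(ape, [orig]).score, ape, orig) for ape, orig in pairs]
+    scored.sort(key=lambda item: item[0], reverse=True)
+    return scored[:k]
+```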
+
+# 7 Conclusion
+
+In this work, we focus on automatic CN generation as a downstream task. First, we present a comparative study to determine the performances and peculiarities of several pre-trained LMs and decoding mechanisms. We observe that the best results overall (in terms of novelty and specificity) are achieved by the autoregressive models with stochastic decoding: GPT-2 with the $\mathrm{Top}_k$ decoding mechanism, and DialoGPT with the combination $\mathrm{Top}_{pk}$ . At the same time, deterministic decoding can be used when more generic yet 'safer' CNs are preferred.
+
+Then, we investigate the performance of LMs in zero-shot generation for unseen targets of hate. To this end, we fine-tuned 5 different versions of GPT-2, leaving out the examples pertaining to one target at each turn. We find that for each configuration there is a subset of the training data which is more influential with respect to the generated data (i.e. a target that shares some commonalities with the test target and that can be identified a-priori). Finally, we introduce an experiment in which we train an automatic post-editing module to further improve the quality of the generated CNs. The notable human evaluation results pave the way for a promising future direction that decouples CN generation from its 'final refinement'.
+
+# Ethical Considerations
+
+Although tackling online hatred through CNs inherently protects freedom of speech and has been proposed as a better alternative to detect-remove-ban approaches, the automation of CN generation can still raise some ethical concerns, and some measures must be taken to avoid undesired effects during research. Thus, we address the relevant ethical considerations and our remedies as follows:
+
+Annotation Guidelines. The well-being of the annotators was our top priority during the whole study. Therefore, we strictly followed the guidelines created for CN studies (Fanton et al., 2021), which were adapted from Vidgen et al. (2019). The human evaluations have been conducted with the help of two expert annotators in CNs. These experts were already trained for the CN generation task and employed for the work presented by Fanton et al. (2021). We further instructed them in the aims of each experiment, clearly explained the evaluation tasks, and then exemplified the proper evaluation of $\langle HS, CN\rangle$ pairs using various types of CNs. Most importantly, we limited the exposure to hateful content by imposing a daily annotation time limit. Concerning the demographics, due to the harmful content that can be found in the data, all annotators were adult volunteers, perfectly aware of the objective of the study.
+
+Dataset. We purposefully chose an expert-based dataset in order to avoid the risk of modeling the language of real individuals, so as to (i) prevent any privacy issue and (ii) avoid modeling inappropriate CNs (e.g. containing abusive language) that could be produced by non-experts. The dataset also focuses on CN diversity while keeping the HSs as stereotypical as possible, so that our CN generation models have very limited diversity on the hateful language, nearly precluding misuse.
+
+Computational Task. CN generation models are not meant to be used in an autonomous way, since even the best models can still produce substandard CNs containing inappropriate or negative language. Instead, following a human-computer cooperation paradigm, our focus is on building models that can be helpful to NGO operators by providing them with diverse and novel CN candidates for their hate countering activities, and that can speed up manual CN writing to a certain extent. This approach also gives ground to some of the measures we used during evaluation (namely choose-or-not and is-best).
+
+Model Distribution. In addition to the limited and simplified hateful content in the dataset we selected, we further reduce the risk of misuse by choosing a specific distribution strategy: i) we only make available the non-autoregressive models, in order to eliminate the risk of using over-generation for hate speech creation; ii) we distribute such models only for research purposes and through a request-based procedure, in order to keep track of the possible users.
+
+# References
+
+Jeffrey Allen and Christopher Hogan. 2000. Toward the development of a post editing module for raw machine translation output: A controlled language perspective. In Third International Controlled Language Applications Workshop (CLAW-00), pages 62-71.
+Kimberley R Allison and Kay Bussey. 2016. Cyberbystanding in context: A review of the literature on witnesses' responses to cyberbullying. Children and Youth Services Review, 65:183-194.
+Jenn Anderson, Mary Bresnahan, and Catherine Musatics. 2014. Combating weight-based cyberbullying on facebook with the dissenter effect. Cyberpsychology, Behavior, and Social Networking, 17(5):281-286.
+Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 313-320.
+Susan Benesch. 2014. Countering dangerous speech: New ideas for genocide prevention. Washington, DC: United States Holocaust Memorial Museum.
+Nicola Bertoldi, Mauro Cettolo, and Marcello Federico. 2013. Cache-based online adaptation for machine translation enhanced computer assisted translation. In MT-Summit, pages 35-42.
+Reuben Binns, Michael Veale, Max Van Kleek, and Nigel Shadbolt. 2017. Like trainer, like bot? inheritance of bias in algorithmic content moderation. In Social Informatics, pages 405-415, Cham. Springer International Publishing.
+Pete Burnap and Matthew L Williams. 2016. Us and them: identifying cyber hate on twitter across multiple protected characteristics. *EPJ Data Science*, 5(1):11.
+Rui Cao, Roy Ka-Wei Lee, and Tuan-Anh Hoang. 2020. DeepHate: Hate speech detection via multi-faceted text representations. In 12th ACM Conference on Web Science, pages 11-20.
+Félix do Carmo, Dimitar Shterionov, Joss Moorkens, Joachim Wagner, Murhaf Hossari, Eric Paquin, Dag Schmidtke, Declan Groves, and Andy Way. 2021. A review of the state-of-the-art in automatic post-editing. Machine Translation, 35(2):101-143.
+Rajen Chatterjee, Christian Federmann, Matteo Negri, and Marco Turchi. 2019. Findings of the wmt 2019 shared task on automatic post-editing. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 11-28.
+Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem Tekiroglu, and Marco Guerini. 2019. CONAN - COunter NArratives through nichesourcing: a multilingual dataset of responses to fight online hate speech. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2819-2829, Florence, Italy. Association for Computational Linguistics.
+Yi-Ling Chung, Serra Sinem Tekiroglu, and Marco Guerini. 2020. Italian counter narrative generation to fight online hate speech. In Proceedings of the Seventh Italian Conference on Computational Linguistics CLiC-it.
+Yi-Ling Chung, Serra Sinem Tekiroglu, and Marco Guerini. 2021a. Towards knowledge-grounded counter narrative generation for hate speech. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 899-914, Online. Association for Computational Linguistics.
+Yi-Ling Chung, Serra Sinem Tekiroglu, Sara Tonelli, and Marco Guerini. 2021b. Empowering ngos in countering online hate messages. Online Social Networks and Media, 24:100150.
+
+Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25-35.
+Victor De Boer, Michiel Hildebrand, Lora Aroyo, Pieter De Leenheer, Chris Dijkshoorn, Binyam Tesfa, and Guus Schreiber. 2012. Nichesourcing: harnessing the power of crowds of experts. In International Conference on Knowledge Engineering and Knowledge Management, pages 16-20. Springer.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
+Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4537-4546.
+Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.
+Margherita Fanton, Helena Bonaldi, Serra Sinem Tekiroglu, and Marco Guerini. 2021. Human-in-the-loop for data collection: a multi-target counter narrative dataset to fight online hate speech. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, Online. Association for Computational Linguistics.
+Paula Fortuna and Sérgio Nunes. 2018. A survey on automatic detection of hate speech in text. ACM Computing Surveys, 51(4):85.
+Iginio Gagliardone, Danit Gal, Thiago Alves, and Gabriela Martinez. 2015. Countering online hate speech. Unesco Publishing.
+Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 3356-3369.
+Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
+
+Homa Hosseinmardi, Sabrina Arredondo Mattson, Rahat Ibn Rafiq, Richard Han, Qin Lv, and Shivakant Mishra. 2015. Detection of cyberbullying incidents on the instagram social network. arXiv preprint arXiv:1503.03909.
+Svetlana Kiritchenko, Isar Nejadgholi, and Kathleen C Fraser. 2021. Confronting abusive language online: A survey from the ethical and human rights perspective. Journal of Artificial Intelligence Research, 71:431-478.
+Kevin Knight and Ishwar Chander. 1994. Automated postediting of documents. In AAAI, volume 94, pages 779-784.
+Ritesh Kumar, Atul Kr Ojha, Shervin Malmasi, and Marcos Zampieri. 2018. Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 1-11.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
+Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Austin, Texas. Association for Computational Linguistics.
+Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
+Binny Mathew, Navish Kumar, Pawan Goyal, Animesh Mukherjee, et al. 2018. Analyzing the hate and counter speech accounts on twitter. arXiv preprint arXiv:1812.02712.
+Binny Mathew, Punyajoy Saha, Hardik Tharad, Subham Rajgaria, Prajwal Singhania, Suman Kalyan Maity, Pawan Goyal, and Animesh Mukherjee. 2019. Thou shalt not hate: Countering online hate speech. In Proceedings of the International AAAI Conference on Web and Social Media, volume 13, pages 369-380.
+Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. Hatexplain: A benchmark dataset for explainable hate speech detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14867-14875.
+Kevin Munger. 2017. Tweetment effects on the tweeted: Experimentally reducing racist harassment. Political Behavior, 39(3):629-649.
+
+Sarah Myers West. 2018. Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11):4366-4383.
+Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. In 2017 Conference on Empirical Methods in Natural Language Processing, pages 2231-2242. Association for Computational Linguistics.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics.
+Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, and Viviana Patti. 2020. Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources and Evaluation, pages 1-47.
+Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Belding, and William Yang Wang. 2019. A benchmark dataset for learning to intervene in online hate speech. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4757-4766, Hong Kong, China. Association for Computational Linguistics.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. Transactions of the Association for Computational Linguistics, 8:264-280.
+Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 1668-1678.
+Carla Schieb and Mike Preuss. 2016. Governing hate speech by means of counterspeech on facebook. In 66th ICA Annual Conference, at Fukuoka, Japan, pages 1-23.
+Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1-10.
+Tanya Silverman, Christopher J Stewart, Jonathan Birdwell, and Zahed Amanullah. 2016. The impact of counter-narratives. Institute for Strategic Dialogue, London. https://www.strategicdialogue.org/wp-content/uploads/2016/08/Impact-of-Counter-Narratives_ONLINE.pdf-73.
+Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine translation in the Americas, volume 200, 6. Cambridge, MA.
+Serra Sinem Tekiroğlu, Yi-Ling Chung, and Marco Guerini. 2020. Generating counter narratives against online hate speech: Data and strategies. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1177-1190, Online. Association for Computational Linguistics.
+Mengzhou Xia, Anjalie Field, and Yulia Tsvetkov. 2020. Demoting racial bias in hate speech detection. SocialNLP 2020, page 7.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
+Bertie Vidgen and Leon Derczynski. 2020. Directions in abusive language training data, a systematic review: Garbage in, garbage out. Plos one, 15(12):e0243300.
+Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detection. In Proceedings of the third workshop on abusive language online, pages 80-93.
+Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2020. Learning from the worst: Dynamically generated datasets to improve online hate detection. arXiv preprint arXiv:2012.15761.
+Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, and Jordan Boyd-Graber. 2019. Trick me if you can: Human-in-the-loop generation of adversarial question answering examples. Transactions of the Association for Computational Linguistics, 7(0):387-401.
+Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 30-36, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Ke Wang and Xiaojun Wan. 2018. Sentigan: Generating sentimental texts via mixture adversarial networks. In *IJCAI*, pages 4446-4452.
+Zeerak Waseem. 2016. Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter. In Proceedings of the first workshop on NLP and computational social science, pages 138-142.
+Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253-2263, Copenhagen, Denmark. Association for Computational Linguistics.
+Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2020. Recipes for safety in open-domain chatbots. arXiv e-prints, pages arXiv-2010.
+Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278, Online. Association for Computational Linguistics.
+Wanzheng Zhu and Suma Bhat. 2021. Generate, prune, select: A pipeline for counterspeech generation against online hate speech. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 134-149.
+
+# A Appendix
+
+# A.1 Fine-tuning details
+
+Table 5 summarizes the details of the training of each model employed in the first session of experiments.
+
+| Model | BA | EP | PAR | LR | PER | TL | EL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BART (base) | 4 | 4 | 139 M | 2E-05 | 24.659 | 2.358 | 2.417 |
| BERT Seq2Seq (base) | 4 | 3 | 247 M | 3E-05 | 11.209 | 2.845 | 3.205 |
| T5 (base) | 2 | 3 | 223 M | 5E-05 | 10.9248 | 2.412 | 3.205 |
| DialoGPT (medium) | 4 | 2 | 355 M | 5E-05 | 6.085 | 1.425 | 1.806 |
| GPT-2 (medium) | 2 | 2 | 355 M | 5E-05 | 8.929 | 1.320 | 2.189 |
+
+Table 5: The training details for all the models employed in the first collection of experiments: the batch size (BA), number of training epochs (EP), number of parameters (PAR), learning rate (LR), perplexity (PER), and training and evaluation loss (TL and EL).
+
+Since the LM sizes are very different for each model, and since our main focus is not studying performance as a function of LM size, as a rule of thumb we chose one version smaller than the large version of each model, provided that they all have the same order of magnitude. This corresponds to the medium versions for both DialoGPT and GPT-2, and to the base versions for the other models. GPT-2 and DialoGPT achieve the lowest perplexity, training and evaluation loss, thus indicating a slightly more successful fine-tuning, which is reflected in the evaluations throughout the study.
+
+We conducted a hyper-parameter search during the training phase of each model using the following search space: learning rate $\{1e{-}5, 2e{-}5, 3e{-}5, 4e{-}5, 5e{-}5\}$ , warm-up ratio $\{0, 0.1\}$ , batch size $\{2, 4\}$ , and epochs $\{2, 3, 4, 5\}$ . The search was conducted using Optuna, with 10 trials, minimizing the evaluation loss during training.
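+A sketch of this search with Optuna is given below; `fine_tune_and_evaluate` is a hypothetical stand-in for the actual fine-tuning and evaluation loop.
+
+```python
+# Sketch of the hyper-parameter search described above, using Optuna.
+import optuna
+
+def fine_tune_and_evaluate(config):
+    # Hypothetical placeholder: fine-tune the LM with `config` and return the evaluation loss.
+    return config["learning_rate"] * config["epochs"]  # dummy value for demonstration
+
+def objective(trial):
+    config = {
+        "learning_rate": trial.suggest_categorical("learning_rate", [1e-5, 2e-5, 3e-5, 4e-5, 5e-5]),
+        "warmup_ratio": trial.suggest_categorical("warmup_ratio", [0.0, 0.1]),
+        "batch_size": trial.suggest_categorical("batch_size", [2, 4]),
+        "epochs": trial.suggest_categorical("epochs", [2, 3, 4, 5]),
+    }
+    return fine_tune_and_evaluate(config)
+
+study = optuna.create_study(direction="minimize")
+study.optimize(objective, n_trials=10)
+print(study.best_params)
+```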
+
+# A.2 Best models-decoding combination
+
+Here we discuss the results for the overlap and diversity metrics obtained on the $\mathrm{Best}_{\mathrm{LM} + \mathrm{D}}$ generations (Table 6), and those calculated on the human evaluation subset (Tables 7 and 8).
+
+BART. BART performs well with the stochastic decoding methods, in particular: $\mathrm{Top}_p$ for overlap, diversity, syntactic metrics, and grammaticality; $\mathrm{Top}_k$ for overlap metrics and toxicity, whereas $\mathrm{Top}_{pk}$ is the best decoding approach on human evaluation and RR, and the second best on ROUGE and BLEU-1. On the contrary, BART does not achieve good results with deterministic approaches (i.e. BS).
+
+| Model + Decoding | ROU | B-1 | B-3 | B-4 | RR | NOV |
| --- | --- | --- | --- | --- | --- | --- |
| BART BS | 0.2108 | 0.2129 | 0.0486 | 0.0283 | 21.1102 | 0.5692 |
| BART Toppk | 0.2331 | 0.2300 | 0.0605 | 0.0365 | 20.2645 | 0.5567 |
| BART Topk | 0.2349 | 0.2333 | 0.0652 | 0.0385 | 20.6587 | 0.5575 |
| BART Topp | 0.2329 | 0.2300 | 0.0621 | 0.0374 | 20.5476 | 0.5586 |
| BERT BS | 0.1735 | 0.2108 | 0.0249 | 0.0113 | 38.0349 | 0.5864 |
| BERT Toppk | 0.2034 | 0.2311 | 0.0484 | 0.0231 | 23.4417 | 0.6098 |
| BERT Topk | 0.2032 | 0.2320 | 0.0483 | 0.0229 | 22.2546 | 0.6129 |
| BERT Topp | 0.2044 | 0.2366 | 0.0500 | 0.0244 | 23.6447 | 0.6098 |
| T5 BS | 0.2144 | 0.2007 | 0.0409 | 0.0207 | 21.5518 | 0.5827 |
| T5 Toppk | 0.2236 | 0.2454 | 0.0466 | 0.0228 | 7.2996 | 0.6715 |
| T5 Topk | 0.2076 | 0.2384 | 0.0376 | 0.0136 | 5.3002 | 0.6922 |
| T5 Topp | 0.2159 | 0.2390 | 0.0430 | 0.0184 | 6.8353 | 0.6743 |
| DialoGPT BS | 0.2192 | 0.2272 | 0.0528 | 0.0312 | 21.6800 | 0.5280 |
| DialoGPT Toppk | 0.2132 | 0.2444 | 0.0437 | 0.0201 | 6.4158 | 0.6737 |
| DialoGPT Topk | 0.2023 | 0.2302 | 0.0320 | 0.0134 | 4.7278 | 0.6956 |
| DialoGPT Topp | 0.2093 | 0.2397 | 0.0385 | 0.0159 | 6.1472 | 0.6740 |
| GPT-2 BS | 0.2195 | 0.2132 | 0.0516 | 0.0313 | 23.0605 | 0.5402 |
| GPT-2 Toppk | 0.2055 | 0.2342 | 0.0384 | 0.0173 | 6.5899 | 0.6832 |
| GPT-2 Topk | 0.1956 | 0.2271 | 0.0345 | 0.0153 | 4.7624 | 0.7022 |
| GPT-2 Topp | 0.2014 | 0.2329 | 0.0388 | 0.0177 | 6.1944 | 0.6846 |
+
+Table 6: The results computed on the $\mathrm{Best}_{\mathrm{LM} + \mathrm{D}}$ generations (2500 CNs for each model-decoding mechanism combination).
+
+BERT. With BS, BERT achieves the best or second best result on all human evaluation metrics, except for specificity. For BERT the best decoding is $\mathrm{Top}_p$ : it is the best performing on the overlap metrics and the second best for novelty. It also achieves good results on the syntactic metrics and on the human evaluation.
+
+T5. $\mathrm{Top}_{pk}$ is the best decoding mechanism. It records the best results for the overlap metrics and toxicity, and good results on the syntactic and human evaluation metrics. As for $\mathrm{Top}_k$ , it is the best for diversity, while $\mathrm{Top}_p$ is good on the syntactic metrics. BS achieves good results on the human evaluation, except for specificity and is-best.
+
+GPT-2. With $\mathrm{Top}_{pk}$ , GPT-2 performs well on ROUGE, BLEU-1, suitability, grammaticality, and choose-or-not. With $\mathrm{Top}_p$ , GPT-2 records the second best result on the BLEU scores and the diversity metrics. With BS the model has the best performance on the overlap metrics (except BLEU-1), and on suitability, grammaticality, and choose-or-not, but it also has the worst results on the diversity metrics. Above all, $\mathrm{Top}_k$ is the decoding achieving the best compromise, reaching the best results for the diversity metrics, with a superior specificity score (3.15) that is corroborated by the good performance on the other human evaluation metrics.
+
+DialoGPT. $\mathrm{Top}_k$ performs best on the diversity metrics and specificity; it records the second highest score on grammaticality.
+
+| Model + Decoding | TOX | ASD | MSD | NST | n |
| --- | --- | --- | --- | --- | --- |
| BART BS | 0.4870 | 3.8919 | 4.6757 | 1.8919 | 37 |
| BART Toppk | 0.3911 | 4.3592 | 4.9483 | 1.6207 | 58 |
| BART Topk | 0.4021 | 4.3798 | 5.0656 | 1.7377 | 61 |
| BART Topp | 0.4263 | 4.5038 | 5.0909 | 1.7727 | 44 |
| BERT BS | 0.3954 | 4.5556 | 5.3750 | 1.9167 | 24 |
| BERT Toppk | 0.4026 | 5.2299 | 6.2069 | 2.1379 | 58 |
| BERT Topk | 0.4157 | 4.8969 | 6.2969 | 2.5625 | 64 |
| BERT Topp | 0.4032 | 5.1019 | 6.2963 | 2.2593 | 54 |
| T5 BS | 0.4127 | 4.4844 | 4.6562 | 1.3438 | 32 |
| T5 Toppk | 0.3211 | 4.7754 | 5.3768 | 1.7826 | 69 |
| T5 Topk | 0.3441 | 4.6767 | 5.4200 | 1.7400 | 50 |
| T5 Topp | 0.3934 | 4.7245 | 5.5918 | 1.8367 | 49 |
| DialoGPT BS | 0.3635 | 4.2340 | 5.1277 | 1.8723 | 47 |
| DialoGPT Toppk | 0.3361 | 4.7264 | 5.5094 | 1.7547 | 53 |
| DialoGPT Topk | 0.3482 | 4.9333 | 6.1778 | 2.0000 | 45 |
| DialoGPT Topp | 0.3274 | 4.7970 | 5.5273 | 1.9636 | 55 |
| GPT-2 BS | 0.3540 | 4.8901 | 5.3617 | 1.4468 | 47 |
| GPT-2 Toppk | 0.3119 | 4.2530 | 5.4182 | 2.4000 | 55 |
| GPT-2 Topk | 0.3416 | 4.6771 | 5.8627 | 2.5686 | 51 |
| GPT-2 Topp | 0.3659 | 4.5663 | 5.7447 | 2.4894 | 47 |
+
+Table 7: The results of the toxicity and the syntactic metrics calculated on the subset employed for the human evaluation, grouped by each combination of model and decoding mechanism. The size of each group is shown in the column "n".
+
+$\mathrm{Top}_p$ has the second best result on the diversity metrics and the BLEU scores. BS is the best on the overlap metrics (except BLEU-1) and on almost all human evaluation metrics, but the worst on specificity and on the diversity metrics. $\mathrm{Top}_{pk}$ is the decoding working best with DialoGPT, since it reaches very good scores on the human evaluation and overlap metrics, and this does not come at the expense of diversity, for which it ranks 3rd out of 4.
+
+# A.3 Examples of the generations
+
+Hate speech: Islam is a violent religion that promotes terror and war.
+
+DialoGPT - BS: The vast majority of Muslims in the world are peaceful and friendly people. I don't see how you can say such a thing about them.
+
+BART - Topk: The vast majority of Muslims are peaceful people who have lived in peace and harmony for many years. How can you say this about a religion that promotes terror and war?
+
+BERT - Topk: islam is a religion of peace, and it is not a peaceful religion of violence. It is about peace and understanding of the world's problems, and peace is about bringing peace and peace.
+
+| Model + Decoding | SUI | SPE | GRM | CHO | BEST | n |
| --- | --- | --- | --- | --- | --- | --- |
| BART BS | 3.7568 | 2.5270 | 4.9459 | 0.8108 | 0.2297 | 37 |
| BART Toppk | 3.7931 | 2.6121 | 4.9483 | 0.8534 | 0.3707 | 58 |
| BART Topk | 3.9672 | 2.5410 | 4.9016 | 0.8607 | 0.2951 | 61 |
| BART Topp | 3.5682 | 2.5114 | 4.9659 | 0.8182 | 0.1477 | 44 |
| BERT BS | 3.5208 | 2.5208 | 4.7917 | 0.7708 | 0.1250 | 24 |
| BERT Toppk | 3.1810 | 2.5776 | 4.2328 | 0.7155 | 0.1121 | 58 |
| BERT Topk | 3.0312 | 2.7031 | 4.1562 | 0.6797 | 0.1016 | 64 |
| BERT Topp | 3.0370 | 2.7130 | 4.1296 | 0.7407 | 0.1574 | 54 |
| T5 BS | 3.5781 | 2.2812 | 4.8438 | 0.7656 | 0.0781 | 32 |
| T5 Toppk | 2.8841 | 2.4928 | 4.5870 | 0.6667 | 0.1014 | 69 |
| T5 Topk | 2.4600 | 2.3200 | 4.6400 | 0.5600 | 0.0500 | 50 |
| T5 Topp | 2.8163 | 2.4388 | 4.7449 | 0.6122 | 0.1224 | 49 |
| DialoGPT BS | 4.1596 | 2.6064 | 4.9894 | 0.8511 | 0.3085 | 47 |
| DialoGPT Toppk | 3.3679 | 2.8019 | 4.8396 | 0.7830 | 0.2736 | 53 |
| DialoGPT Topk | 3.1333 | 2.9222 | 4.8556 | 0.7333 | 0.2111 | 45 |
| DialoGPT Topp | 2.9727 | 2.7000 | 4.8455 | 0.7091 | 0.1909 | 55 |
| GPT-2 BS | 4.3085 | 2.5000 | 4.9681 | 0.8830 | 0.2766 | 47 |
| GPT-2 Toppk | 3.4909 | 2.8000 | 4.8727 | 0.8273 | 0.2273 | 55 |
| GPT-2 Topk | 3.0392 | 3.1471 | 4.8431 | 0.7255 | 0.2549 | 51 |
| GPT-2 Topp | 3.4362 | 3.0638 | 4.7872 | 0.7447 | 0.3298 | 47 |
+
+Table 8: For each model-decoding mechanism combination, these are the results of the metrics for the human evaluation. The size of each combination is shown in the column "n".
+
+T5 - Topp: The vast majority of Muslims in the UK are peaceful people who share many values and respect.
+
+GPT-2 - Topp: This is not true, the vast majority of Muslims is peaceful and peaceful Muslims are happy to live in any country they wish. They do not need to be a part of a hostile state and they should be treated as equals in society.
+
+In these examples we can observe that the CN obtained with GPT-2, albeit not perfect, is the most complete and accurate. Those generated with T5 and DialoGPT are suitable as well. On the other hand, the CNs obtained with BART and BERT present some problems: the former contains a clear contradiction, while the latter illustrates some of the problems detected in BERT's generations, being syntactically complex but very vague and repetitive, and ultimately failing to convey a clear message.
+
+Hate speech: The migrant crisis is not the fault of Western Europeans. Migrants are to blame for immigration.
+
+DialoGPT - Topk: What if an entire religion was to blame only for the migration crisis? So you think a large number of Muslims are responsible for all the problems of our world?
+
+BART - Topp: The migrant crisis is caused by a lack of integration and lack of knowledge of the world.
+
+BERT - BS: many migrants come here to work and provide for their families. they are our brothers and sisters, and we should welcome them.
+
+T5 - Topk: What evidence do you have for the evidence that migrants are to blame? Not all migrants have done anything wrong.
+
+GPT-2 - Topk: The EU has put in place laws to combat illegal immigration and to prevent crimes committed by migrants. They have a duty to help stop crime and do what is necessary to solve the problems within their borders. Migrants are very intelligent and do what they can to support their families while they study and work.
+
+Once again, the GPT-2 and DialoGPT generations were chosen as the best CNs by the annotators. Moreover, these CNs are all generated via the $\mathrm{Top}_k$ decoding mechanism. The CNs generated by BART and T5 are acceptable, while requiring some post-editing before being employed. Finally, BERT's output is still very vague: we can notice the presence of the recurring expression "our brothers and sisters".
+
+# A.4 Additional material for LOTO experiments
+
+Table 9 displays the distribution of the examples with respect to the targets, in the reference dataset and in the configurations for the LOTO experiments (Section 5.3).
+
+Table 10 presents the detailed results for the novelty of the reference CNs discussed in Section 5.3, while the RR for the CNs generated with the LOTO models and for the reference CNs is shown in Table 11. The rankings for these two RR computations are the same, and the ranges are almost overlapping. This means that leaving one target out does not impact the intra-corpora repetitiveness: instead, the CNs generated with a LOTO model gain a lower RR than the reference CNs. For the target MUSLIMS a high RR is recorded, both in the candidate and in the reference CNs.
+
+| Target | Samples in original dataset | Samples in LOTO experiment |
| --- | --- | --- |
| JEWS | 594 | 600 |
| LGBT+ | 617 | 600 |
| MIGRANTS | 957 | 600 |
| MUSLIMS | 1335 | 600 |
| WOMEN | 662 | 600 |
| DISABLED | 220 | 220 |
| POC | 352 | 352 |
| other | 266 | 157 |
| Total | 5003 | 3729 |
+
+Table 9: The targets coverage in the reference dataset (Fanton et al., 2021) and in the LOTO configurations.
+
+| Training \ Generation | JEWS | LGBT+ | MIGRANTS | MUSLIMS | WOMEN |
| --- | --- | --- | --- | --- | --- |
| JEWS | - | 0.775 | 0.780 | 0.761 | 0.780 |
| LGBT+ | 0.781 | - | 0.783 | 0.765 | 0.763 |
| MIGRANTS | 0.782 | 0.775 | - | 0.764 | 0.777 |
| MUSLIMS | 0.775 | 0.770 | 0.769 | - | 0.776 |
| WOMEN | 0.789 | 0.771 | 0.783 | 0.775 | - |
+
+Table 10: The novelty of the reference CNs in the data from Fanton et al. (2021) (generation) with respect to the training data for the LOTO models (training).
+
+A high repetitiveness in the data for this target can also contribute to the good results observed on the overlap metrics (Table 3 in Section 5.3): it is easier for two outputs to be similar if they use a limited and repeated set of words.
+
+| Target | RR reference CN | RR candidate CN |
| --- | --- | --- |
| JEWS | 5.071 | 4.796 |
| LGBT+ | 4.489 | 4.620 |
| MIGRANTS | 4.381 | 4.707 |
| MUSLIMS | 5.244 | 5.314 |
| WOMEN | 4.547 | 4.632 |
+
+Table 11: The RR computed on the reference CNs (pertaining to the test set) and on the CNs generated with the LOTO models.
+
+# A.5 APE Experiment Details
+
+The dataset by Fanton et al. (2021) contains three versions of the same CN: the original CN generated by a GPT-2 model $\left(\mathrm{CN}_{or}\right)$ , the expert post-edited versions obtained during the human-in-the-loop cycles $\left(\mathrm{CN}_{pe*}\right)$ , and the final version rechecked by NGO experts $\left(\mathrm{CN}_{pe}\right)$ .
+
+For fine-tuning our APE model, we have thus used the triplets $$ and $$. In this way, we managed to roughly double the number of post-edit training samples, which is highly beneficial for a better model. When we filtered the triplets with a positive TER score between $\mathrm{CN}_{ed}$ and $\mathrm{CN}_{pe}$ , or $\mathrm{CN}_{or}$ and $\mathrm{CN}_{pe}$ , we obtained 4185 training, 596 test, and 568 validation samples, following the partition used in the first set of experiments as described in Section 3.1. Finally, the best fine-tuning configuration of the GPT-2 medium model for APE was obtained with a learning rate of 2e-5 for 3 epochs, resulting in a train loss of 3.34 and an eval loss of 1.23.
\ No newline at end of file
diff --git a/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/images.zip b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1878489ee7a4815aa6890db3ef712562a3844a31
--- /dev/null
+++ b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ae782261801ad6e73647f830638c9854c7a7816cef049927b21fe6c071eb0d2
+size 627792
diff --git a/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/layout.json b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..19eaea11f17f0a2dcfcc7b4418527b37df914893
--- /dev/null
+++ b/usingpretrainedlanguagemodelsforproducingcounternarrativesagainsthatespeechacomparativestudy/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a89ca3232d25f0df36c0884919f6b92cfa5e6c1661994d1c4e544385ee40d225
+size 496983
diff --git a/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/e27c9381-5b82-42d0-be7e-d5fb7f716aea_content_list.json b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/e27c9381-5b82-42d0-be7e-d5fb7f716aea_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..23f714dc9cd41d3dd62bbc09627895cbacfe4add
--- /dev/null
+++ b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/e27c9381-5b82-42d0-be7e-d5fb7f716aea_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2476d4d94a326600a721b6fd91939b96a34c23f233ecfb036b78da90accedc21
+size 87948
diff --git a/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/e27c9381-5b82-42d0-be7e-d5fb7f716aea_model.json b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/e27c9381-5b82-42d0-be7e-d5fb7f716aea_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..32a93d28835ffa4ea08574c2d234a845b1d97a94
--- /dev/null
+++ b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/e27c9381-5b82-42d0-be7e-d5fb7f716aea_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e72b10d707f314eac3b7606654f5e1636f70fabff8d42660f8c78186514db35
+size 110500
diff --git a/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/e27c9381-5b82-42d0-be7e-d5fb7f716aea_origin.pdf b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/e27c9381-5b82-42d0-be7e-d5fb7f716aea_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f926dd2577d0d1514f7bf109a35c067d184ea44b
--- /dev/null
+++ b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/e27c9381-5b82-42d0-be7e-d5fb7f716aea_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dca4619a80e253063d0029fbdd75936e265d18e05441af52e55392c1ae391e6e
+size 530645
diff --git a/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/full.md b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3e2510bde16bb548c67a3f8c293aca5e1cb19886
--- /dev/null
+++ b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/full.md
@@ -0,0 +1,327 @@
+# Virtual Augmentation Supported Contrastive Learning of Sentence Representations
+
+Dejiao Zhang* Wei Xiao Henghui Zhu Xiaofei Ma Andrew O. Arnold AWS AI Labs, New York
+
+# Abstract
+
+Despite profound successes, contrastive representation learning relies on carefully designed data augmentations using domain-specific knowledge. This challenge is magnified in natural language processing, where no general rules exist for data augmentation due to the discrete nature of natural language. We tackle this challenge by presenting a Virtual augmentation Supported Contrastive Learning of sentence representations (VaSCL). Originating from the interpretation that data augmentation essentially constructs the neighborhoods of each training instance, we in turn utilize the neighborhood to generate effective data augmentations. Leveraging the large training batch size of contrastive learning, we approximate the neighborhood of an instance via its K-nearest in-batch neighbors in the representation space. We then define an instance discrimination task regarding the neighborhood and generate the virtual augmentation in an adversarial training manner. We assess the performance of VaSCL on a wide range of downstream tasks and set a new state-of-the-art for unsupervised sentence representation learning.
+
+# 1 Introduction
+
+Universal sentence representation learning has been a long-standing problem in Natural Language Processing (NLP). Leveraging the distributed word representations (Bengio et al., 2003; Mikolov et al., 2013; Collobert et al., 2011; Pennington et al., 2014) as the base features to produce sentence representations is a common strategy in the early stage. However, these approaches are tailored to different target tasks, thereby yielding less generic sentence representations (Yessenalina and Cardie, 2011; Socher et al., 2013; Kalchbrenner et al., 2014; Cho et al., 2014).
+
+This issue has motivated more research efforts on designing generic sentence-level learning objectives or tasks. Among them, supervised learning on the Natural Language Inference (NLI) datasets (Bowman et al., 2015a; Williams et al., 2017; Wang et al., 2018) has established benchmark transfer learning performance on various downstream tasks (Conneau et al., 2017; Cer et al., 2018; Reimers and Gurevych, 2019a; Zhang et al., 2021). Despite promising progress, the high cost of collecting annotations precludes its wide applicability, especially when the target domain has scarce annotations but differs significantly from the NLI datasets (Zhang et al., 2020).
+
+On the other hand, unsupervised learning of sentence representations has seen a resurgence of interest with the recent successes in self-supervised contrastive learning. These approaches rely on two main components, data augmentation and an instance-level contrastive loss. The popular contrastive learning objectives Chen et al. (2020); He et al. (2020) and their variants thereof have empirically shown their effectiveness in NLP. However, the discrete nature of the text makes it challenging to establish universal rules for effective text augmentation generation.
+
+Various contrastive learning based approaches have been proposed for sentence representation learning, where the main difference lies in how the augmentations are generated (Fang and Xie, 2020; Giorgi et al., 2020; Wu et al., 2020; Meng et al., 2021; Yan et al., 2021; Kim et al., 2021; Gao et al., 2021). Somewhat surprisingly, a recent work (Gao et al., 2021) shows that Dropout (Srivastava et al., 2014), i.e., augmentations obtained by feeding the same instance to the encoder twice, outperforms common data augmentations obtained by operating on the text directly, including cropping, word deletion, or synonym replacement. Again, this observation validates the inherent difficulty of attaining effective data augmentations in NLP.
+
+This paper tackles the challenge by presenting a neighborhood-guided virtual augmentation strategy to support contrastive learning. In a nutshell, data augmentation essentially constructs the neighborhoods of each instance, with the semantic content being preserved. We take this interpretation in the opposite direction by leveraging the neighborhood of an instance to guide augmentation generation. Benefiting from the large training batch of contrastive learning, we approximate the neighborhood of an instance via its K-nearest in-batch neighbors. We then define an instance discrimination task within this neighborhood and generate the virtual augmentation in an adversarial training manner. We run in-depth analyses and show that our VaSCL model leads to a more dispersed representation space with the data semantics at different granularities being better captured. We evaluate our model on a wide range of downstream tasks and show that our model consistently outperforms the previous state-of-the-art results by a large margin.
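+To make the neighborhood construction concrete, the following is a minimal sketch of retrieving the K-nearest in-batch neighbors in the representation space (cosine similarity, excluding the instance itself); it illustrates the idea rather than the exact implementation.
+
+```python
+# Illustrative sketch: K-nearest in-batch neighbors via cosine similarity.
+import torch
+import torch.nn.functional as F
+
+def knn_in_batch(embeddings: torch.Tensor, k: int) -> torch.Tensor:
+    """embeddings: (batch_size, dim) sentence representations."""
+    normalized = F.normalize(embeddings, dim=-1)
+    sim = normalized @ normalized.T
+    sim.fill_diagonal_(float("-inf"))     # exclude each instance itself
+    return sim.topk(k, dim=-1).indices    # (batch_size, k) neighbor indices
+
+batch = torch.randn(8, 768)               # a toy batch of sentence embeddings
+print(knn_in_batch(batch, k=3))
+```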
+
+# 2 Related Work
+
+Universal Sentence Representation Learning Arguably, the simplest and most common approaches for attaining sentence representations are bag-of-words (Harris, 1954) and variants thereof. However, bag-of-words suffers from data sparsity and a lack of sensitivity to word semantics. In the past two decades, the distributed word representations (Bengio et al., 2003; Mikolov et al., 2013; Collobert et al., 2011; Pennington et al., 2014) have become the more effective base features for producing sentence representations. The downside is that these approaches are tailored to the target tasks (Yessenalina and Cardie, 2011; Socher et al., 2013; Kalchbrenner et al., 2014; Cho et al., 2014), and thereby the resulting sentence representations attain limited transfer learning performance.
+
+More recent efforts focus on directly designing the sentence-level learning objectives or tasks. On the supervised learning regime, Conneau et al. (2017); Cer et al. (2018) empirically show the effectiveness of leveraging the NLI task (Bowman et al., 2015a; Williams et al., 2017) to promote generic sentence representations. The task involves classifying each sentence pair into one of three categories: entailment, contradiction, or neutral. Reimers and Gurevych (2019b) further bolster the performance by using the pre-trained transformer (Devlin et al., 2018; Liu et al., 2019) as backbone.
+
+On the other end of the spectrum, Hill et al. (2016); Bowman et al. (2015b) propose using denoising or variational autoencoders for sentence representation learning. Kiros et al. (2015); Hill et al. (2016) extend the distributional hypothesis to the sentence level and train an encoder-decoder to reconstruct the surrounding context of each sentence. Alternatively, Logeswaran and Lee (2018) present a model that learns to discriminate the target context sentences from contrastive ones.
+
+Contrastive Learning Contrastive learning has driven recent successes in sentence representation learning. Gao et al. (2021); Zhang et al. (2021) substantially advance the previous state-of-the-art results by leveraging the entailment sentences in NLI as positive pairs for optimizing properly designed contrastive loss functions. Nevertheless, we focus on unsupervised contrastive learning and form the positive pairs via data augmentation, since such methods are more cost-effective and applicable across different domains and languages. Along this line, several approaches have been proposed recently, where the augmentations are obtained via dropout (Yan et al., 2021; Gao et al., 2021), back-translation (Fang and Xie, 2020), surrounding context sampling (Logeswaran and Lee, 2018; Giorgi et al., 2020), or perturbations at different semantic levels (Wu et al., 2020; Yan et al., 2021; Meng et al., 2021).
+
+Consistency Regularization Our work is also closely related to consistency regularization, which is often used to promote better performance by regularizing the model output to remain unchanged under plausible input variations, often induced via data augmentations. Bachman et al. (2014); Sajjadi et al. (2016); Samuli and Timo (2017); Tarvainen and Valpola (2017) show that randomized data augmentations such as dropout, cropping, rotation, and flipping yield effective regularization. Berthelot et al. (2019, 2020); Verma et al. (2019) improve performance by applying Mixup (Zhang et al., 2017) and its variants on top of stochastic data augmentations. However, data augmentation has long been a challenge in NLP, as there are no general rules for effective text transformations. An alternative comes to light when we consider that violations of consistency regularization can, in turn, be used to find the most sensitive perturbation for a model. Therefore, we utilize consistency regularization to promote an informative virtual augmentation for each training instance in the representation space, while leveraging its approximated neighborhood to regularize the augmentation to share similar semantic content with its original instance.
+
+
+Figure 1: Illustration of VaSCL. For each instance $x_{i}$ in a randomly sampled batch, we optimize (i) an instance-wise contrastive loss with the dropout-induced augmentation obtained by forwarding the same instance twice, i.e., $x_{i}$ and $x_{i'}$ denote the same text example; and (ii) a neighborhood-constrained instance discrimination loss with the virtual augmentation proposed in Section 3.2.
+
+# 3 Method
+
+# 3.1 Preliminaries
+
+Self-supervised contrastive learning often aims to solve the instance discrimination task. In our scenario, let $f$ denote the transformer encoder that maps the $i^{\text{th}}$ input sentence $\mathbf{x}_i$ to its representation vector $\mathbf{e}_i = f(\mathbf{x}_i)$. Further let $h$ be the contrastive learning head and $\mathbf{z}_i = h(f(\mathbf{x}_i))$ denote the final output for $\mathbf{x}_i$. Let $\mathcal{B} = \{i, i'\}_{i=1}^{M}$ denote the indices of a randomly sampled batch of paired examples, where $\mathbf{x}_i, \mathbf{x}_{i'}$ are two independent variations of the $i^{\text{th}}$ instance. A popular loss function (Chen et al., 2020) for contrastive learning is defined as follows,
+
+$$
+\ell_{\mathcal{B}}(\mathbf{z}_i, \mathbf{z}_{i'}) = -\log \frac{e^{\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_{i'})/\tau}}{e^{\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_{i'})/\tau} + \sum_{j \in \mathcal{B} \setminus \{i, i'\}} e^{\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_j)/\tau}} \tag{1}
+$$
+
+where $\tau$ is the temperature hyper-parameter and $\mathrm{sim}(\cdot,\cdot)$ denotes the cosine similarity, i.e., $\mathrm{sim}(\mathbf{z}_i, \mathbf{z}_{i'}) = \mathbf{z}_i^{\top}\mathbf{z}_{i'} / (\|\mathbf{z}_i\|_2\|\mathbf{z}_{i'}\|_2)$. Similarly, $\ell_{\mathcal{B}}(\mathbf{z}_{i'},\mathbf{z}_i)$ is defined by exchanging the roles of $\mathbf{z}_i$ and $\mathbf{z}_{i'}$ in the above equation. Intuitively, Equation (1) defines the negative log-likelihood of classifying the $i^{\mathrm{th}}$ instance as its positive $i'$ among all $2M - 1$ candidates within the same batch $\mathcal{B}$. Therefore, minimizing this log-loss guides the encoder to map each positive pair close together in the representation space, and negative pairs further apart.
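+
+For concreteness, the following is a minimal PyTorch sketch of the symmetric in-batch loss of Equation (1). The function and tensor names, the temperature value, and the masking constant are illustrative choices of ours, not the released implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def in_batch_contrastive_loss(z, z_prime, tau=0.05):
+    """Symmetric version of Eq. (1) for a batch of M positive pairs.
+    z, z_prime: (M, d') outputs of the contrastive head for the two views."""
+    z = F.normalize(z, dim=-1)
+    z_prime = F.normalize(z_prime, dim=-1)
+    reps = torch.cat([z, z_prime], dim=0)                  # (2M, d')
+    sim = reps @ reps.t() / tau                            # cosine similarities / temperature
+    # an instance is never contrasted with itself
+    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
+    sim = sim.masked_fill(self_mask, -1e9)
+    m = z.size(0)
+    # the positive of instance i is its second view at index i + M (and vice versa)
+    targets = torch.cat([torch.arange(m, 2 * m), torch.arange(0, m)]).to(sim.device)
+    return F.cross_entropy(sim, targets)
+```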
+
+Dropout-based contrastive learning As Equation (1) implies, the success of contrastive learning relies on the effective construction of positive pairs. However, it is challenging to generate strong and effective data transformations in NLP due to the discrete nature of natural language. This challenge is further demonstrated by recent work (Gao et al., 2021), which shows that augmentations obtained via Dropout (Srivastava et al., 2014), i.e., $\mathbf{z}_i, \mathbf{z}_{i'}$ obtained by forwarding the same instance $\mathbf{x}_i$ twice, outperform common text augmentation strategies such as cropping, word deletion, or synonym replacement. Dropout provides a natural data augmentation by randomly masking the inputs or hidden-layer nodes of the encoder. The effectiveness of Dropout as a pseudo data augmentation can be traced back to Bachman et al. (2014); Samuli and Timo (2017); Tarvainen and Valpola (2017). Nevertheless, the augmentation induced by Dropout alone is weak, leaving room for improvement, which we investigate in the following section.
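+
+The sketch below illustrates how forwarding the same batch twice through a dropout-active encoder yields the two views of each positive pair. The checkpoint name and the first-token pooling are assumptions for illustration; the paper uses pre-trained RoBERTa backbones but does not prescribe this exact setup here.
+
+```python
+import torch
+from transformers import AutoModel, AutoTokenizer
+
+# Assumed checkpoint; any pre-trained RoBERTa-style encoder would do for this sketch.
+tokenizer = AutoTokenizer.from_pretrained("roberta-base")
+encoder = AutoModel.from_pretrained("roberta-base")
+encoder.train()  # keep dropout active so the two passes differ
+
+sentences = ["a sentence from the training corpus", "another sentence"]
+batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
+
+# Two independent forward passes of the *same* inputs form the positive pair:
+# dropout randomly masks different hidden units in each pass.
+e_i = encoder(**batch).last_hidden_state[:, 0]        # first view  (M, d)
+e_i_prime = encoder(**batch).last_hidden_state[:, 0]  # second view (M, d)
+```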
+
+# 3.2 Neighborhood Constrained Contrastive Learning with Virtual Augmentation
+
+In essence, data augmentation can be interpreted as constructing the neighborhood of a training instance, with the semantic content being preserved. In this section, we take this interpretation in the opposite direction and leverage the neighborhood of each instance to generate its augmentation. To be more specific, let $\bar{\mathcal{B}} = \{i\}_{i=1}^{M}$ denote the indices of a randomly sampled batch with $M$ examples. We first approximate the neighborhood $\mathcal{N}(i)$ of the $i^{\mathrm{th}}$ instance as its K-nearest neighbors in the representation space,
+
+$\mathcal{N}(i) = \{k : \mathbf{e}_k$ is among the top-$K$ most similar to $\mathbf{e}_i$ of the other $M-1$ instances in $\bar{\mathcal{B}}\}$.
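+
+A minimal sketch of this neighborhood construction is given below, assuming cosine similarity in the representation space; the function name and the dense similarity matrix are our illustrative choices.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def in_batch_neighbors(e, k):
+    """Indices of the K most similar in-batch instances for each row of e,
+    i.e. an approximation of the neighborhood N(i). e: (M, d)."""
+    e = F.normalize(e, dim=-1)
+    sim = e @ e.t()                                                   # (M, M) cosine similarities
+    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
+    sim = sim.masked_fill(self_mask, float("-inf"))                   # exclude the instance itself
+    return sim.topk(k, dim=-1).indices                                # (M, K) neighbor indices
+```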
+
+We then define an instance-level contrastive loss regarding the $i^{\mathrm{th}}$ instance and its neighborhood as follows,
+
+$$
+\ell_{\mathcal{N}(i)}(\mathbf{z}_i^{\delta}, \mathbf{z}_i) = -\log \frac{e^{\mathrm{sim}(\mathbf{z}_i^{\delta}, \mathbf{z}_i)/\tau}}{e^{\mathrm{sim}(\mathbf{z}_i^{\delta}, \mathbf{z}_i)/\tau} + \sum_{k \in \mathcal{N}(i)} e^{\mathrm{sim}(\mathbf{z}_i^{\delta}, \mathbf{z}_k)/\tau}} \tag{2}
+$$
+
+In the above equation, $\mathbf{z}_i^\delta = h(\mathbf{e}_i^\delta)$ denotes the output of the contrastive learning head with the perturbed representation $\mathbf{e}_i^\delta = \mathbf{e}_i + \delta_i$ as input. Here, the initial perturbation $\delta_{i}$ is chosen as isotropic Gaussian noise. Equation (2) is thus the negative log-likelihood of classifying the perturbed $i^{\mathrm{th}}$ instance as itself rather than as one of its neighbors. The augmentation of the $i^{\mathrm{th}}$ instance is then obtained by identifying the optimal perturbation that maximally disturbs its instance-level identity within the neighborhood. That is,
+
+$$
+\delta_i^{*} = \underset{\|\delta_i\|_2 \leq \Delta}{\arg\max}\; \ell_{\mathcal{N}(i)}(\mathbf{z}_i^{\delta}, \mathbf{z}_i), \qquad \mathbf{e}_{i^{*}} = \mathbf{e}_i + \delta_i^{*} \tag{3}
+$$
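+
+One way to approximate Equations (2)-(3) is projected gradient ascent on the perturbation, sketched below. The number of steps, step size, and initial noise scale are illustrative assumptions of ours; the paper only specifies the norm bound $\Delta$ and the Gaussian initialization.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def neighborhood_loss(z_pert, z_anchor, z_nbrs, tau=0.05):
+    # Eq. (2): classify the (perturbed) instance against its K neighbors.
+    pos = F.cosine_similarity(z_pert, z_anchor, dim=-1) / tau             # (M,)
+    neg = F.cosine_similarity(z_pert.unsqueeze(1), z_nbrs, dim=-1) / tau  # (M, K)
+    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)                    # (M, 1+K)
+    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
+    return F.cross_entropy(logits, targets)
+
+def virtual_augmentation(head, e_i, e_nbrs, delta_max, steps=1, step_size=1.0):
+    # Approximate Eq. (3): find the norm-bounded perturbation that maximally
+    # disturbs the instance's identity within its neighborhood.
+    delta = 1e-3 * torch.randn_like(e_i)           # initial isotropic Gaussian noise
+    delta.requires_grad_(True)
+    z_i = head(e_i).detach()
+    z_nbrs = head(e_nbrs).detach()                 # (M, K, d') neighbor outputs
+    for _ in range(steps):
+        loss = neighborhood_loss(head(e_i + delta), z_i, z_nbrs)
+        grad, = torch.autograd.grad(loss, delta)
+        with torch.no_grad():
+            delta = delta + step_size * grad       # gradient *ascent* on the loss
+            norm = delta.norm(dim=-1, keepdim=True).clamp_min(1e-12)
+            delta = torch.where(norm > delta_max, delta * delta_max / norm, delta)
+        delta.requires_grad_(True)
+    return (e_i + delta).detach()                  # the virtual augmentation e_{i*}
+```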
+
+For the $i^{\mathrm{th}}$ instance, let $\mathcal{N}_{\mathrm{A}}(i)$ denote the augmented neighborhood that consists of its $K$ nearest neighbors and their associated augmentations, i.e., $\mathcal{N}_{\mathrm{A}}(i) = \{k, k^{*}\}_{k = 1}^{K}$, with $\mathbf{e}_k$ and $\mathbf{e}_{k^*}$ denoting the original and augmented representations of the $k^{\mathrm{th}}$ nearest neighbor of instance $i$, respectively. Here, each augmentation $\mathbf{e}_{k^*}$ is obtained by solving Equation (3) with respect to the neighborhood $\mathcal{N}(k)$ of $\mathbf{e}_k$. We then discriminate the $i^{\mathrm{th}}$ instance and its augmentation from the augmented neighborhood $\mathcal{N}_{\mathrm{A}}(i)$,
+
+$$
+\ell_{\mathcal{N}_{\mathrm{A}}(i)} = \ell_{\mathcal{N}_{\mathrm{A}}(i)}(\mathbf{z}_i^{*}, \mathbf{z}_i) + \ell_{\mathcal{N}_{\mathrm{A}}(i)}(\mathbf{z}_i, \mathbf{z}_i^{*}). \tag{4}
+$$
+
+Here both terms on the right-hand side are defined in the same way as Equation (2), with respect to the augmentation $\mathbf{e}_i^{*}$ and the augmented neighborhood $\mathcal{N}_{\mathrm{A}}(i)$ of the $i^{\mathrm{th}}$ instance.
+
+Putting it all together For each randomly sampled minibatch $\bar{\mathcal{B}}$ with $M$ samples, we minimize the following:
+
+$$
+\mathcal{L}_{\mathrm{VaSCL}} = \frac{1}{2M} \sum_{i=1}^{M} \Big\{ \ell_{\bar{\mathcal{B}}}(\mathbf{z}_i, \mathbf{z}_{i'}) + \ell_{\bar{\mathcal{B}}}(\mathbf{z}_{i'}, \mathbf{z}_i) + \ell_{\mathcal{N}_{\mathrm{A}}(i)}(\mathbf{z}_i, \mathbf{z}_i^{*}) + \ell_{\mathcal{N}_{\mathrm{A}}(i)}(\mathbf{z}_i^{*}, \mathbf{z}_i) \Big\} \tag{5}
+$$
+
+The last two terms on the right-hand side are defined in Equation (4). Notice that $\ell_{\bar{\mathcal{B}}}(\mathbf{z}_i,\mathbf{z}_{i'})$ is defined in the same way as Equation (1), except that $\mathbf{z}_i,\mathbf{z}_{i'}$ are obtained by feeding the $i^{\mathrm{th}}$ instance in $\bar{\mathcal{B}}$ to the encoder twice. In summary, two instance discrimination tasks are posed for each training example: i) discriminating each instance and its dropout-induced variation from the other in-batch instances; and ii) separating each instance and its virtual augmentation from its K nearest neighbors and their associated virtual augmentations.
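+
+Tying the pieces together, the sketch below shows one possible training step. It reuses the helper functions and the `encoder`/`head` objects from the earlier sketches; for brevity the neighborhood loss uses only the K neighbors as negatives rather than the full augmented neighborhood $\mathcal{N}_{\mathrm{A}}(i)$ of Equation (4), and `k` and `delta_max` are illustrative values rather than the paper's settings.
+
+```python
+def vascl_step(encoder, head, batch, k=16, delta_max=15.0):
+    e_i = encoder(**batch).last_hidden_state[:, 0]        # first dropout view
+    e_i_prime = encoder(**batch).last_hidden_state[:, 0]  # second dropout view
+    z_i, z_i_prime = head(e_i), head(e_i_prime)
+
+    # (i) in-batch contrastive loss with dropout-induced positives, Eq. (1)
+    loss_batch = in_batch_contrastive_loss(z_i, z_i_prime)
+
+    # (ii) neighborhood-constrained loss with virtual augmentations, Eqs. (2)-(4)
+    nbr_idx = in_batch_neighbors(e_i.detach(), k)         # (M, K) neighbor indices
+    e_nbrs = e_i[nbr_idx]                                  # (M, K, d) neighbor representations
+    e_star = virtual_augmentation(head, e_i.detach(), e_nbrs.detach(), delta_max)
+    z_star, z_nbrs = head(e_star), head(e_nbrs)
+    loss_nbr = neighborhood_loss(z_star, z_i, z_nbrs) + neighborhood_loss(z_i, z_star, z_nbrs)
+
+    return loss_batch + loss_nbr                           # Eq. (5), up to averaging constants
+```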
+
+# 4 Experiment
+
+In this section, we mainly evaluate VaSCL against SimCSE (Gao et al., 2021), which leverages dropout-induced noise (Srivastava et al., 2014) as data augmentation. We show that VaSCL consistently outperforms SimCSE on various downstream tasks that involve semantic understanding at different granularities. We carefully study the regularization effects of VaSCL and empirically demonstrate that VaSCL leads to a more dispersed representation space in which the semantic structure is better encoded. Please refer to Appendix A for details of our implementation and the datasets used.
+
+# 4.1 Evaluation Datasets
+
+In addition to the popular semantic textual similarity (STS) tasks, we evaluate two additional downstream tasks: short text clustering and few-shot intent classification. Our motivation is twofold. First, these two tasks provide a new evaluation aspect that complements the pairwise similarity-oriented STS evaluation by assessing the high-level categorical semantics encoded in the representations. Second, the two tasks pose desirable challenges: short text clustering requires more effective representations because each text example carries only a weak signal, and intent classification often suffers from data scarcity, since intents can vary significantly across dialogue systems and intent examples are costly to collect.
+
+| Model | STS12 | STS13 | STS14 | STS15 | STS16 | SICK-R | STS-B | Avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RoBERTa distil | 54.41 | 46.85 | 56.96 | 65.79 | 64.22 | 61.10 | 59.01 | 58.33 |
+| SimCSE distil | 65.58 | 77.42 | 70.17 | 79.31 | 78.45 | 67.66 | 77.98 | 73.79 |
+| VaSCL distil | 67.68 | 80.61 | 72.19 | 80.92 | 78.59 | 68.81 | 77.32 | 75.16 |
+| RoBERTa base | 53.95 | 47.42 | 55.87 | 64.73 | 63.55 | 62.94 | 58.40 | 58.12 |
+| SimCSE base | 68.88 | 80.46 | 73.54 | 80.98 | 80.68 | 69.54 | 80.29 | 76.34 |
+| VaSCL base | 69.02 | 82.38 | 73.93 | 82.54 | 80.96 | 69.40 | 80.52 | 76.96 |
+| RoBERTa large | 55.00 | 50.14 | 54.87 | 62.14 | 62.99 | 58.93 | 54.56 | 56.95 |
+| SimCSE large | 69.83 | 81.29 | 74.42 | 83.77 | 79.79 | 68.89 | 80.66 | 76.95 |
+| VaSCL large | 73.36 | 83.55 | 77.16 | 83.25 | 80.66 | 72.96 | 82.36 | 79.04 |
+
+Table 1: Spearman rank correlation between the cosine similarity of sentence representation pairs and the ground truth similarity scores.
+
+Semantic Textual Similarity The semantic textual similarity (STS) tasks are the most commonly used benchmark for evaluating sentence representations. STS consists of seven tasks, namely STS 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), the STS Benchmark (Cer et al., 2017), and SICK-Relatedness (Marelli et al., 2014). For each sentence pair in these datasets, a fine-grained similarity score ranging from 0 to 5 is provided.
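+
+A minimal sketch of the STS evaluation protocol (Spearman correlation between cosine similarities and the gold scores, as in Table 1) is shown below; the function name and array shapes are our assumptions.
+
+```python
+import numpy as np
+from scipy.stats import spearmanr
+
+def sts_spearman(emb_a, emb_b, gold_scores):
+    """Spearman rank correlation between the cosine similarity of sentence pairs
+    and their gold similarity scores. emb_a, emb_b: (N, d) arrays of paired embeddings."""
+    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
+    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
+    cosine = (a * b).sum(axis=1)
+    corr, _ = spearmanr(cosine, gold_scores)
+    return corr
+```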
+
+Short Text Clustering Compared with general text clustering, short text clustering poses its own challenge due to the weak signal in each example. Nevertheless, texts containing only a few words are produced at unprecedented rates on a wide range of popular platforms, including Reddit, Stackoverflow, Twitter, and Instagram. Clustering such texts into groups of similar texts plays a crucial role in many real-world applications such as topic discovery (Kim et al., 2013), trend detection (Mathioudakis and Koudas, 2010), and recommendation (Bouras and Tsogkas, 2017). We evaluate six benchmark datasets for short text clustering. As shown in Table 4, the datasets present the desired diversity regarding both the cluster sizes and the number of clusters contained in each dataset.
+
+Intent Classification Intent classification aims to identify the intents of user utterances, which is a critical component of goal-oriented dialog systems. Attaining high intent classification accuracy is an important step towards solving many downstream tasks such as dialogue state tracking (Wu et al., 2019; Zhang et al., 2019) and dialogue management (Gao et al., 2018; Ham et al., 2020). A practical challenge is data scarcity, because different systems define different sets of intents and it is costly to obtain enough utterance samples for each intent. Therefore, few-shot learning has attracted much attention in this scenario, which is also our main focus. We evaluate four intent classification datasets originating from different domains. We summarize the data statistics in Appendix B.1.
+
+# 4.2 Main Results
+
+# 4.2.1 Evaluation Setup
+
+Semantic Textual Similarity. Following Reimers and Gurevych (2019b); Gao et al. (2021), in Table 1 we report the Spearman correlation between the cosine similarity of the sentence representation pairs and the ground truth similarity scores.
+
+Short Text Clustering. We evaluate the sentence representations using K-Means (MacQueen et al., 1967; Lloyd, 1982) given its simplicity, and report the clustering accuracy averaged over ten independent runs in Table 2.
+
+Intent Classification. We freeze the transformer and fine-tune a linear classification layer with the softmax-based cross-entropy loss. We merge the training and validation sets, from which we sample K training and validation samples per class. We report the mean and standard deviation of the testing classification accuracy evaluated over five different splits in Table 3. We set the learning rate to 1e-4 and the batch size to 32. For each task, we train the model for 1000 iterations, evaluate on the validation set every 100 iterations, and report the test accuracy of the checkpoint achieving the best validation accuracy.
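+
+Clustering accuracy requires matching predicted clusters to gold labels; a standard way to do this is the Hungarian algorithm. The sketch below is our illustration of that evaluation, not the released code; it assumes integer labels in [0, n_clusters).
+
+```python
+import numpy as np
+from scipy.optimize import linear_sum_assignment
+from sklearn.cluster import KMeans
+
+def clustering_accuracy(embeddings, labels, n_clusters):
+    """Run K-Means on the sentence embeddings, then score against gold labels
+    after finding the best cluster-to-label assignment (Hungarian algorithm)."""
+    pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
+    counts = np.zeros((n_clusters, n_clusters), dtype=np.int64)
+    for p, y in zip(pred, labels):
+        counts[p, y] += 1
+    row_ind, col_ind = linear_sum_assignment(counts, maximize=True)
+    return counts[row_ind, col_ind].sum() / len(labels)
+```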
+
+| Model | Ag News | Search Snippets | Stack Overflow | Bio-medical | Tweet | Google News | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| RoBERTa distil | 59.32 | 33.18 | 14.16 | 24.69 | 37.10 | 58.05 | 37.75 |
+| SimCSE distil | 73.33 | 60.74 | 66.97 | 35.69 | 50.68 | 67.55 | 59.16 |
+| VaSCL distil | 71.71 | 62.76 | 73.98 | 38.82 | 51.35 | 67.66 | 61.05 |
+| RoBERTa base | 66.50 | 30.83 | 15.63 | 26.98 | 37.80 | 58.51 | 39.38 |
+| SimCSE base | 65.53 | 55.97 | 64.18 | 38.12 | 49.16 | 65.69 | 56.44 |
+| VaSCL base | 68.33 | 47.26 | 76.15 | 39.53 | 51.50 | 67.10 | 58.31 |
+| RoBERTa large | 69.35 | 53.00 | 27.89 | 33.25 | 46.08 | 64.04 | 48.93 |
+| SimCSE large | 62.93 | 51.55 | 54.11 | 35.39 | 50.92 | 67.86 | 53.79 |
+| VaSCL large | 66.09 | 61.57 | 69.04 | 42.91 | 56.74 | 67.75 | 60.68 |
+
+
+Table 2: Clustering accuracy reported on six short text clustering datasets.
+
+| Setting | Model | SNIPS | BANK77 | CLINC150 | HWU64 |
+| --- | --- | --- | --- | --- | --- |
+| 5-Shot | RoBERTa | 76.71±4.84 | 38.77±2.29 | 55.19±1.99 | 51.52±2 |
+| 5-Shot | SimCSE | 76.94±2.53 | 67.48±1.63 | 72.84±1.5 | 66.1±1.9 |
+| 5-Shot | VaSCL | 78.51±1.39 | 70.10±1.76 | 74.23±1.17 | 67.06±2.17 |
+| 10-Shot | RoBERTa | 85.63±2.43 | 46.55±1.84 | 60.55±1.16 | 57.47±0.91 |
+| 10-Shot | SimCSE | 85.14±2.18 | 72.19±0.88 | 77.13±0.76 | 70.87±1.35 |
+| 10-Shot | VaSCL | 84.83±1.05 | 75.25±0.81 | 79.15±0.82 | 72.43±1.12 |
+| 20-Shot | RoBERTa | 88.14±1.54 | 51.65±1.42 | 63.51±1.08 | 60.93±1.27 |
+| 20-Shot | SimCSE | 88.43±1.2 | 75.13±0.78 | 78.59±0.78 | 74.44±0.74 |
+| 20-Shot | VaSCL | 89.11±1.29 | 78.06±0.37 | 81.39±0.60 | 76.39±0.26 |
+
+Table 3: Few-shot learning evaluation of Intent Classification. Each result is aggregated over 5 independent splits. We choose RoBERTa-base as backbone.
+
+# 4.2.2 Evaluation Results
+
+We report the evaluation results in Tables 1, 2, and 3. As we can see, both SimCSE and VaSCL largely improve the performance of the pre-trained language models, while VaSCL consistently outperforms SimCSE on most tasks. To be more specific, we attain a $0.6\% - 2.1\%$ average absolute improvement over SimCSE on the seven STS tasks and a $1.8\% - 6.9\%$ average absolute improvement on the six short text clustering tasks. We also achieve considerable improvement over SimCSE on the intent classification tasks under different few-shot learning scenarios. We do not include the evaluation on ATIS in Table 3, as this dataset is highly imbalanced, with a single class accounting for more than $73\%$ of the data. Please refer to Appendix C for details.
+
+# 4.3 Analysis
+
+To better understand what enables the good performance of VaSCL, we carefully analyze the representations at different semantic granularities.
+
+Neighborhood Evaluation on Categorical Data We first evaluate the neighborhood statistics on StackOverflow (Xu et al., 2017), which contains 20 balanced categories, each with 1000 text instances. For each instance, we retrieve its K nearest (top-K) neighbors in the representation space, among which those from the same class as the instance itself are treated as positives. In Figure 2a, we report both the percentage of true positives and the average distance of an instance to its top-K neighbors. For each top-K value, the evaluation is averaged over all 20,000 instances.
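+
+A minimal sketch of these neighborhood statistics is given below, assuming cosine similarity and a dense similarity matrix for clarity (in practice one may compute it in chunks); the function and variable names are ours.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def neighborhood_stats(e, labels, k):
+    """For each instance, retrieve its top-K neighbors by cosine similarity and report
+    (i) the fraction sharing its class and (ii) the mean cosine distance to them.
+    e: (N, d) embeddings, labels: (N,) integer class ids."""
+    e = F.normalize(e, dim=-1)
+    sim = e @ e.t()
+    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
+    sim = sim.masked_fill(self_mask, float("-inf"))
+    top_sim, top_idx = sim.topk(k, dim=-1)
+    true_positive_rate = (labels[top_idx] == labels.unsqueeze(1)).float().mean().item()
+    mean_distance = (1.0 - top_sim).mean().item()
+    return true_positive_rate, mean_distance
+```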
+
+As indicated by the small distance values reported in Figure 2a, the representation space of the original RoBERTa model is much tighter than those of SimCSE and VaSCL and is incapable of uncovering the categorical structure of the data. In contrast, both VaSCL and SimCSE are capable of scattering representations apart while better capturing the semantic structures. Compared with SimCSE, VaSCL leads to even more dispersed representations with the categorical structure better encoded. This is also demonstrated by the better performance attained on both clustering and few-shot learning, reported in Tables 2 and 3.
+
+Fine-grained Semantic Understanding We then compare VaSCL against SimCSE and RoBERTa on encoding more fine-grained semantic concepts. We randomly sample 20,000 premises from the combined set of SNLI (Bowman et al., 2015a) and MNLI (Williams et al., 2017), and for each premise we also sample its associated entailment and contradiction hypotheses.
+
+
+(a) Neighborhood evaluation on StackOverflow. Instances from the same category are treated as true positives.
+
+
+(b) Fine-grained semantics encoding evaluation on NLI.
+Figure 2: VaSCL leads to a more dispersed representation space with the data structure better uncovered.
+
+In Figure 2b, we report the distributions of the pairwise distances of the entailment and contradiction pairs (left), and plot the distance of each premise to its entailment hypothesis against that to its contradiction hypothesis (right).
+
+We observe the same trend: both SimCSE and VaSCL separate different instances well apart in the representation space while better discriminating each premise's entailment hypothesis from its contradiction hypothesis. Figure 2b also shows that VaSCL outperforms SimCSE in capturing fine-grained semantics while separating different instances apart. This advantage is further validated by Table 1, where VaSCL consistently outperforms SimCSE on the STS tasks, which require pairwise semantic inference at an even more fine-grained scale.
+
+# 4.4 Explicit Data Augmentation
+
+To better evaluate our virtual augmentation-oriented VaSCL model, we compare it against different explicit data augmentation strategies that operate directly on the discrete text. Specifically, we consider the following approaches: WDel (random word deletion) removes words from the input text at random; WNet (WordNet synonym substitution) transforms a text instance by replacing its words with WordNet synonyms (Morris et al., 2020; Ren et al., 2019); and CTxt (contextual synonym substitution) leverages pre-trained transformers to find the top-n suitable words of the input text for substitution (Kobayashi, 2018). For each strategy, we evaluate three augmentation strengths by changing $5\%$, $10\%$, and $20\%$ of the words of each text instance. For a positive pair $(x_i, x_{i'})$, $x_i$ denotes the original text and $x_{i'}$ is the associated augmentation. We also explore the case where both $x_i$ and $x_{i'}$ are transformations of the original text, which we find yields worse performance.
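+
+As a concrete illustration of the simplest of these strategies, the sketch below implements word deletion at a given rate; the function name and the rule of keeping at least one word are our assumptions (the WordNet and contextual variants can be built with the TextAttack toolkit cited above).
+
+```python
+import random
+
+def word_deletion(text, rate=0.1, seed=None):
+    """WDel: drop roughly `rate` of the words of the input text at random,
+    keeping at least one word so the augmented example is never empty."""
+    rng = random.Random(seed)
+    words = text.split()
+    if len(words) <= 1:
+        return text
+    kept = [w for w in words if rng.random() >= rate]
+    return " ".join(kept) if kept else rng.choice(words)
+```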
+
+Virtual Augmentation Performs Better The performance of explicit text augmentation is evaluated using the standard dropout for training, i.e., "SimCSE w/ {WDel/WNet/CTxt}" in Figure 3. As Figure 3a shows, contrastive learning with moderate explicit text augmentations, i.e., with augmentation strength below $20\%$, does yield better sentence representations than the original RoBERTa model. Nevertheless, both virtual augmentation strategies, i.e., SimCSE and VaSCL, substantially outperform all three explicit text augmentation strategies on almost all downstream tasks. Although a bit surprising, especially considering the performance gap between SimCSE and the explicit augmentations, this comparison provides a new perspective on the underlying challenge of designing effective transformations that operate directly on discrete text.
+
+VaSCL Outperforms SimCSE Figure 3a also empirically demonstrates that VaSCL outperforms SimCSE both with and without explicit text augmentations. The only exception occurs when the explicit augmentation strength is too large, i.e., when $20\%$ of the words of each text are perturbed. One possible explanation is that large perturbations applied directly to discrete text introduce undesired noise, which can violate the coherent semantics maintained by a neighborhood and hence make it hard for VaSCL to generate effective virtual augmentations.
+
+New Linguistic Patterns Are Required Another observation drawn from Figure 3a is that both SimCSE and VaSCL attain worse performance on most downstream tasks when combined with explicit text augmentations. Although VaSCL does improve the performance of explicit augmentations in most cases, this outcome is undesirable, as we would expect a win-win outcome in which moderate explicit augmentations further enhance VaSCL. We hypothesize that the new and informative linguistic patterns required for such a gain are missing.
+
+
+(a) Virtual augmentation vs. explicit augmentation. For each downstream task, we report the mean performance averaged over all its subtasks. The explicit augmentations are evaluated using SimCSE (dropout) for training, i.e., "SimCSE w/ {WDel/WNet/CTxt}".
+
+
+
+
+
+Figure 3: Comparing and combining virtual augmentation with explicit augmentation.
+
+(b) Cosine similarity between each original training example and its augmentation evaluated on the representation spaces of different models. From left to right, the augmentations are obtained via WDel, WNet, and CTxt. Each point is averaged over 20,000 randomly sampled training examples and their augmentations. We exclude "SimCSE w/WNet" and "VaSCL w/WNet" for better visualization. Please refer to Figure 4 in Appendix for the full plot.
+
+
+
+
+
+
+To validate our hypothesis, in Figure 3b we report the cosine similarity between each original training example and its augmentation, evaluated in the representation spaces of different models. Our observation is twofold. First, the representations induced by RoBERTa and by the model trained with contextual synonym substitution ("SimCSE w/ CTxt") are very similar in all three settings, which also explains why "SimCSE w/ CTxt" attains performance similar to RoBERTa on the downstream tasks. We attribute this to the fact that CTxt leverages the transformer itself to generate augmentations, which hence carry limited unseen and effective linguistic patterns. Second, as indicated by the comparatively smaller similarity values in Figure 3b, the incorporation of explicit augmentations tightens the representation spaces of both SimCSE and VaSCL, which also results in worse performance on the downstream tasks. One possible explanation is that all three explicit augmentations are weak and noisy, which harms both the instance discrimination force and the semantic relevance of each neighborhood.
+
+# 5 Conclusion
+
+In this paper, we present a virtual augmentation-oriented contrastive learning framework for unsupervised sentence representation learning. Our key insight is that data augmentation can be interpreted as constructing the neighborhoods of each training instance, which can, in turn, be leveraged to generate effective data augmentations. We evaluate VaSCL on a wide range of downstream tasks and substantially advance the state-of-the-art results. Moreover, we conduct in-depth analyses and show that VaSCL leads to a more dispersed representation space with the data semantics at different granularities being better encoded.
+
+On the other hand, we observe a performance drop for both SimCSE and VaSCL when they are combined with explicit text augmentations. We suspect this is caused by the linguistic patterns generated by explicit augmentations being less informative yet noisy. We hypothesize that effective augmentation operations on discrete text could complement our virtual augmentation approach if they generate new and informative linguistic patterns.
+
+# References
+
+Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, et al. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 252-263.
+Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 81-91.
+Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497-511, San Diego, CA. Association for Computational Linguistics.
+Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385-393.
+Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity. In Second joint conference on lexical and computational semantics (*SEM), volume 1: proceedings of the Main conference and the shared task: semantic textual similarity, pages 32-43.
+Philip Bachman, Ouais Alsharif, and Doina Precup. 2014. Learning with pseudo-ensembles. Advances in neural information processing systems, 27:3365-3373.
+Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The journal of machine learning research, 3:1137-1155.
+David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. 2020. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In International Conference on Learning Representations.
+
+David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. 2019. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249.
+Christos Bouras and Vassilis Tsogkas. 2017. Improving news articles recommendations via user clustering. International Journal of Machine Learning and Cybernetics, 8(1):223-237.
+Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015a. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.
+Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015b. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.
+Inigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. Efficient intent detection with dual sentence encoders. arXiv preprint arXiv:2003.04807.
+Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.
+Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709.
+Kyunghyun Cho, Bart Van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
+Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.
+Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364.
+Alice Coucke, Alaa Saade, Adrien Ball, Theodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+Hongchao Fang and Pengtao Xie. 2020. Cert: Contrastive self-supervised learning for language understanding. arXiv preprint arXiv:2005.12766.
+Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational ai. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1371-1374.
+Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821.
+John M Giorgi, Osvald Nitski, Gary D Bader, and Bo Wang. 2020. Declutr: Deep contrastive learning for unsupervised textual representations. arXiv preprint arXiv:2006.03659.
+Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue systems using gpt-2. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 583-592.
+Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146-162.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738.
+Charles T Hemphill, John J Godfrey, and George R Doddington. 1990. The atis spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.
+Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1367-1377, San Diego, California. Association for Computational Linguistics.
+Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188.
+Hwi-Gang Kim, Seongjoo Lee, and Sunghyon Kyeong. 2013. Discovering hot topics using twitter streaming data social topic detection and geographic clustering. In 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2013), pages 1215-1220. IEEE.
+
+Taeuk Kim, Kang Min Yoo, and Sang-goo Lee. 2021. Self-guided contrastive learning for bert sentence representations. arXiv preprint arXiv:2106.07345.
+Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. arXiv preprint arXiv:1506.06726.
+Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. arXiv preprint arXiv:1805.06201.
+Stefan Larson, Anish Mahendran, Joseph J Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K Kummerfeld, Kevin Leach, Michael A Laurenzano, Lingjia Tang, et al. 2019. An evaluation dataset for intent classification and out-of-scope prediction. arXiv preprint arXiv:1909.02027.
+Xingkun Liu, Arash Eshghi, Pawel Swietojanski, and Verena Rieser. 2021. Benchmarking natural language understanding services for building conversational agents. In Increasing Naturalness and Flexibility in Spoken Dialogue Interaction, pages 165-183. Springer.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Stuart Lloyd. 1982. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137.
+Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. arXiv preprint arXiv:1803.02893.
+James MacQueen et al. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281-297. Oakland, CA, USA.
+Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. A sick cure for the evaluation of compositional distributional semantic models. In Lrec, pages 216-223. Reykjavik.
+Michael Mathioudakis and Nick Koudas. 2010. Twittermonitor: trend detection over the twitter stream. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of data, pages 1155-1158.
+
+Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul Bennett, Jiawei Han, and Xia Song. 2021. Coco-lm: Correcting and contrasting text sequences for language model pretraining. arXiv preprint arXiv:2102.08473.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.
+John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp.
+James Munkres. 1957. Algorithms for the assignment and transportation problems. Journal of the society for industrial and applied mathematics, 5(1):32-38.
+Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W.
+Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.
+Xuan-Hieu Phan, Le-Minh Nguyen, and Susumu Horiguchi. 2008. Learning to classify short and sparse text & web with hidden topics from large-scale data collections. In Proceedings of the 17th international conference on World Wide Web, pages 91-100.
+Md Rashadul Hasan Rakib, Norbert Zeh, Magdalena Jankowska, and Evangelos Milios. 2020. Enhancement of short text clustering by iterative classification. arXiv preprint arXiv:2001.11631.
+Nils Reimers and Iryna Gurevych. 2019a. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084.
+Nils Reimers and Iryna Gurevych. 2019b. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics.
+Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 1085-1097.
+Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. 2016. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. Advances in neural information processing systems, 29:1163-1171.
+
+Laine Samuli and Aila Timo. 2017. Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations (ICLR), volume 4, page 6.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958.
+Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. arXiv preprint arXiv:1703.01780.
+Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Yoshua Bengio, and David Lopez-Paz. 2019. Interpolation consistency training for semi-supervised learning. arXiv preprint arXiv:1903.03825.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
+Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.
+Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. arXiv preprint arXiv:1905.08743.
+Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. Clear: Contrastive learning for sentence representation. arXiv preprint arXiv:2012.15466.
+Jiaming Xu, Bo Xu, Peng Wang, Suncong Zheng, Guanhua Tian, and Jun Zhao. 2017. Self-taught convolutional neural networks for short text clustering. Neural Networks, 88:22-31.
+Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. arXiv preprint arXiv:2105.11741.
+Ainur Yessenalina and Claire Cardie. 2011. Compositional matrix-space models for sentiment analysis.
+
+In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 172-182.
+
+Jianhua Yin and Jianyong Wang. 2016. A model-based approach for text clustering with outlier detection. In 2016 IEEE 32nd International Conference on Data Engineering (ICDE), pages 625-636. IEEE.
+
+Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O Arnold, and Bing Xiang. 2021. Pairwise supervised contrastive learning of sentence representations. arXiv preprint arXiv:2109.05424.
+
+Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2017. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412.
+
+Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wan, Philip S Yu, Richard Socher, and Caiming Xiong. 2019. Find or classify? dual strategy for slot-value predictions on multi-domain dialog state tracking. arXiv preprint arXiv:1910.03544.
+
+Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. arXiv preprint arXiv:1502.01710.
+
+Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, and Lidong Bing. 2020. An unsupervised sentence embedding method by mutual information maximization. arXiv preprint arXiv:2009.12061.
+
+# A Implementation
+
+Following the original SimCSE work (Gao et al., 2021), we adopt $10^{6}$ randomly sampled sentences from English Wikipedia as training data.
+
+We implement our models with PyTorch (Paszke et al., 2017). We use pre-trained RoBERTa models as the backbone. We use a two-layer MLP of size $(d\times d, d\times 128)$ as the contrastive learning head, where $d$ denotes the dimension of the sentence representations. We use Adam (Kingma and Ba, 2015) as our optimizer with a constant learning rate of 5e-4, which we scale down to 5e-6 for updating the backbone transformers. We set the virtual augmentation strength of VaSCL, i.e., $\Delta$ in Equation (3), to 15 for both DistilRoBERTa and RoBERTa-base, and to 30 for RoBERTa-large.
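+
+The sketch below shows one way to set up the contrastive head and the two-group optimizer described above. The ReLU between the two linear layers and the checkpoint name are assumptions, since the text only specifies the layer sizes and learning rates.
+
+```python
+import torch
+import torch.nn as nn
+from transformers import AutoModel
+
+encoder = AutoModel.from_pretrained("roberta-base")
+d = encoder.config.hidden_size  # 768 for RoBERTa-base
+
+# Two-layer MLP contrastive head of size (d x d, d x 128); activation is an assumption.
+head = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 128))
+
+# Adam with a constant 5e-4 learning rate for the head, scaled down to 5e-6 for the backbone.
+optimizer = torch.optim.Adam([
+    {"params": head.parameters(), "lr": 5e-4},
+    {"params": encoder.parameters(), "lr": 5e-6},
+])
+```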
+
+We train SimCSE (Gao et al., 2021) with a learning rate of 3e-5 for both the contrastive learning head and the backbone. We also tried the default learning rate of 1e-5 (suggested by Gao et al. (2021)) as well as our own learning rate setup for optimizing the RoBERTa models with SimCSE, and found that 3e-5 yields better performance. For both SimCSE and VaSCL, we set the batch size to 1024, train all models for five epochs, and evaluate on the development set of STS-B every 500 iterations. We report all evaluations on the downstream tasks with the checkpoints attaining the best performance on the STS-B validation set.
+
+# B Dataset Statistics
+
+# B.1 Intent Classification Dataset
+
+We evaluate our model on four intent classification datasets: (1) SNIPS (Coucke et al., 2018) is an SLU benchmark that consists of 7 distinct intents. (2) BANKING77 (Casanueva et al., 2020) is a large fine-grained single-domain (banking) intent dataset with 77 intent classes. (3) HWU64 (Liu et al., 2021) contains 25,716 examples for 64 intents in 21 domains. (4) CLINC150 (Larson et al., 2019) spans 150 intents and 23,700 examples across 10 domains. SNIPS is limited to only a small number of classes, which oversimplifies the intent detection task and does not emulate the true environment of commercial systems; the remaining three datasets contain much more diversity and are more challenging.
+
+# B.2 Short Text Clustering Dataset
+
+| Dataset | N | $\bar{W}$ | C | ImN |
+| --- | --- | --- | --- | --- |
+| AgNews | 8.0K | 23 | 4 | 1 |
+| SearchSnippets | 12.3K | 18 | 8 | 7 |
+| StackOverflow | 20K | 8 | 20 | 1 |
+| Biomedical | 20K | 13 | 20 | 1 |
+| GoogleNews | 11.1K | 28 | 152 | 143 |
+| Tweet | 2.5K | 8 | 89 | 249 |
+
+Table 4: Statistics of six short text clustering datasets. N: number of text samples; $\bar{W}$ : average number of words each text example has; C: number of clusters; ImN: imbalance number defined as the size of the largest class divided by that of the smallest class.
+
+- **SearchSnippets** is extracted from web search snippets and contains 12,340 snippets associated with 8 groups (Phan et al., 2008).
+- **StackOverflow** is a subset of the challenge data published by Kaggle, where 20,000 question titles associated with 20 different categories are selected by Xu et al. (2017).
+
+
+(a) Evaluating VaSCL in the presence of different explicit data augmentation strategies.
+
+
+
+
+
+
+(b) Cosine similarity between the representations of each original training example and its augmentation evaluated on different models. From left to right, the augmentations are obtained via WDel, WNet, and CTxt. Each point is averaged over 20,000 randomly sampled training examples.
+
+
+Figure 4: Comparing and combining virtual augmentation with explicit text augmentations. (Full plot of Figure 3 in Section 4.4.)
+
+
+
+- **Biomedical** is a subset of PubMed data distributed by BioASQ, where 20,000 paper titles from 20 groups are randomly selected by Xu et al. (2017).
+- **AgNews** is a subset of news titles (Zhang and LeCun, 2015), which contains 4 topics selected by Rakib et al. (2020).
+- **Tweet** consists of 89 categories with 2,472 tweets in total (Yin and Wang, 2016).
+- **GoogleNews** contains titles and snippets of 11,109 news articles related to 152 events (Yin and Wang, 2016).
+
+# C Full Evaluation of Intent Classification
+
+ATIS (Hemphill et al., 1990) is a benchmark for the air travel domain. This dataset is highly imbalanced, with the largest class containing $73\%$ of all the training and validation examples. Moreover, more than $60\%$ of the classes have fewer than 20 examples. We therefore exclude this task from our evaluation.
\ No newline at end of file
diff --git a/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/images.zip b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d5b921969929782e1a7e9a58ef586bd2512c7020
--- /dev/null
+++ b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:787020883c40cb6b475a7d6a00c756e8281d51389689c3cb96178de771db2bc5
+size 556574
diff --git a/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/layout.json b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c1b0f36d791e80aaece28ba09cc0a856d4d492e7
--- /dev/null
+++ b/virtualaugmentationsupportedcontrastivelearningofsentencerepresentations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9472ba21d39d1f7161849629e02396be87ef47df8ace12da3b038e17de225915
+size 427671
diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_content_list.json b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..09c1269cde8b0aa6bfa2a7e80ea5ed4ed27d921b
--- /dev/null
+++ b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1fcef02565674fa448ba34032aa96a7f80f34326ab6382550520368faf1c90b4
+size 72833
diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_model.json b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..18179c5c7cadd8de1703f6b0b1e8f49c42599ae6
--- /dev/null
+++ b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a86a6d9069381ecc2ba8bf31c18f8212e55b0de05e858fcb291f9c38907f5f1
+size 89189
diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_origin.pdf b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3b823f1c9d1856894625d7d033ac3c1a25b4bca5
--- /dev/null
+++ b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/1b07c5cd-6c0d-416a-85c4-4f4153c49b67_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68405f294496435133e98df33d4bfb034859f691ef7905029cd817b865ba0d94
+size 1640211
diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/full.md b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..161a8ee07250ff7ecc72a5a16a107ec49f960fcb
--- /dev/null
+++ b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/full.md
@@ -0,0 +1,242 @@
+# VISITRON: Visual Semantics-Aligned Interactively Trained Object-Navigator
+
+Ayush Shrivastava1*, Karthik Gopalakrishnan2, Yang Liu2, Robinson Piramuthu2, Gokhan Tur2, Devi Parikh1, Dilek Hakkani-Tur2
+
+1Georgia Tech, 2Amazon Alexa AI
+
+{ayshrv, parikh}@gatech.edu
+
+{karthgop, yangliud, robinpir, gokhatur, hakkanit}@amazon.com
+
+# Abstract
+
+Interactive robots navigating photo-realistic environments need to be trained to effectively leverage and handle the dynamic nature of dialogue in addition to the challenges underlying vision-and-language navigation (VLN). In this paper, we present VISITRON, a multi-modal Transformer-based navigator better suited to the interactive regime inherent to Cooperative Vision-and-Dialog Navigation (CVDN). VISITRON is trained to: i) identify and associate object-level concepts and semantics between the environment and dialogue history, ii) identify when to interact vs. navigate via imitation learning of a binary classification head. We perform extensive pre-training and fine-tuning ablations with VISITRON to gain empirical insights and improve performance on CVDN. VISITRON's ability to identify when to interact leads to a natural generalization of the gameplay mode introduced by Roman et al. (2020) for enabling the use of such models in different environments. VISITRON is competitive with models on the static CVDN leaderboard and attains state-of-the-art performance on the Success weighted by Path Length (SPL) metric.
+
+# 1 Introduction
+
+Large pre-trained Transformer-based language models (Vaswani et al., 2017) are ubiquitous in natural language processing (NLP) and have performed very well in interactive settings such as open-domain (Gopalakrishnan et al., 2019; Huang et al., 2020) and task-oriented dialogue (Kim et al., 2020). The success of Transformers and the pretrain/fine-tune paradigm in NLP has also inspired their adoption in vision-and-language research, with cross-modal representations being learned (Li et al., 2020) and utilized towards tasks like image and object captioning, visual question answering, visual commonsense reasoning and visual dialogue.
+
+
+Figure 1: Cooperative Vision-and-Dialog Navigation (CVDN) with Dynamic Question-Asking
+
+Vision-and-language navigation (VLN) is a challenging cross-modal research task in which agents need to learn to navigate in response to natural language instructions in simulated photo-realistic environments. VLN has been studied extensively with the advent of the Room-to-Room (R2R) dataset (Anderson et al., 2018b) and there has been growing interest recently in pushing the pre-train/fine-tune paradigm towards VLN, with work on leveraging disembodied corpora (Majumdar et al., 2020) to learn cross-modal pre-trained representations that can improve embodied VLN performance. As depicted in Figure 1, the Cooperative Vision-and-Dialog Navigation (CVDN) dataset (Thomason et al., 2020) allows for dialogue with a guide during navigation: a navigator can ask natural language questions to a guide when it needs assistance and the guide responds in natural language by using privileged knowledge of the environment accessible only to it, thus expanding beyond the traditional VLN task towards deployable interactive agents that are more robust and generalizable. But preliminary navigator modeling using CVDN is still VLN-style via the Navigation from Dialog History (NDH) task, treating the dialogue history as a static instruction.
+
+In this paper, we present work on training VISITRON, a multi-modal Transformer-based navigator with a focus on tackling challenges unique to CVDN: i) moving beyond rote memorization to associative learning in order to learn to identify and acquire visio-linguistic concepts and semantics while interacting in new environments, and ii) learning when to ask questions (Chi et al., 2020). VISITRON builds off the recent cross-modal object-semantics aligned pre-training (OSCAR) strategy and uses object-tags as explicit anchor points during training to learn to associate the environment's visual semantics with the textual dialogue history, thus allowing for interaction/experience-grounded (Bisk et al., 2020) visio-linguistic concepts and semantics identification and acquisition. VISITRON is trained in a data-driven fashion to identify when to engage in dialogue, i.e., ask questions, vs. when to navigate, thus providing the first known empirical baselines for this task. We also present empirical results from various first-principles modeling ablations performed with VISITRON. We demonstrate that for CVDN, panoramic viewpoint selection is a better formulation than discrete turn-based action prediction, akin to what has been seen on VLN with R2R (Fried et al., 2018). We observe that multi-task learning with long-trajectory VLN datasets leads to significant CVDN performance gains relative to training on CVDN alone. VISITRON is competitive with models on the leaderboard for the static NDH task on EvalAI (Yadav et al., 2019), attaining state-of-the-art performance on the Success weighted by Path Length (SPL) metric. Given VISITRON's design and ability to identify when to engage in dialogue, we also propose a generalization of the game-play mode introduced by Roman et al. (2020) for jointly fine-tuning and evaluating VISITRON and future such models with pre-trained guides to help them easily adapt to their guides' capabilities.
+
+# 2 Background
+
+# 2.1 Vision-and-Language Navigation
+
+The Vision-and-Language Navigation (VLN) task requires an agent spawned in an indoor environment at a starting position $s_0$ to follow natural language instructions $\pmb{x}$ and navigate to a target position $s_{goal}$. This can also be seen as a Partially Observable Markov Decision Process $\mathcal{M} = \langle S, \mathcal{A}, P_s, r \rangle$, where $S$ is the visual state space, $\mathcal{A}$ is the discrete action space, $P_{s}$ is the unknown environment distribution from which the next state is drawn, and $r\in \mathbb{R}$ is the reward function (Hao et al., 2020). At a given time step $t$, the agent receives an RGB image observation $obs(s_{t})$, where $s_t\in S$. Based on the observation, the agent takes an action $a_{t}\in \mathcal{A}$, transitions into the next state $s_{t + 1}\sim P_s(\cdot |s_t,a_t)$, and receives a new image observation $obs(s_{t + 1})$. To end the episode, the agent must select the special STOP action. A $T$-step trajectory can be represented as $\pmb{\tau} = [s_0,a_0,s_1,a_1,\dots ,s_T,a_T]$. The episode is considered successful if the agent stops within $\epsilon$ distance of the goal, i.e., $|s_T - s_{goal}|\leq \epsilon$. Using a training dataset $\mathcal{D} = \{(\pmb{\tau},\pmb{x})\}$ consisting of pairs of expert trajectories $\pmb{\tau}$ and instructions $\pmb{x}$, the goal is to train a policy $\pi_{\theta}(\pmb{\tau}|\pmb{x})$ with parameters $\pmb{\theta}$ that maximizes the log-likelihood of the target trajectory given the instructions $\pmb{x}$:
+
+$$
+\begin{aligned} \max_{\boldsymbol{\theta}} \; \mathbb{E}_{(\boldsymbol{\tau}, \boldsymbol{x}) \sim \mathcal{D}} \, \mathcal{L}_{\boldsymbol{\theta}}(\boldsymbol{\tau}, \boldsymbol{x}), \quad \mathcal{L}_{\boldsymbol{\theta}}(\boldsymbol{\tau}, \boldsymbol{x}) &= \log \pi_{\boldsymbol{\theta}}(\boldsymbol{\tau} \mid \boldsymbol{x}) \\ &= \sum_{t=0}^{T} \log \pi_{\boldsymbol{\theta}}(\boldsymbol{a}_{t} \mid \boldsymbol{s}_{t}, \boldsymbol{x}) \end{aligned} \tag{1}
+$$
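+As a concrete illustration of Equation 1, the sketch below accumulates the per-step log-probabilities of the expert actions. It is a minimal, hypothetical PyTorch sketch (tensor names such as `action_logits` and `expert_actions` are illustrative, not from the paper), assuming the policy exposes per-step action logits conditioned on $(s_t, \boldsymbol{x})$:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def trajectory_nll(action_logits, expert_actions):
+    """Negative log-likelihood of an expert trajectory (cf. Eq. 1).
+
+    action_logits:  [T, A] unnormalized scores from pi_theta at each step.
+    expert_actions: [T] indices of the expert (ground-truth) actions a_t.
+    """
+    log_probs = F.log_softmax(action_logits, dim=-1)            # [T, A]
+    picked = log_probs.gather(1, expert_actions.unsqueeze(1))   # [T, 1]
+    return -picked.sum()   # minimizing this NLL maximizes the objective in Eq. 1
+```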
+
+Several datasets have been released for VLN based on Matterport3D (Chang et al., 2017), a large-scale RGB-D dataset containing $\sim 10000$ panoramic views from $\sim 194000$ RGB-D images of 90 building-scale scenes. The most popular VLN dataset based on Matterport3D is the Room-to-Room (R2R) dataset (Anderson et al., 2018b), containing $\sim 7200$ trajectories and 3 natural language instructions per trajectory. For validation and test sets, seen and unseen splits are created to easily evaluate how well an agent generalizes. Room-4-Room (R4R) (Jain et al., 2019) is an augmentation of R2R wherein existing short trajectories in R2R are joined to form longer, challenging trajectories. Room-across-Room (RxR) (Ku et al., 2020) is a newly introduced dataset with several properties, including but not limited to multilingual instructions, larger scale (for each language, $\sim 14000$ trajectories with 3 instructions per trajectory), fine-grained spatio-temporal grounding and follower demonstrations.
+
+A navigating agent's actions typically belong to a pre-defined discrete set comprising options such as FORWARD, LEFT, RIGHT, etc. Predicting the next best action from this low-level visuomotor space of actions (Fried et al., 2018) is referred to as turn-based action prediction. Given the nature of the aforementioned VLN datasets, it is also possible to have a navigating agent's actions belong to the panoramic space, wherein the agent selects the next best viewpoint in the navigation graph from the panoramic space visible to it at its current location. This is referred to as viewpoint selection.
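+As a rough sketch of the difference, viewpoint selection scores the navigable viewpoints visible from the current node rather than a fixed set of motion primitives. The snippet below is a hypothetical sketch (a simple dot-product scorer; not the exact VISITRON decoder), assuming candidate viewpoint features and a decoder state are available:
+
+```python
+import torch
+
+def select_viewpoint(decoder_state, candidate_feats, candidate_mask):
+    """Pick the next viewpoint from the panoramic action space.
+
+    decoder_state:   [H] current decoder hidden state.
+    candidate_feats: [K, H] features of the K navigable viewpoints (plus STOP).
+    candidate_mask:  [K] boolean mask, True for valid candidates.
+    """
+    scores = candidate_feats @ decoder_state                  # [K] dot-product scores
+    scores = scores.masked_fill(~candidate_mask, float("-inf"))
+    return torch.argmax(scores)                               # index of the chosen viewpoint
+```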
+
+# 2.2 Cooperative Vision-and-Dialog Navigation
+
+Cooperative Vision-and-Dialog Navigation (CVDN) is a recently introduced dataset (Thomason et al., 2020) collected by partnering crowd-workers in simulated photo-realistic environments. One worker acts as a NAVIGATOR, seeking to navigate to a goal and interacting in natural language with a GUIDE along the way if it needs assistance. The other worker acts as a GUIDE, answering the NAVIGATOR's questions while having privileged access to the best next steps the NAVIGATOR should take according to an ORACLE full-state shortest path planner. The collection of each CVDN instance begins with the state $(S, T_{O}, s_{0}, G)$, where $S$ is the environment in which the agents are placed, $s_{0}$ is the start location of the NAVIGATOR, $G$ is the goal region and $T_{O}$ is the initial hint given to both agents about the goal region containing object $O$. At any time step $t$, the NAVIGATOR can make one of three choices: i) take a sequence of $k_{t}$ navigation steps $N_{t} = [n_{t}^{1}, n_{t}^{2}, \ldots, n_{t}^{k_{t}}]$, ii) ask a question $Q_{t}$ to the GUIDE, iii) declare its current position as the goal region. If a question is asked, the GUIDE looks at the $l$ next steps along the shortest path to the goal and replies with an answer $A_{t}$. The instance ends when the NAVIGATOR reaches $G$. Thus, a CVDN instance comprises $\left[(S, T_{O}, s_{0}, G), \langle N_{0}, Q_{1}, A_{1}, N_{1}, Q_{2}, A_{2}, N_{2}, \ldots, Q_{m}, A_{m}, N_{m} \rangle\right]$, where $m$ is the number of dialogue exchanges between the NAVIGATOR and GUIDE, and $N_{0}$ is the sequence of navigation steps before the $1^{\text{st}}$ exchange.
+
+# 2.2.1 Navigation from Dialog History (NDH)
+
+With the CVDN dataset, the NDH task for the NAVIGATOR was introduced (Thomason et al., 2020), in which the NAVIGATOR needs to navigate towards a goal given a dialogue history. Specifically, the NAVIGATOR is spawned at the terminal position of $N_{t-1}$ (or $s_0$ in the case of $N_0$) in environment $S$ and is given $(T_O, Q_{1:t}, A_{1:t})$. The task is to predict the navigation steps that bring the agent closer to the goal region $G$. To train a NAVIGATOR agent for this task, the navigation steps used for supervision can be provided in any of three forms: i) human NAVIGATOR steps, $N_{t}$: the navigation steps that were taken by the human NAVIGATOR after the dialogue exchange at time step $t$, ii) ORACLE steps, $O_{t}$: the shortest path steps accessible to the GUIDE when it gave the answer $A_{t}$, iii) MIXED: a mix of both human NAVIGATOR and ORACLE supervision, where the supervision path is $N_{t}$ when $e(O_{t}) \in N_{t}$, and $O_{t}$ otherwise, with $e(\cdot)$ denoting the terminal position of a sequence of navigation steps. The NAVIGATOR agent is trained VLN-style using Equation 1 on NDH instances extracted as described above from the CVDN instances, and evaluated on NDH instances using VLN metrics such as Goal Progress and Success weighted by Path Length (SPL), defined in Section 4.1. In the CVDN literature, it has been observed that MIXED supervision typically performs best, followed by ORACLE and human NAVIGATOR supervision respectively. However, for all our experiments, we pick the human NAVIGATOR supervision mode to establish a lower bound on performance for VISITRON.
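+The MIXED supervision rule above can be summarized in a few lines. This is a minimal sketch with hypothetical path representations (a path is assumed to be a list of viewpoint ids), not the dataset's actual preprocessing code:
+
+```python
+def mixed_supervision(navigator_path, oracle_path):
+    """Return the supervision path for a given time step t.
+
+    Uses the human NAVIGATOR path N_t when the ORACLE's terminal
+    viewpoint e(O_t) lies on N_t, and the ORACLE path O_t otherwise.
+    """
+    oracle_end = oracle_path[-1]          # e(O_t)
+    if oracle_end in navigator_path:
+        return navigator_path             # N_t
+    return oracle_path                    # O_t
+```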
+
+# 2.2.2 Gameplay Mode
+
+In the CVDN dataset, a human NAVIGATOR cooperates with a human GUIDE to find a goal region $G$ with target object $O$. Roman et al. (2020) introduced the game-play mode, which is essentially an agent-agent replica of this dynamic dataset creation process, wherein two trained agents consume each other's outputs. This mode can be applied during both fine-tuning and evaluation and helps understand how well a pre-trained NAVIGATOR agent adapts to the capabilities of different GUIDE agents in a dynamic/interactive setting. For consistency with the game-play mode notation introduced by Roman et al. (2020), we denote the question-asking role intrinsic to the NAVIGATOR by QUESTIONER. Thus, in a game-play mode episode, at $t = 0$ (prior to the first QA exchange), the NAVIGATOR takes $N_{0}$ steps given the initial hint $T_{O}$. For time steps $t > 0$, the QUESTIONER generates a question $Q_{t}$, the GUIDE generates an answer $A_{t}$ while having access to the next $l$ steps in the shortest path, and the NAVIGATOR then generates $N_{t}$ navigation steps of length $k_{t}$. All agents have access to the entire visual navigation $(N_{0:t-1})$ and dialogue $(Q_{1:t-1}, A_{1:t-1})$ histories in addition to the initial hint $T_{O}$. The QUESTIONER asks questions every $4^{\text{th}}$ time-step, a heuristic hard-coded by Roman et al. (2020) since their NAVIGATOR does not know when to ask questions. The episode ends when the NAVIGATOR declares that the current position is in the goal region $G$ or a maximum number of turns (20) is reached. The NAVIGATOR's performance in game-play mode is measured using Goal Progress (see Section 4.1). While the focus of our work is not to train a QUESTIONER, we ensure our NAVIGATOR is equipped with the ability to identify when to ask questions. This leads to our proposed general game-play mode, wherein the aforementioned description of a regular game-play mode episode still holds but the hard-coded heuristic of asking questions every $4^{\text{th}}$ time-step is eliminated, i.e., the NAVIGATOR decides when a question must be asked to continue game-play.
+
+# 2.3 OSCAR
+
+The OSCAR pre-training strategy (Li et al., 2020) for cross-modal Transformers uses object tags detected in images as anchor points to ease the learning of semantic alignments between images and text. The input is represented as Word-Tag-Image $(\boldsymbol{w}, \boldsymbol{q}, \boldsymbol{v})$ , where $\boldsymbol{w}$ and $\boldsymbol{q}$ are the sequence of word embeddings of the text and object tags respectively, and $\boldsymbol{v}$ is the sequence of region features of the image. To generate $\boldsymbol{v}$ , Faster R-CNN (Ren et al., 2015) is used to extract visual semantics of each region as $(v', z)$ where $v' \in \mathbb{R}^P$ ( $P = 2048$ ) is the region feature, $z \in \mathbb{R}^6$ is the region position represented by the coordinates of the top-right and bottom-left corners and the height & width. $v'$ and $z$ are concatenated to form a position-sensitive region feature, which is further transformed into $v$ using a projection layer such that $v$ has the same dimension as the input token embeddings. It is then pre-trained with a Masked Token Loss (MTL) and a Contrastive Loss (CL).
+
+$$
+\begin{aligned} \mathcal{L}_{\text{Pre-training}} &= \mathcal{L}_{MTL} + \mathcal{L}_{CL} \\ &= -\mathbb{E}_{(\boldsymbol{v}, \boldsymbol{h}) \sim \mathcal{D}} \log p\left(h_{i} \mid \boldsymbol{h}_{\backslash i}, \boldsymbol{v}\right) - \mathbb{E}_{(\boldsymbol{h}^{\prime}, \boldsymbol{w}) \sim \mathcal{D}} \log p\left(y \mid f(\boldsymbol{h}^{\prime}, \boldsymbol{w})\right) \end{aligned}
+$$
+
+The MTL is akin to that in BERT (Devlin et al., 2019), masking the input tokens $(\boldsymbol{w}, \boldsymbol{q})$ with a probability of $15\%$ and predicting them. The CL is computed by polluting the object tags $\boldsymbol{q}$ with a probability of $50\%$ with randomly chosen object tags from the dataset; a feed-forward layer on top of [CLS] then predicts whether the input contains the original image representation or a polluted one. In the previous equation, $\pmb{h} = [\pmb{w},\pmb{q}]$, $\pmb{h}' = [\pmb{q},\pmb{v}]$, $h_{\backslash i}$ denotes the tokens surrounding the masked token $h_i$, $f(\cdot)$ denotes the binary classifier where $y = 0$ if the object tags are polluted and 1 otherwise, and $\mathcal{D}$ is the dataset. OSCAR uses a collection of popular image-text datasets for pre-training, including but not limited to Conceptual Captions (Sharma et al., 2018), MS-COCO (Lin et al., 2014), Flickr30K (Young et al., 2014) and GQA (Hudson and Manning, 2019). Such datasets typically contain images of objects taken from ideal angles, whereas a navigating agent sees objects from varied vantage points, which further motivates augmenting OSCAR and performing an additional phase of navigation-specific pre-training.
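+For concreteness, the polluted-tag construction behind the contrastive loss described above can be sketched as follows. This is an illustrative sketch (the function name and the dataset-wide `tag_pool` are assumptions, not OSCAR's released code), assuming the pool is larger than any single tag sequence:
+
+```python
+import random
+
+def make_contrastive_example(words, tags, tag_pool, pollute_prob=0.5):
+    """Build one (input, label) pair for a polluted-tag contrastive loss.
+
+    With probability `pollute_prob`, the object tags are replaced by tags
+    sampled from the dataset-wide pool; the label y is 0 if polluted and
+    1 if the original tags are kept.
+    """
+    if random.random() < pollute_prob:
+        polluted = random.sample(tag_pool, k=len(tags))
+        return (words, polluted), 0
+    return (words, tags), 1
+```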
+
+# 3 Approach
+
+The policy for NDH (and VLN) can be decomposed into an encoder-decoder setup, $\pi_{\pmb{\theta}} = f_{\pmb{\theta}_{D}}\circ f_{\pmb{\theta}_{E}}$:
+
+- A vision-language encoder $f_{\theta_E}:\{s_{1:t},x\} \to z_t$ where $s_{1:t}$ are visual states, $x$ is the dialogue history (or instructions for VLN) and $z_{t}$ is the joint latent representation at time step $t$ .
+- An action decoder $f_{\theta_D} : \{s_t, z_t, a_{t-1}\} \to a_t$ , where $a_t$ is the next action.
+
+Figure 2: VISITRON's Encoder Architecture and Semantics-Aligned Navigation Pre-Training Tasks
+
+We model $\pi_{\theta}$ by VISITRON, a visio-linguistic Transformer-based model. VISITRON's encoder is structurally similar to OSCAR's Transformer (Li et al., 2020). This is by design, to enable easy transfer of visual semantics-aligned representations learned from disembodied image-text data. We make navigation-specific modifications to OSCAR, but they are all structured as augmentations of modules rather than removal of network components, thus enabling us to use the pre-trained weights of OSCAR's Transformer to initialize large portions of our encoder. The augmentations are described in Section 3.1. As with OSCAR, the input to VISITRON's encoder is represented as Word-Tag-Image $(w, q, v)$, where $w$ and $q$ are the sequences of word embeddings of the text and object tags respectively, and $v$ is the sequence of region features of the image. We represent the panorama as 36 views, extract Faster R-CNN (Ren et al., 2015) region features $r'$ from each view and add a positional vector $p$, giving $r = (r', p)$. To incorporate 3D direction, we add a direction embedding $d$ to the region features, $v = r + d$. $d$ is a 128-dimensional orientation vector formed by repeating $[\sin \phi; \cos \phi; \sin \omega; \cos \omega]$ 32 times, where $\phi$ and $\omega$ are the heading and elevation angles. In addition to the standard [CLS] and [SEP], we also use [TAR], [NAV], [GUI] as delimiter tokens for the initial target hint, the NAVIGATOR's questions and the GUIDE's answers respectively. While this input structure is dialogue-specific, it is amenable to instruction-based datasets for multi-tasking.
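+The region-feature construction described above can be sketched as follows. This is a hypothetical sketch (the projection layer, the padding of $d$ to the token dimension, and all tensor names are assumptions the paper does not spell out):
+
+```python
+import math
+import torch
+
+def build_region_features(region_feats, positions, heading, elevation, proj):
+    """Fuse Faster R-CNN region features with position and direction.
+
+    region_feats: [N, 2048] region features r' from the 36-view panorama.
+    positions:    [N, P] positional vectors p for each region.
+    heading, elevation: scalars (phi, omega) for the current view.
+    proj: nn.Linear projecting [2048 + P] to the encoder's token dimension.
+    """
+    r = torch.cat([region_feats, positions], dim=-1)     # r = (r', p)
+    v = proj(r)                                          # project to token dimension
+    d = torch.tensor([math.sin(heading), math.cos(heading),
+                      math.sin(elevation), math.cos(elevation)]).repeat(32)  # 128-dim
+    # assumption: zero-pad d up to the token dimension before adding (v = r + d)
+    d = torch.nn.functional.pad(d, (0, v.size(-1) - d.numel()))
+    return v + d
+```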
+
+# 3.1 VISITRON Pre-Training
+
+We adopt a two-stage pre-training strategy: we initialize VISITRON's encoder with weights from OSCAR so that it starts from web-scale, disembodied visio-linguistic representations, and then facilitate a domain shift to navigation and actions by pre-training on navigation data. For each navigation trajectory, we extract $(\boldsymbol{w}, \boldsymbol{q}, \boldsymbol{v}, \boldsymbol{a})$ tuples, where $\boldsymbol{w}$ is the dialogue history/instruction, $\boldsymbol{q}$ is the sequence of object tags from the current panorama, $\boldsymbol{v}$ is the sequence of region features and $\boldsymbol{a}$ is the direction in the $360^{\circ}$ panoramic space where the next node in the trajectory is located (Fried et al., 2018). The pre-training objectives are:
+
+1. Masked Language Modeling: Input word tokens are replaced with [MASK] with $15\%$ probability and the masked token $x_{i}$ is predicted conditioned on surrounding tokens $x_{\backslash i}$ .
+2. Masked Object Tag Prediction: Object tags are replaced with [MASK] with $15\%$ probability. A feed-forward head on top of [MASK] is used to predict the tag from a distribution over Faster R-CNN semantic classes. This provides more fine-grained object supervision than OSCAR's global masked token loss over both object tags and text, since it computes a distribution over the object detector's semantic classes instead of over the entire input vocabulary.
+
+3. Directional Grounding: The [CLS] hidden state is fed into a feed-forward head to predict $\boldsymbol{a}$.
+
+Figure 2 illustrates VISITRON's encoder architecture and the pre-training objectives we use, with an extracted tuple from a sample NDH instance.
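+A schematic of how the three objectives could be combined during the navigation-specific pre-training stage is given below. It is a hedged sketch (head names are hypothetical, and treating directional grounding as classification over discretized directions is an assumption), not the exact training code:
+
+```python
+import torch.nn as nn
+
+class PretrainingHeads(nn.Module):
+    """Three heads on top of the VISITRON encoder (hidden size H).
+
+    - lm_head:  predicts masked word tokens over the full vocabulary.
+    - tag_head: predicts masked object tags over Faster R-CNN classes.
+    - dir_head: predicts the panoramic direction of the next node from [CLS].
+    """
+    def __init__(self, hidden, vocab_size, num_obj_classes, num_directions):
+        super().__init__()
+        self.lm_head = nn.Linear(hidden, vocab_size)
+        self.tag_head = nn.Linear(hidden, num_obj_classes)
+        self.dir_head = nn.Linear(hidden, num_directions)
+        self.ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks unmasked positions
+
+    def forward(self, hidden_states, cls_state, word_labels, tag_labels, dir_label):
+        # hidden_states: [B, L, H], cls_state: [B, H]
+        loss = self.ce(self.lm_head(hidden_states).transpose(1, 2), word_labels)
+        loss = loss + self.ce(self.tag_head(hidden_states).transpose(1, 2), tag_labels)
+        loss = loss + self.ce(self.dir_head(cls_state), dir_label)
+        return loss
+```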
+
+# 3.2 VISITRON Fine-Tuning
+
+After pre-training the encoder, we leverage it with an attention-based Long Short-Term Memory (LSTM) action decoder (Hochreiter and Schmidhuber, 1997), as shown in Figure 3. At time-step $t$ , the decoder (cell state $d_t$ ) takes the previous action $a_{t-1}$ , the panoramic ResNet features extracted from the current location/state and decodes the next action $a_t$ , while attending to the VISITRON encoder's cross-modal representation of its input. After this LSTM is fine-tuned, the same stack is frozen and a randomly initialized two-layer feedforward head is added and trained with a binary cross-entropy loss to learn to classify when to ask a question. The supervision for this head comes from the elongated CVDN instances defined in Section 2.2, with time-steps when a question was asked serving as positive labels and the remaining time-steps during which navigation occurs serving as negative labels. Note that as described in Section 2.1, the decoder's actions can belong in either the panoramic space or the low-level visuomotor space (Fried et al., 2018), leading to independent formulations for viewpoint selection and turn-based action prediction.
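+The question-asking classifier described above can be sketched as follows, under the assumption that the frozen navigator exposes a per-step hidden state; the class and variable names are illustrative, not the released implementation:
+
+```python
+import torch.nn as nn
+
+class WhenToAskHead(nn.Module):
+    """Two-layer feed-forward head trained with binary cross-entropy on top of
+    the frozen navigator stack: ask a question vs. keep navigating."""
+    def __init__(self, hidden):
+        super().__init__()
+        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
+                                 nn.Linear(hidden, 1))
+
+    def forward(self, step_state):                 # [B, H] frozen decoder state at step t
+        return self.mlp(step_state).squeeze(-1)    # logit: > 0 means "ask a question"
+
+# Training step (navigator parameters stay frozen; only the head is updated):
+# loss = nn.BCEWithLogitsLoss()(head(step_state), asked_question_label.float())
+```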
+
+# 4 Experiments
+
+In this section, we first describe the evaluation metrics we adopt.
+
+Table 1: Pre-Training Ablations (Fine-Tuning and Evaluating on NDH)
+
+| # | Stage 1 (Web, OSCAR): Contrastive + Masked LM | Stage 1: Object Tags | Stage 2 (Navigation): Masked LM | Stage 2: Masked Object Tag Prediction | Stage 2: Directional Grounding | Val Seen GP (m) ↑ | Val Seen SPL (%) ↑ | Val Seen SR (%) ↑ | Val Seen nDTW (%) ↑ | Val Unseen GP (m) ↑ | Val Unseen SPL (%) ↑ | Val Unseen SR (%) ↑ | Val Unseen nDTW (%) ↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 (no pre-training, no object tags) | | | | | | 4.76 | 36.56 | 46.07 | 30.97 | 2.09 | 9.96 | 22.49 | 6.50 |
+| 2 | ✓ | | | | | 4.82 | 50.73 | 58.11 | 47.34 | 2.67 | 24.88 | 34.29 | 24.21 |
+| 3 | ✓ | ✓ | | | | 4.38 | 45.15 | 52.09 | 41.14 | 2.30 | 13.03 | 24.81 | 8.63 |
+| 4 | ✓ | ✓ | ✓ | | | 5.09 | 25.92 | 41.10 | 17.91 | 1.90 | 11.27 | 23.48 | 5.62 |
+| 5 | ✓ | ✓ | ✓ | ✓ | | 4.83 | 48.22 | 56.02 | 47.01 | 2.70 | 24.04 | 32.86 | 23.46 |
+| 6 | ✓ | ✓ | ✓ | ✓ | ✓ | 5.34 | 55.16 | 61.78 | 54.83 | 2.71 | 24.56 | 32.52 | 24.51 |
+
+
+Figure 3: NAVIGATOR predicts navigation actions, given dialogue history and visual observations. The same stack decides when to ask the GUIDE a question. A similar setup can be used for question generation.
+
+We then describe and discuss our experimental observations from ablations performed during VISITRON pre-training and fine-tuning, respectively. We present our observations for question-asking classification on CVDN, establishing a strong baseline for future models. We finally present and discuss our observations from submitting our model checkpoints to the static EvalAI leaderboard for CVDN.
+
+# 4.1 Evaluation Metrics
+
+We evaluate VISITRON's ability to navigate to the goal with the following metrics:
+
+- Goal Progress (GP) measures the difference between the distance from the start position to the final goal and the distance from the end position to the final goal. It is used to determine how much progress in meters the agent has made towards the final goal.
+- Success weighted by (Normalized Inverse) Path Length (SPL), introduced by Anderson et al. (2018a), provides a measure of success normalized by the ratio between the length of the shortest path and the length of the selected path (a small computational sketch of GP and SPL follows this list).
+
+- Success Rate (SR) measures the success of an episode. If the agent stops within 3 meters of the goal, it is considered a success.
+- Normalized Dynamic Time Warping (nDTW) introduced by Ilharco et al. (2019) helps measure a navigator agent's fidelity to the dialogue history/instruction by softly penalizing deviations from the reference path.
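+The per-episode computation of GP and SPL can be sketched as below. This is a minimal sketch under the standard definitions (function and argument names are illustrative):
+
+```python
+def goal_progress(dist_start_to_goal, dist_end_to_goal):
+    """Goal Progress in meters: how much closer the agent ended up to the goal."""
+    return dist_start_to_goal - dist_end_to_goal
+
+def spl(success, shortest_path_len, taken_path_len):
+    """Success weighted by Path Length for one episode (Anderson et al., 2018a)."""
+    if not success:
+        return 0.0
+    return shortest_path_len / max(taken_path_len, shortest_path_len)
+```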
+
+We evaluate the question-asking classification head by computing accuracy and balanced accuracy (Brodersen et al., 2010). The latter accounts for the natural class imbalance of more navigation time-steps than question-asking time-steps expected in dialogue-based navigation by computing the average of recall obtained on each class.
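+Concretely, balanced accuracy is the mean of per-class recall; a minimal sketch (plain Python, illustrative names):
+
+```python
+def balanced_accuracy(y_true, y_pred):
+    """Average of per-class recall over the classes (here: ask vs. navigate)."""
+    classes = sorted(set(y_true))
+    recalls = []
+    for c in classes:
+        relevant = [i for i, t in enumerate(y_true) if t == c]
+        hits = sum(1 for i in relevant if y_pred[i] == c)
+        recalls.append(hits / len(relevant))
+    return sum(recalls) / len(recalls)
+```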
+
+# 4.2 Pre-Training Ablations
+
+Using NDH and R2R trajectories, we pre-train VISITRON as described in Section 3.1. We begin experimenting with cumulative addition of each pre-training stage and objective to obtain an ablative understanding of their effect on the downstream NDH task. Results are shown in Table 1. We see that our pre-training strategy helps: the best performance on Val Seen (as measured by all metrics) is obtained when using all pre-training stages and objectives. We also see that Goal Progress (GP) is highest on Val Unseen in this setting (an absolute increase of 0.62 relative to no pre-training). Rows 3-4 demonstrate the efficacy of our second-stage masked language modeling (MLM) task, helping improve Val Seen GP from 4.38 to 5.09.
+
+Table 2: Fine-Tuning Ablations
+
+| # | Action Space | Multi-Task Fine-Tuning (NDH + ...) | Val Seen GP (m) ↑ | Val Seen SPL (%) ↑ | Val Seen SR (%) ↑ | Val Seen nDTW (%) ↑ | Val Unseen GP (m) ↑ | Val Unseen SPL (%) ↑ | Val Unseen SR (%) ↑ | Val Unseen nDTW (%) ↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | Turn-based Action Prediction | ✗ | 1.15 | 9.66 | 11.78 | 26.86 | 1.60 | 13.02 | 14.77 | 29.28 |
+| 2 | Turn-based Action Prediction | ✓ (RxR) | 1.50 | 12.30 | 15.18 | 19.95 | 0.97 | 11.52 | 15.44 | 20.49 |
+| 3 | Viewpoint Selection | ✗ | 5.34 | 55.16 | 61.78 | 54.83 | 2.71 | 24.56 | 32.52 | 24.51 |
+| 4 | Viewpoint Selection | ✓ (RxR) | 5.11 | 12.33 | 25.65 | 4.66 | 3.25 | 10.74 | 27.34 | 3.78 |
+
+Rows 4-5 demonstrate the efficacy of our newly introduced masked object tag prediction task as a means towards experience-driven identification and acquisition of concepts and semantics, with significant increases in all metrics across both validation seen and unseen splits. Rows 5-6 show that our directional grounding task for pre-training the encoder plays a particularly important role: the increases in both GP and nDTW suggest that this task improves VISITRON's ability to navigate closer to the goal while ensuring that dialogue fidelity is maintained, by aligning encoder representations with the direction along the reference path.
+
+# 4.3 Fine-Tuning Ablations
+
+Next, we perform ablations during fine-tuning, leveraging all objectives from Table 1 since our previous analysis demonstrated their effectiveness. For VLN agents, it has been shown that viewpoint selection in the panoramic space is a better formulation than turn-based action prediction in the low-level visuomotor space (Fried et al., 2018). However, it is not immediately obvious whether this extrapolates to dialogue-based navigation as in CVDN, so we experiment with both formulations for our NAVIGATOR. Given the sparsity of NDH instances $(\sim 4k)$ for fine-tuning, we also study whether multi-task fine-tuning with the RxR dataset helps boost performance. Table 2 presents the fine-tuning ablation results. Rows 1 and 3 demonstrate that panoramic viewpoint selection is a better formulation than turn-based action prediction for CVDN, with all metrics increasing significantly when switching to viewpoint selection. Further, we see in rows 3 and 4 that multi-task fine-tuning leads to better CVDN generalization, with Val Unseen GP increasing from 2.71 to 3.25 when multi-tasking with viewpoint selection. However, this increase in GP occurs alongside a decrease in nDTW, SPL and SR. The decrease can be attributed to the fact that the RxR dataset has very long trajectories, which prime the model to take long paths to the final CVDN goal (which GP cares about), well beyond the next 5 GUIDE steps in the NDH instance that nDTW, SPL and SR evaluate against.
+
+# 4.4 Question-Asking Classification and Leaderboard Evaluation
+
+We pick the VISITRON model checkpoint with the highest GP in Table 2 (row 4), and perform imitation learning of the question-asking classification head as described in Section 3.2. We evaluate the classification head by creating elongated CVDN instances from the validation sets as described in Section 2.2, akin to how supervision was provided during training: time-steps when a question was asked serve as positive instances and the remaining timesteps during which navigation occurs serve as negative instances. As seen in Table 3, our approach to identifying when to ask questions vs. when to navigate establishes a strong baseline for future work on identifying when to ask questions with CVDN, as measured by accuracy and balanced accuracy on Val Unseen. It is important to note that our design choice of adding and training a separate head for this task while keeping the navigator stack frozen ensures that there is no direct impact on navigation performance itself. This is unlike approaches that perform direct navigation action space augmentation with a special action for question-asking, where navigation actions themselves are affected by the presence of an additional competing variable for shared total probability mass.
+
+Table 3: Question-Asking Classification Performance
+
+| Metric (%) | Val Seen | Val Unseen |
+| --- | --- | --- |
+| Accuracy | 68.05 | 67.87 |
+| Balanced Accuracy | 63.33 | 61.09 |
+
+We submitted this model checkpoint to the CVDN leaderboard for the static NDH task. We observe in Table 4 that this checkpoint's performance is competitive with state-of-the-art models, with a hidden test GP of 3.11. However, its low hidden test SPL of 12 reflects the impact that multi-task fine-tuning with long RxR paths had on this checkpoint's ability to take short paths to the goal, as discussed in Section 4.3. Given this expected decrease in SPL when utilizing such long trajectories, we also created a model checkpoint by multi-task fine-tuning VISITRON on NDH, R2R and R4R. This checkpoint obtains a state-of-the-art SPL of 25, alongside an associated decrease in GP to 2.40.
+
+Table 4: NDH Hidden Test Set Performance
+
+| # | Method | GP (m) ↑ | SPL (%) ↑ |
+| --- | --- | --- | --- |
+| 1 | MT-RCM + EnvAg (Wang et al., 2020) | 3.91 | 17 |
+| 2 | BabyWalk (Zhu et al., 2020b) | 3.65 | 11 |
+| 3 | VISITRON | 3.11 | 12 |
+| 4 | Cross-modal Memory Network (Zhu et al., 2020c) | 2.95 | 14 |
+| 5 | PREVALENT (Hao et al., 2020) | 2.44 | 24 |
+| 6 | VISITRON (Best SPL) | 2.40 | 25 |
+
+# 5 Related Work
+
+Vision-and-language pre-training (Tan et al., 2019; Lu et al., 2019; Sun et al., 2019; Chen et al., 2020; Zhou et al., 2020) has grown to become a popular area of research, primarily aimed at solving downstream tasks such as image captioning, visual question answering and image retrieval. This line of work typically involves learning cross-modal representations using self-supervised objectives with a co-attention Transformer that fuses the two modalities represented by input token embeddings and visual region features, where the latter is typically sourced from Faster R-CNN (Ren et al., 2015).
+
+Research in vision-and-language navigation (VLN) has also seen tremendous progress (Fried et al., 2018; Ke et al., 2019; Anderson et al., 2019; Tan et al., 2019; Zhu et al., 2020a) since the advent of the Room-to-Room (R2R) dataset (Anderson et al., 2018b) based on Matterport3D (Chang et al., 2017), with scope for further advances only increasing with the recent release of the much larger, densely annotated and multilingual Room-across-Room (RxR) dataset (Ku et al., 2020). As an extension of VLN, the recent Cooperative Vision-and-Dialog Navigation (CVDN) dataset (Thomason et al., 2020) allows for training interactive navigator and guide agents. The dominant focus of research with CVDN so far has been the Navigation from Dialog History (NDH) task introduced with CVDN, which is equivalent to treating the dialogue history as a fixed VLN-style instruction. The NDH formulation allows for easy transfer and multi-task learning (Hao et al., 2020; Wang et al., 2020; Zhang et al., 2020) with VLN. However, state-of-the-art VLN models such as VLN-BERT (Majumdar et al., 2020) rely on the fully-observable setting when framing the task as ahead-of-time path selection, which is fundamentally at odds with the need for dialogue in CVDN: dialogue is aimed at enabling the navigating agent to succeed while it makes navigation decisions and decides when it needs assistance. The recent Recursive Mental Model (RMM) (Roman et al., 2020) for CVDN attempts to address this by introducing a simulated dialogue game-play mode, where a trained navigator is fine-tuned jointly with a pre-trained guide and evaluated in this mode. However, the RMM navigator does not dynamically ask questions, instead relying on a hard-coded heuristic of asking questions after every 4th navigation time-step. VISITRON's design naturally leads to a generalization of this game-play mode that eliminates the aforementioned heuristic.
+
+Our work is similar to recent work (Hao et al., 2020) on leveraging pre-trained cross-modal representations for the NDH task. However, our work takes on added goals of learning when to ask questions and associative learning of visio-linguistic concepts and semantics to ensure they can be identified and acquired when interacting in new environments, which are key requirements for full cooperative vision-and-dialogue navigation.
+
+# 6 Conclusion and Future Work
+
+We presented VISITRON, a Transformer-based navigator designed to identify and acquire visio-linguistic concepts and semantics and make decisions, all key traits for interactive navigation inherent to CVDN. We demonstrated the efficacy of our approach via experiments and ablations. We proposed generalizing the game-play regime introduced with RMM (Roman et al., 2020) to enable interactive fine-tuning and evaluation of VISITRON-like models with pre-trained guides. The trade-off between GP and SPL in dialogue-based navigation, Sim-to-Real transfer (Anderson et al., 2021) and robustness in dialogue-based navigation in presence of speech recognition errors (Gopalakrishnan et al., 2020) are all important problems that merit detailed investigation in future work.
+
+# 7 Societal Impact
+
+The primary dataset of interest for our work on interactive navigation in photo-realistic indoor environments, Cooperative Vision-and-Dialog Navigation (CVDN), is an English-only dataset. We also multi-task with several other datasets, namely R2R, R4R and RxR, of which only RxR is multilingual, covering English, Hindi and Telugu. Because CVDN is English-only, we used only the English portion of the RxR data during multi-task fine-tuning. There are over 6500 known languages spoken in the world today, and vision-and-dialog navigation research could, in principle, be deployed in every home in the world, but due to current data limitations it can only be deployed in English-speaking homes. Our modeling methods should transfer to other languages given a sufficient volume of data, but obtaining such data might not be possible for low-resource or endangered languages. VISITRON may benefit from new training schemes and modeling improvements to account for such scenarios. When deployed in real homes, speech would be the primary modality for most humans to interact with such robots. While speech recognition research has advanced considerably, ensuring accurate speech recognition across diverse speaker populations and accents is still challenging. Errors in speech recognition could impact VISITRON's ability to navigate accurately, so making VISITRON robust to speech recognition errors will be necessary, potentially via augmentation of the language component of its training data with synthetic and actual speech recognition errors (Gopalakrishnan et al., 2020).
+
+During navigation, VISITRON needs access to neighboring viewpoints to select from. Each environment in CVDN contains an underlying navigation graph which provides this information, which might not be the case in real unseen environments. In its absence, additional modules can be added that generate a local navigation graph based on the surroundings (Anderson et al., 2021). Datasets in the vision-and-language navigation space such as R2R and CVDN typically consider the environment to be static. Obstacle avoidance methods need to be added to models built using these datasets to avoid hazardous collisions in a dynamic environment, such as with moving humans and pets.
+
+Large language models are known to have a high carbon footprint associated with training them (Strubell et al., 2019). VISITRON is about the same size as BERT (Devlin et al., 2019), which is now ubiquitously used in both academic and industrial settings and can be trained reasonably fast. The carbon footprint of this work was maintained within permissible limits by using a maximum of 8 Tesla V100 GPUs for training.
+
+# Acknowledgments
+
+Many thanks to Jesse Thomason and Aishwarya Padmakumar for useful technical discussions and actionable feedback on multiple versions of this paper. We would also like to thank the anonymous reviewers for their service and useful feedback.
+
+# References
+
+Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, et al. 2018a. On evaluation of embodied navigation agents. arXiv preprint arXiv:1807.06757.
+Peter Anderson, Ayush Shrivastava, Devi Parikh, Dhruv Batra, and Stefan Lee. 2019. Chasing ghosts: Instruction following as bayesian state tracking. In Advances in Neural Information Processing Systems, pages 371-381.
+Peter Anderson, Ayush Shrivastava, Joanne Truong, Arjun Majumdar, Devi Parikh, Dhruv Batra, and Stefan Lee. 2021. Sim-to-real transfer for vision-and-language navigation. In Conference on Robot Learning, pages 671-681. PMLR.
+Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sunderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018b. Vision- and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3674-3683.
+Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8718-8735, Online. Association for Computational Linguistics.
+Kay Henning Brodersen, Cheng Soon Ong, Klaas Enno Stephan, and Joachim M. Buhmann. 2010. The balanced accuracy and its posterior distribution. In Proceedings of the 2010 20th International Conference on Pattern Recognition, ICPR '10, page 3121-3124, USA. IEEE Computer Society.
+Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. 2017. Matterport3d: Learning from rgb-d data in indoor environments. International Conference on 3D Vision (3DV).
+
+Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In European conference on computer vision, pages 104-120. Springer.
+Ta-Chung Chi, Minmin Shen, Mihail Eric, Seokhwan Kim, and Dilek Hakkani-tur. 2020. Just ask: An interactive learning framework for vision and language navigation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 2459-2466.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
+Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. 2018. Speaker-follower models for vision-and-language navigation. In Advances in Neural Information Processing Systems, pages 3314-3325.
+Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Topical-chat: Towards knowledge-grounded open-domain conversations. In INTERSPEECH.
+Karthik Gopalakrishnan, Behnam Hedayatnia, Longshaokan Wang, Yang Liu, and Dilek Hakkani-Tur. 2020. Are neural open-domain dialog systems robust to speech recognition errors in the dialog history? an empirical study. In INTERSPEECH.
+Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, and Jianfeng Gao. 2020. Towards learning a generic agent for vision-and-language navigation via pretraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13137-13146.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
+Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. ACM Transactions on Information Systems (TOIS), 38(3):1-32.
+Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6700-6709.
+
+Gabriel Ilharco, Vihan Jain, Alexander Ku, Eugene Ie, and Jason Baldridge. 2019. General evaluation for instruction conditioned navigation using dynamic time warping. In ViGIL@NeurIPS.
+Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, and Jason Baldridge. 2019. Stay on the path: Instruction fidelity in vision-and-language navigation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1862-1872.
+Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, and Siddhartha Srinivasa. 2019. Tactical rewind: Self-correction via backtracking in vision-and-language navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6741-6749.
+Seokhwan Kim, Mihail Eric, Karthik Gopalakrishnan, Behnam Hedayatnia, Yang Liu, and Dilek Hakkani-Tur. 2020. Beyond domain APIs: Task-oriented conversational modeling with unstructured knowledge access. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 278-289, 1st virtual meeting. Association for Computational Linguistics.
+Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, and Jason Baldridge. 2020. Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4392-4412.
+Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer.
+Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23.
+Arjun Majumdar, Ayush Shrivastava, Stefan Lee, Peter Anderson, Devi Parikh, and Dhruv Batra. 2020. Improving vision-and-language navigation with imagetext pairs from the web. In European Conference on Computer Vision, pages 259-274. Springer.
+Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28:91-99.
+
+Homero Roman Roman, Yonatan Bisk, Jesse Thomason, Asli Celikyilmaz, and Jianfeng Gao. 2020. Rmm: A recursive mental model for dialog navigation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 1732-1745.
+Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565.
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in nlp. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650.
+Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7464-7473.
+Hao Tan, Licheng Yu, and Mohit Bansal. 2019. Learning to navigate unseen environments: Back translation with environmental dropout. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2610-2621.
+Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2020. Vision-and-dialog navigation. In Conference on Robot Learning, pages 394-406.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
+Xin Eric Wang, Vihan Jain, Eugene Ie, William Yang Wang, Zornitsa Kozareva, and Sujith Ravi. 2020. Environment-agnostic multitask learning for natural language grounded navigation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIV 16, pages 413–430. Springer.
+Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvi-jit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee, and Dhruv Batra. 2019. Evalai: Towards better evaluation systems for ai agents. arXiv preprint arXiv:1902.03570.
+Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.
+
+Yubo Zhang, Hao Tan, and Mohit Bansal. 2020. Diagnosing the environment bias in vision-and-language navigation. arXiv preprint arXiv:2005.03086.
+Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. 2020. Unified vision-language pre-training for image captioning and vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13041-13049.
+Fengda Zhu, Yi Zhu, Xiaojun Chang, and Xiaodan Liang. 2020a. Vision-language navigation with self-supervised auxiliary reasoning tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10012-10022.
+Wang Zhu, Hexiang Hu, Jiacheng Chen, Zhiwei Deng, Vihan Jain, Eugene Ie, and Fei Sha. 2020b. Babywalk: Going farther in vision-and-language navigation by taking baby steps. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2539-2556.
+Yi Zhu, Fengda Zhu, Zhaohuan Zhan, Bingqian Lin, Jianbin Jiao, Xiaojun Chang, and Xiaodan Liang. 2020c. Vision-dialog navigation by exploring cross-modal memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10730-10739.
\ No newline at end of file
diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/images.zip b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2fdc13787378f561068e26c50a35646e07eea3b2
--- /dev/null
+++ b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2842fc141ef4415c01869bb7226cee472980244bf527e2204abf5a51ef2e2409
+size 320965
diff --git a/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/layout.json b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d960b28b8eeab42664fbbadff641b1eb8bd71147
--- /dev/null
+++ b/visitronvisualsemanticsalignedinteractivelytrainedobjectnavigator/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09cbe83dbeeee5df43c91df29626b104b8591fb66ca9cdede9a1a8cc951c5a6a
+size 375440
diff --git a/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/107c2baa-b9b2-4045-8ab0-42040c79d133_content_list.json b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/107c2baa-b9b2-4045-8ab0-42040c79d133_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0f83695c79cef9f5794da213a8954f8758c2fbd4
--- /dev/null
+++ b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/107c2baa-b9b2-4045-8ab0-42040c79d133_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c9b7403828bb7bd053d1a9540fdad29cd37390a1aaf6dbdae0d5f43565fbd823
+size 88192
diff --git a/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/107c2baa-b9b2-4045-8ab0-42040c79d133_model.json b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/107c2baa-b9b2-4045-8ab0-42040c79d133_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..31186dc038bba180209672c1753830f9eb08a73c
--- /dev/null
+++ b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/107c2baa-b9b2-4045-8ab0-42040c79d133_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f1ea8642aa16da7ea13be0eb62b98263f6acfe14b4a4ac0b26b5ea53b7c825a
+size 106670
diff --git a/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/107c2baa-b9b2-4045-8ab0-42040c79d133_origin.pdf b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/107c2baa-b9b2-4045-8ab0-42040c79d133_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..68a9ebf1556b53d252e0d8ffbaf06840b496b68e
--- /dev/null
+++ b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/107c2baa-b9b2-4045-8ab0-42040c79d133_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a1fff26cf3ed982f67c0c211b8abe9b6475e2271cbd8115ddf1024b6413ccba
+size 1068290
diff --git a/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/full.md b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd933a1787844bd54c8e1822a5801e75c146214d
--- /dev/null
+++ b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/full.md
@@ -0,0 +1,375 @@
+# Visualizing the Relationship Between Encoded Linguistic Information and Task Performance
+
+Jiannan Xiang\*, Huayang Li\*, Defu Lian\*, Guoping Huang\*, Taro Watanabe\*, Lemao Liu\*
+
+Carnegie Mellon University $\spadesuit$ Nara Institute of Science and Technology
+
+$\diamond$ University of Science and Technology of China $\clubsuit$ Tencent AI Lab
+
+jiannanx@cs.cmu.edu, li.huayang.lh6@is.naist.jp
+
+liandefu@ustc.edu.cn, donkeyhuang@tencent.com
+
+taro@is.naist.jp, lemaoliu@gmail.com
+
+# Abstract
+
+Probing is a popular way to analyze whether linguistic information can be captured by a well-trained deep neural model, but it is hard to answer how changes in the encoded linguistic information affect task performance. To this end, we study the dynamic relationship between the encoded linguistic information and task performance from the viewpoint of Pareto Optimality. Its key idea is to obtain a set of models which are Pareto-optimal in terms of both objectives. From this viewpoint, we propose a method to obtain the Pareto-optimal models by formalizing the search as a multi-objective optimization problem. We conduct experiments on two popular NLP tasks, i.e., machine translation and language modeling, and investigate the relationship between several kinds of linguistic information and task performance. Experimental results demonstrate that the proposed method is better than a baseline method. Our empirical findings suggest that some syntactic information is helpful for NLP tasks, whereas encoding more syntactic information does not necessarily lead to better performance, because the model architecture is also an important factor.
+
+# 1 Introduction
+
+Recent years have witnessed great success of deep neural networks for natural language processing tasks, such as language modeling (Zaremba et al., 2014; Merity et al., 2018) and Neural Machine Translation (Bahdanau et al., 2015; Vaswani et al., 2017). The excellent task performance they achieve has spiked interest in interpreting their underlying mechanism. Since linguistic knowledge is crucial in natural languages, an emerging body of literature uses probes (Conneau et al., 2018; Alt et al., 2020; Saleh et al., 2020; Cao et al., 2021) to investigate whether a standard model trained towards better task performance also captures linguistic information. From the perspective of information theory, Voita and Titov (2020) and Pimentel et al. (2020b) show that probes can be used to estimate the amount of linguistic information captured by a fixed model.
+
+Figure 1: Illustration of the Pareto frontier with a toy example. The triangle $(\triangle)$ corresponds to the standard checkpoint with the best performance and each circle $(\bigcirc)$ corresponds to a sampled checkpoint. The y-axis indicates the linguistic information $\mathcal{I}$ encoded by the model, and the x-axis indicates the negative loss value $-\mathcal{L}$.
+
+However, the above probing only extracts linguistic information from a fixed standard model, which helps little to understand the relationship between the task performance and linguistic information encoded by the model. For example, under their methodology, it is difficult to answer the following two questions. First, would adding linguistic information be beneficial for an NLP model; second, is it harmful when this linguistic information is reduced. Therefore, it is still an open and intriguing question to reveal how task performance changes with respect to different amounts of linguistic information.
+
+To this end, this paper proposes a novel viewpoint for studying the relationship between task performance and the amount of linguistic information, inspired by the criterion of Pareto Optimality, which is widely used in economics (Greenwald and Stiglitz, 1986). Our main idea is to obtain Pareto-optimal models on a test set in terms of both linguistic information and task performance and then visualize their relationship along these optimal models. By comparing a standard model with these optimal models, it becomes clear whether adding the encoded information helps improve task performance over the standard model, as illustrated in Figure 1, where the points on the line are Pareto-optimal and the red triangle denotes the standard model with the best performance.
+
+Nevertheless, it is typically intractable to obtain the Pareto-optimal models according to both dimensions on test data. To address the challenge, we propose a principled method to approximately optimize the Pareto-optimal models on the training data which can be expected to generalise well on test sets according to statistical learning theory (Vapnik, 1999). Formally, the approach can be regarded as a multi-objective optimization problem: during the learning procedure, it optimizes two objectives, i.e., the task performance and extracted linguistic information. In addition, we develop a computationally efficient algorithm to address the optimization problem. By inspecting the trend of those Pareto-optimal points, the relationship between task performance and linguistic information can be clearly illustrated. Back to our questions, we also consider two instances within the proposed methodology: one aims to maximize the amount of linguistic information (i.e., adding) while the other tries to minimize it (i.e., reducing).
+
+We conduct experiments on two popular NLP tasks, i.e., machine translation and language modeling, and choose three different linguistic properties, including two syntactic properties (Part-of-Speech and dependency labels) and one phonetic property. We investigate the relationship between NMT performance and each kind of syntactic information, and the relationship between LM performance and phonetic information. For machine translation, we use LSTM, i.e., RNN-search (Bahdanau et al., 2015), and Transformer (Vaswani et al., 2017) as the main model architectures, and conduct our experiments on $\mathrm{En} \Rightarrow \mathrm{De}$ and $\mathrm{Zh} \Rightarrow \mathrm{En}$ tasks. For language modeling, we employ the LSTM model and conduct experiments on the Penn Treebank dataset. The experimental results show that: i) syntactic information encoded by NMT models is important for the MT task and reducing it leads to sharply decreased performance; ii) the standard NMT model obtained by maximum likelihood estimation (MLE) is Pareto-optimal for the Transformer but not for the LSTM-based NMT model; iii) reducing the phonetic information encoded by LM models only makes task performance drop slightly.
+
+In summary, our contributions are three-fold:
+
+1. We make an initial attempt to study the relationship between encoded linguistic information and task performance, i.e., how the change of linguistic information affects the performance of models.
+2. We propose a new viewpoint from Pareto Optimality as well as a principled approach which is formulated as a multi-objective optimization problem, to visualize the relationship.
+3. Our experimental results show that encoding more linguistic information is not necessary to yield better task performance depending on the specific model architecture.
+
+# 2 Related Work
+
+Probe With the impressive performance of Neural Network models for NLP tasks (Sutskever et al., 2014; Luong et al., 2015; Vaswani et al., 2017; Devlin et al., 2019; Xu et al., 2020), people are becoming interested in understanding neural models (Ding et al., 2017; Li et al., 2019, 2020). One popular interpretation method is probe (Conneau et al., 2018), also known as auxiliary prediction (Adi et al., 2017) and diagnostic classification (Hupkes et al., 2018), which aims to understand how neural models work and what information they have encoded and used. From the perspective of information theory, Voita and Titov (2020) and Pimentel et al. (2020b) show that probes can be used to estimate the amount of linguistic information captured by a model. However, recent research studies point out that probes fail to demonstrate whether the information is used by models. For example, Hewitt and Liang (2019) show that the probe can also achieve high accuracy in predicting randomly generated tags, which is useless for the task. And Ravichander et al. (2021) present that the representations encode the linguistic properties even if they are invariant and not required for the task. Instead of studying the encoded linguistic information by training a probe for fixed representations, in this work we study how the amount change of linguistic information affects the performance of NLP tasks.
+
+Information Removal Information removal is crucial in the area of transfer learning (Ganin and Lempitsky, 2015; Tzeng et al., 2017; Long et al., 2018) and fairness learning (Xie et al., 2017; Elazar and Goldberg, 2018), where people want to remove domain information or bias from learned representations. One popular method is Adversarial Learning (Goodfellow et al., 2014; Ganin and Lempitsky, 2015), which trains a classifier to predict the properties of representations, e.g., domain information or gender bias, while the feature extractor tries to fool the classifier. In this work, when using our method to reduce the linguistic information in the representations, we find that our multi-objective loss function has the same form as adversarial learning, which provides a theoretical guarantee for using adversarial learning to find Pareto-optimal solutions to a multi-objective problem.
+
+Recently, Elazar et al. (2020) also propose to study the role of linguistic properties with the idea of information removal (Ravfogel et al., 2020). However, the representations obtained by their method may not be Pareto-optimal, because their approach only minimizes the mutual information and ignores the objective of task performance. In contrast, our proposed method optimizes towards both objectives, so our results can be used to visualize the relationship between linguistic properties and task performance.
+
+Pareto Optimality The idea of Pareto Optimality (Mas-Colell et al., 1995) is an important criterion in economics, where the goal is to characterize situations in which no variable can be made better off without making at least one other variable worse off. It has also been widely used in sociology and game theory (Beckman et al., 2002; Chinchuluun et al., 2008). In artificial intelligence, Martínez et al. (2020) use Pareto optimality to solve the group fairness problem, and Duh et al. (2012) proposed to optimize an MT system on multiple metrics based on the theory of Pareto optimality. In particular, Pimentel et al. (2020a) propose a variant of probing on the hidden representations of deep models and consider Pareto optimality in terms of both objectives, similar to our work. Compared with their work, one difference is the choice of objectives. Another significant difference is that they optimize the probing model in a conventional fashion and are thus unable to study the relationship between linguistic information and task performance.
+
+# 3 Visualizing Relationship via Pareto Optimality
+
+We consider the relationship between linguistic information and task performance for two popular NLP tasks, i.e., machine translation and language modeling. Let $\boldsymbol{x} = \{x_{1}, x_{2}, \dots, x_{N}\}$ be a sentence and $s = \{s_{1}, s_{2}, \dots, s_{N}\}$ be the labels of a linguistic property of $\boldsymbol{x}$ , where $s_{i}$ is the label for $x_{i}$ , e.g., its POS tag. On both tasks, a deep model typically encodes $\boldsymbol{x}$ into a hidden representation $\boldsymbol{h}$ with a sub-network $E$ parameterized by $\theta_{e}$ : $\boldsymbol{h} = E(\boldsymbol{x})$ , and then uses another sub-network $D$ parameterized by $\theta_{d}$ to map $\boldsymbol{h}$ into an output.
+
+# 3.1 Background
+
+$h$ and Loss in NMT An NMT architecture aims to output a target sentence $\pmb{y} = \{y_{1},y_{2},\dots,y_{M}\}$ for a given source sentence $\pmb{x}$ according to $P(\pmb{y}|\pmb{x};\theta)$ (Zaremba et al., 2014; Vaswani et al., 2017), where $\theta$ denotes the parameters of a sequence-to-sequence neural network consisting of an encoder $E$ and a decoder $D$ . We define $\pmb{h}$ as the output of the encoder. To train $\theta$ , the MLE loss is usually minimized on a training dataset. For NMT, the loss is defined as follows:
+
+$$
+L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}) = - \sum_ {j = 1} ^ {M} \log P \left(y _ {j} \mid \boldsymbol {x}, \boldsymbol {y} _ {< j}; \theta\right) \tag {1}
+$$
+
+In our experiments, we consider two models, namely the LSTM (Bahdanau et al., 2015) and Transformer (Vaswani et al., 2017).
+
+$h$ and Loss in LM For the language modeling task, a deep model typically generates a token $x_{j}$ based on $\pmb{x}_{< j}$ according to $P(x_{j}|\pmb{x}_{< j};\theta)$ . Here the sub-network $E$ is one hidden layer that encodes $\pmb{x}_{< j}$ into $h_{< j}$ , and $D$ is the sub-network that generates $x_{j}$ on top of $h_{< j}$ . The parameter $\theta$ is optimized with the following MLE loss:
+
+$$
+L _ {\theta} (\boldsymbol {x}) = - \sum_ {j = 1} ^ {N} \log P (x _ {j} | \boldsymbol {x} _ {< j}; \theta).
+$$
+
+To keep the notation consistent for both NMT and LM, in the rest of this paper we follow the form of Eq. (1) and re-write $L_{\theta}(\pmb {x})$ in LM as $L_{\theta}(\pmb {x},\pmb {y})$ , where $\pmb{y}$ is a shifted version of $\pmb{x}$ , i.e., $\pmb {y} = \{x_2,\dots ,x_N\}$ .
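+
+To make the shifted-target formulation concrete, the following is a minimal PyTorch-style sketch (not the code used in the paper) of the LM loss, where the target sequence is the input shifted by one position; the tensor shapes are illustrative assumptions.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def lm_loss(logits: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
+    """MLE loss for language modeling with shifted targets.
+
+    logits: [batch, N, vocab] next-token scores produced by the model.
+    x:      [batch, N] input token ids; the targets are y = {x_2, ..., x_N}.
+    """
+    y = x[:, 1:]                       # targets: x shifted left by one position
+    pred = logits[:, :-1, :]           # predictions for positions 1 ... N-1
+    return F.cross_entropy(pred.reshape(-1, pred.size(-1)), y.reshape(-1))
+```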
+
+Encoded Information Let $\operatorname {I}(\boldsymbol {h},\boldsymbol {s})$ denote the linguistic information in the representation $\boldsymbol{h}$ , i.e., the mutual information between $\pmb{h}$ and the linguistic labels $\boldsymbol{s}$ . Since the probability $p(\boldsymbol{h},\boldsymbol{s})$ is unknown, it is intractable to compute $\operatorname {I}(\boldsymbol{h},\boldsymbol{s})$ exactly. Following Pimentel et al. (2020b), we approximately estimate $\operatorname {I}(\pmb {h},\pmb {s})$ by using a probing model $q$ as follows:
+
+$$
+\begin{array}{l} \operatorname {I} (\boldsymbol {h}, \boldsymbol {s}) = \mathrm {H} (\boldsymbol {s}) - \mathrm {H} (\boldsymbol {s} | \boldsymbol {h}) \\ \approx \mathrm {H} (\boldsymbol {s}) - \min _ {\theta_ {q}} L _ {\theta_ {q}} (\boldsymbol {h}, \boldsymbol {s}) \tag {2} \\ = \mathrm {H} (\boldsymbol {s}) + \max _ {\theta_ {q}} \sum_ {i} \log q (s _ {i} | \boldsymbol {h}; \theta_ {q}) \\ \end{array}
+$$
+
+where $\mathrm{H}(\boldsymbol{s})$ is the entropy of the linguistic labels, $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ is the conditional entropy (the cross entropy achieved by an ideal probe), and $L_{\theta_q}(\boldsymbol{h},\boldsymbol{s})$ is the cross-entropy loss of the probe model $q$ parameterized by $\theta_q$ .
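+
+As an illustration of Eq. (2), the sketch below estimates $\mathrm{I}(\boldsymbol{h},\boldsymbol{s})$ by training a small probe on fixed representations and subtracting its cross-entropy from the empirical label entropy; the probe architecture, optimizer, and training loop are simplified assumptions rather than the exact setup used in our experiments.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+def label_entropy(s: torch.Tensor, num_tags: int) -> float:
+    """Empirical entropy H(s) of the linguistic labels, in nats."""
+    p = torch.bincount(s, minlength=num_tags).float()
+    p = p / p.sum()
+    p = p[p > 0]
+    return float(-(p * p.log()).sum())
+
+def estimate_information(h: torch.Tensor, s: torch.Tensor, num_tags: int,
+                         steps: int = 2000, lr: float = 1e-3) -> float:
+    """Approximate I(h, s) = H(s) - min_q L_q(h, s), following Eq. (2).
+
+    h: [num_tokens, dim] fixed representations; s: [num_tokens] tag ids.
+    """
+    probe = nn.Sequential(nn.Linear(h.size(-1), 512), nn.ReLU(),
+                          nn.Linear(512, num_tags))
+    opt = torch.optim.Adam(probe.parameters(), lr=lr)
+    for _ in range(steps):
+        opt.zero_grad()
+        loss = F.cross_entropy(probe(h), s)        # cross-entropy loss of the probe
+        loss.backward()
+        opt.step()
+    with torch.no_grad():
+        ce = F.cross_entropy(probe(h), s).item()   # estimate of H(s | h)
+    return label_entropy(s, num_tags) - ce
+```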
+
+Theory of Pareto Optimality Pareto optimality (Mas-Colell et al., 1995) arises naturally in multi-objective optimization. Suppose that we have $K$ different objectives $M_{k}$ with which to evaluate a parameter $\theta^{\prime}$ , i.e.,
+
+$$
+\arg \max _ {\theta^ {\prime}} [ M _ {1} (\theta^ {\prime}); M _ {2} (\theta^ {\prime}); \dots ; M _ {K} (\theta^ {\prime}) ]. \tag {3}
+$$
+
+There are two important concepts in Pareto optimality as follows:
+
+Definition 1. Pareto Optimal: A parameter $\theta^{*}$ is Pareto-optimal iff it is not dominated by any other parameter, i.e., there exists no $\theta^{\prime}$ such that $M_{i}(\theta^{\prime})\geq M_{i}(\theta^{*})$ for all $i = 1,\dots,K$ and $M_{j}(\theta^{\prime}) > M_{j}(\theta^{*})$ for some $j$ .
+
+Definition 2. Pareto Frontier: The set of all Pareto-optimal parameters is called the Pareto frontier.
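+
+The two definitions can be made concrete with a small utility that keeps only the non-dominated points among a set of candidate models; this is a generic sketch with toy objective values, not the filtering code used in our experiments.
+
+```python
+from typing import List, Tuple
+
+def pareto_frontier(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
+    """Return the Pareto-optimal points, assuming every objective is maximized."""
+    frontier = []
+    for i, p in enumerate(points):
+        dominated = any(
+            all(q[d] >= p[d] for d in range(len(p))) and
+            any(q[d] > p[d] for d in range(len(p)))
+            for j, q in enumerate(points) if j != i
+        )
+        if not dominated:
+            frontier.append(p)
+    return frontier
+
+# Toy (encoded information, BLEU) pairs for four candidate models.
+models = [(0.80, 21.1), (0.85, 21.4), (0.83, 21.5), (0.79, 20.9)]
+print(pareto_frontier(models))   # [(0.85, 21.4), (0.83, 21.5)]
+```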
+
+# 3.2 Viewpoint via Pareto Optimality
+
+Motivation Suppose $\theta$ is a given model parameter, $L(\theta)$ is its task performance on a test set, and $I(\theta)$ is the amount of linguistic information encoded in its hidden representation. Conventionally, if one could find a function $f$ such that $I = f(L)$ for any $\theta$ , it would be trivial to study their relationship by visualizing $f$ . Unfortunately, in complicated situations such as the one illustrated in Figure 1, no such function exists, because of the large number of many-to-many correspondences between the two variables.
+
+Our Viewpoint Pareto optimality, a well-known criterion in economics (Mas-Colell et al., 1995), is widely used to analyze the relationship among multiple variables in a complicated environment (Chinchuluun et al., 2008). In our context, it is also a powerful tool to reveal the relationship between the encoded linguistic information and task performance. Taking the Pareto frontier in Figure 1 as an example, since the capacity of a model is fixed and linguistic information may compete with other kinds of information, capturing more linguistic information may reduce the amount of information from other sources that are also helpful for the model. Conversely, if increasing the amount of linguistic information consistently led to performance gains, i.e., if linguistic information were complementary to translation, only one Pareto-optimal point would exist, in the top-right corner.
+
+Therefore, in this paper, we propose to study the relationship between $I(\theta)$ and $L(\theta)$ from the viewpoint of Pareto optimality. Our key idea is to take into account only Pareto-optimal models rather than all models, as the conventional method does. Thanks to the definition of Pareto optimality, there are no many-to-many correspondences between the two variables along the Pareto frontier, so their relationship can be visualized by the trend of the frontier points, as shown in Figure 1. Taking Figure 1 as an example, to answer the questions mentioned before, we can see that adding more linguistic information can increase task performance compared with a standard model. Under this viewpoint, the core challenge is how to obtain a set of models that are Pareto-optimal on a test dataset.
+
+A natural heuristic to approximately obtain Pareto-optimal models is as follows. We can first randomly select a number of checkpoints during standard training and probe each checkpoint by optimizing its corresponding probing model $q$ , as shown in Eq. (2). Second, we can record the task performance and the amount of linguistic information of each selected model on a test set. Finally, we can identify the Pareto-optimal points and obtain the Pareto frontier. However, when using this method in our experiments, we find that the amounts of encoded linguistic information for all checkpoints are similar and that the task performance of those checkpoints is worse than that of the optimal model. Hence, in the next section, a new method is presented to approximately derive the Pareto-optimal models.
+
+# 4 Methodology
+
+# 4.1 Multi-Objective Optimization
+
+To study the relationship between linguistic information and task performance, our goal is to obtain a set of models $\theta$ that are Pareto-optimal on test data with respect to both objectives. Inspired by statistical learning theory (Vapnik, 1999), we propose to optimize models towards both objectives on a given training dataset; such models are expected to generalize well, i.e., to remain Pareto-optimal on unseen test data. Formally, our approach can be formulated as the following multi-objective optimization problem:
+
+$$
+\arg \min _ {\theta} [ L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}); - \mathrm {I} (\boldsymbol {h}, \boldsymbol {s}) ] \tag {4}
+$$
+
+where minimizing $L_{\theta}(\pmb{x}, \pmb{y})$ promotes task performance and maximizing $\mathrm{I}(\boldsymbol{h}, \boldsymbol{s})$ encourages the model to encode more linguistic information in the representation. Once we obtain a set of Pareto-optimal models, we can observe how increasing the encoded linguistic information affects task performance.
+
+To further study how reducing the encoded linguistic information affects task performance, we optimize a similar multi-objective problem:
+
+$$
+\arg \min _ {\theta} [ L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}); \mathrm {I} (\boldsymbol {h}, \boldsymbol {s}) ] \tag {5}
+$$
+
+The only difference between Eq. (4) and Eq. (5) is that the former maximizes $\mathrm{I}(\pmb {h},\pmb {s})$ while the latter minimizes $\operatorname {I}(\pmb {h},\pmb {s})$ .
+
+Since $H(s)$ is a constant term, we can plug Eq. (2) into the above two equations and obtain the following reduced multi-objective problems:
+
+$$
+\arg \min _ {\theta} [ L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}); \min _ {\theta_ {q}} L _ {\theta_ {q}} (\boldsymbol {h}, \boldsymbol {s}) ] \tag {6}
+$$
+
+$$
+\arg \min _ {\theta} [ L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}); - \min _ {\theta_ {q}} L _ {\theta_ {q}} (\boldsymbol {h}, \boldsymbol {s}) ] \tag {7}
+$$
+
+Notice that in the above equations, $\min_{\theta_q} L_{\theta_q}(\boldsymbol{h}, \boldsymbol{s})$ resembles conventional probing if $\boldsymbol{h}$ is a fixed representation. However, unlike standard probing, which is applied on top of a fixed $\boldsymbol{h}$ determined by the standard model, here $\boldsymbol{h}$ is the representation obtained from an encoder $E$ parameterized by $\theta_e$ , which is itself being optimized. It is also worth noting that the Pareto frontiers obtained from Eq. (6) and Eq. (7) are independent, although they share a similar measurement, because Pareto optimality is only defined with respect to a fixed set of objectives.
+
+# 4.2 Optimization Algorithm
+
+To solve the above multi-objective problems, we leverage the linear-combination method to find a set of solutions, and then filter the non-Pareto-optimal points from the set to get the Pareto frontier. The details of our algorithm are shown below.
+
+Optimization Process Since the detailed optimization method for Eq. (6) is similar to that for Eq. (7), in the following we take Eq. (6) as an example. Inspired by Duh et al. (2012), we employ a two-step strategy to find the Pareto frontier and thereby address the multi-objective problems.
+
+Figure 2: Overview of our multi-objective optimization method, where $L_{y} = L_{\theta}(\boldsymbol{x}, \boldsymbol{y})$ and $L_{\theta_q} = L_{\theta_q}(\boldsymbol{h}, \boldsymbol{s})$ . In the back propagation, the GM Layer multiplies the gradient by $\pm \lambda$ , i.e., $\lambda$ for Eq. (6) and $-\lambda$ for Eq. (7).
+
+In the first step, we find Pareto-optimal solutions to the problem. Several methods exist for this purpose, such as linear combination, PMO (Duh et al., 2012), and APStar (Martínez et al., 2020); we adopt the linear-combination method because of its simplicity. Specifically, we select a coefficient set $\{\lambda_k \mid \lambda_k > 0\}$ and minimize the following interpolated objective for each coefficient $\lambda_k$ :
+
+$$
+\arg \min _ {\theta} \left(L _ {\theta} (\boldsymbol {x}, \boldsymbol {y}) + \lambda_ {k} \min _ {\theta_ {q}} L _ {\theta_ {q}} (\boldsymbol {h}, \boldsymbol {s})\right) \tag {8}
+$$
+
+Notice that the first term of the loss function, $L_{\theta}(\boldsymbol{x}, \boldsymbol{y})$ , is a function of both the encoder parameters $\theta_e$ and the decoder parameters $\theta_d$ , while the second term, $\min_{\theta_q} L_{\theta_q}(\boldsymbol{h}, \boldsymbol{s})$ , depends on $\theta_e$ only (through $\boldsymbol{h}$ ). Therefore, when minimizing Eq. (8), we apply a Gradient-Multiple (GM) Layer on the representation before feeding it into the probe model. As shown in Fig. 2, in the forward propagation the GM Layer acts as an identity transform, while in the backward propagation it multiplies the gradient by $\pm \lambda$ and passes it to the preceding layers. Note that when the multiplier is $-\lambda$ , the GM Layer is the same as the Gradient Reversal Layer (Ganin and Lempitsky, 2015).
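+
+A Gradient-Multiple layer of this kind can be implemented as a custom autograd function; the following PyTorch sketch captures the idea (identity in the forward pass, gradient scaled by $\pm \lambda$ in the backward pass) but is not our released implementation.
+
+```python
+import torch
+
+class GradientMultiple(torch.autograd.Function):
+    """Identity in the forward pass; scales the incoming gradient in the backward pass."""
+
+    @staticmethod
+    def forward(ctx, x, scale):
+        ctx.scale = scale
+        return x.view_as(x)
+
+    @staticmethod
+    def backward(ctx, grad_output):
+        # +lambda realizes Eq. (6); -lambda realizes Eq. (7), i.e., the Gradient Reversal Layer.
+        return grad_output * ctx.scale, None
+
+def gm_layer(h: torch.Tensor, lam: float) -> torch.Tensor:
+    """Apply the GM Layer to the encoder output before it enters the probe."""
+    return GradientMultiple.apply(h, lam)
+```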
+
+Suppose $\{\theta_k^*\}$ is the set of solutions obtained by minimizing Eq. (8), one for each $\lambda_k$ . In the second step, to obtain more accurate solutions, we filter out the non-Pareto-optimal points from this solution set. Finally, we obtain the Pareto frontier of the multi-objective problem according to the definition of Pareto optimality.
+
+Algorithm 1 Optimization Algorithm
+Input: $\Lambda = \{\lambda_k\}$ , learning rate $\eta$
+Output: Pareto frontier set $\mathcal{P} = \{\langle \theta_e^i, \theta_d^i, \theta_q^i \rangle\}$
+1: $\mathcal{M} = \{\}$ // empty model set
+2: for $\lambda_k \in \Lambda$ do // minimize Eq. (8)
+3: Randomly initialize $\theta_e^k, \theta_d^k$ , and $\theta_q^k$
+4: while not converged do
+5: $\theta_e^k = \theta_e^k - \eta \left( \frac{\partial L_\theta(x,y)}{\partial \theta_e} + \lambda_k \frac{\partial L_{\theta_q}(h,s)}{\partial \theta_e} \right)$
+6: $\theta_d^k = \theta_d^k - \eta \frac{\partial L_\theta(x,y)}{\partial \theta_d}$
+7: $\theta_q^k = \theta_q^k - \eta \frac{\partial L_{\theta_q}(h,s)}{\partial \theta_q}$
+8: end while
+9: Re-train a probe model $\theta_{q'}^k$ based on the fixed encoder $\theta_e^k$
+10: Add $\langle \theta_e^k, \theta_d^k, \theta_{q'}^k \rangle$ to $\mathcal{M}$
+11: end for
+12: $\mathcal{P} = \{\}$ // Pareto frontier set
+13: for all $\langle \theta_e^k, \theta_d^k, \theta_{q'}^k \rangle \in \mathcal{M}$ do
+14: if IsParetoOptimal( $\theta_e^k, \theta_d^k, \theta_{q'}^k$ ) then
+15: Add $\langle \theta_e^k, \theta_d^k, \theta_{q'}^k \rangle$ to $\mathcal{P}$
+16: end if
+17: end for
+
+Detailed Algorithm The overall optimization algorithm for Eq. (6) is shown in Algorithm 1. In principle, when minimizing Eq. (8), at every step that updates $\theta$ we should retrain the probe model $\theta_q$ for many steps to minimize $L_{\theta_q}(\boldsymbol{h}, \boldsymbol{s})$ , in order to estimate $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ precisely. However, this is time-consuming and inefficient. Instead, after updating $\theta$ , we update $\theta_q$ by only one step (line 7 of Algorithm 1). Empirically, we find that optimizing in this way is very effective.
+
+In addition, as mentioned by Elazar and Goldberg (2018), information leakage may occur when minimizing the mutual information. Therefore, after training is finished, we fix the deep model and retrain another probe model to estimate $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ more precisely (line 9 of Algorithm 1). When maximizing the mutual information, we find no difference between $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ estimated by the jointly trained probe model and by the retrained one.
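+
+Lines 4-8 of Algorithm 1 reduce to alternating single gradient steps: the encoder and decoder are updated with the interpolated loss of Eq. (8), and the probe is updated on its own loss. The sketch below shows one such step for Eq. (6) using the `gm_layer` defined above; `encoder`, `decoder`, and `probe` are placeholder modules and the loss computations are simplified, so this is an illustration rather than the exact training code.
+
+```python
+import torch.nn.functional as F
+
+def train_step(encoder, decoder, probe, opt_model, opt_probe, batch, lam):
+    """One inner-loop step of Algorithm 1 for Eq. (6).
+
+    opt_model holds the encoder/decoder parameters; opt_probe holds the probe's.
+    """
+    x, y, s = batch                     # inputs, task targets, linguistic tags
+
+    # Lines 5-6: update theta_e and theta_d with L_theta(x, y) + lambda_k * L_theta_q(h, s).
+    h = encoder(x)
+    task_loss = F.cross_entropy(decoder(h), y)
+    probe_loss = F.cross_entropy(probe(gm_layer(h, lam)), s)   # gradient to the encoder scaled by lam
+    opt_model.zero_grad()
+    (task_loss + probe_loss).backward()
+    opt_model.step()
+
+    # Line 7: a single update of the probe parameters on its own loss.
+    opt_probe.zero_grad()
+    F.cross_entropy(probe(encoder(x).detach()), s).backward()
+    opt_probe.step()
+```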
+
+# 5 Experimental Settings
+
+# 5.1 Dataset
+
+We conduct experiments on both machine translation and language modeling tasks. For machine
+translation, we conduct experiments on the $\mathrm{En} \Rightarrow$ De and $\mathrm{Zh} \Rightarrow$ En translation tasks. For the $\mathrm{En} \Rightarrow$ De task, we use the WMT14 corpus, which contains 4M sentence pairs. For the $\mathrm{Zh} \Rightarrow$ En task, we use the LDC corpus, which consists of 1.25M sentence pairs; we choose NIST02 as our validation set and NIST06 as our test set. For the language modeling task, we use the Penn Treebank $^2$ dataset. We preprocess our data using byte-pair encoding (Sennrich et al., 2016) and keep all tokens in the vocabulary. For machine translation, we use the case-insensitive 4-gram BLEU score (Papineni et al., 2002) to measure task performance, which is known to correlate well with the MLE loss; for language modeling, we directly use the MLE loss to evaluate task performance.
+
+# 5.2 Linguistic Properties
+
+For machine translation, we study part-of-speech (POS) tags and dependency (DEP) labels in this work. Since there are no gold labels for the MT datasets, we use the Stanza toolkit $^{3}$ (Qi et al., 2020) to annotate the source sentences and use the pseudo labels to run our algorithm, following Sennrich and Haddow (2016) and Li et al. (2018). We clean the labels and remove from the dataset the sentences that Stanza fails to parse. To study whether all kinds of linguistic information are critical for neural models, we also investigate phonetic information on the language modeling task. More precisely, the probing model needs to predict the first character of the International Phonetic Alphabet (IPA) transcription of each word. $^{4}$ We obtain the labels with the open-source toolkit English-to-IPA $^{5}$ . We use the mutual information $\mathrm{I}(\boldsymbol{h},\boldsymbol{s}) = \mathrm{H}(\boldsymbol{s}) - \mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ to evaluate the amount of information in the representations. Since $\mathrm{H}(\boldsymbol{s})$ is a constant, we only compare $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ in the experiments. Note that $\mathrm{H}(\boldsymbol{s}|\boldsymbol{h})$ is estimated by our probe model $q$ .
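+
+For reference, pseudo POS labels of the kind described above can be obtained with a few lines of Stanza; the pipeline options and the toy sentence below are illustrative (dependency labels are obtained analogously with the depparse processor), and this is not the exact preprocessing script we used.
+
+```python
+import stanza
+
+# stanza.download("en") may be required once to fetch the English models.
+# Tag pre-tokenized source sentences with universal POS labels.
+nlp = stanza.Pipeline(lang="en", processors="tokenize,pos", tokenize_pretokenized=True)
+
+sentences = [["this", "is", "a", "simple", "example", "."]]
+doc = nlp(sentences)
+for sent in doc.sentences:
+    pos_labels = [word.upos for word in sent.words]   # pseudo labels s for each token
+    print(pos_labels)
+```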
+
+# 5.3 Implementation Details
+
+All of our models are implemented with Fairseq $^6$ (Ott et al., 2019). For the NMT experiments, our LSTM model consists of a bi-directional 2-layer encoder with 256 hidden units and a 2-layer decoder with 512 hidden units.
+
+ $^{2}$ https://deepai.org/dataset/penn-treebank
+ $^{3}$ https://github.com/stanfordnlp/stanza
+ $^{4}$ For example, given the input sentence "This dog is so cute", the probing model is asked to predict "ð d ɪ s k".
+ $^{5}$ https://github.com/mphilli/English-to-IPA
+ $^{6}$ https://github.com/pytorch/fairseq
+
+Figure 3: Experiments on the WMT14 corpus. The triangle $(\triangle)$ denotes the model trained by minimizing the MLE loss, the circles $(\bigcirc)$ denote the models obtained by our method, and the models on the line $(—)$ form the Pareto frontier.
+
+Figure 4: Comparison with the baseline method. The triangle $(\triangle)$ denotes the standard model trained by minimizing the MLE loss. The green line and the blue line are the frontiers obtained from the baseline method and our method, respectively.
+
+The probe model for the LSTM is a 2-layer MLP with 512 hidden units. Our Transformer model consists of a 6-layer encoder and a 6-layer decoder, whose hyper-parameters are the same as those of the base model in Vaswani et al. (2017), and its probe model is a 6-layer Transformer encoder. For the LM experiments, our model is a 2-layer LSTM with 256 hidden units, and the probe model is a 2-layer MLP with 256 hidden units. More training details are given in Appendix A.
+
+# 6 Experiment Results
+
+In the following experiments, "Model $+$ Property", e.g., "Transformer+POS", corresponds to Eq. (4) and studies how adding linguistic property information affects task performance, whereas "Model $-$ Property", e.g., "Transformer-POS", corresponds to Eq. (5) and studies how removing linguistic property information affects task performance. It is worth noting that merging the two frontiers of $+$ Property and $-$ Property would lead to trivial results, because the Pareto-optimal points of $+$ Property are more likely to dominate. However, we think the frontier of $-$ Property is helpful for answering the question of whether reducing the encoded linguistic information affects model performance. Therefore, we plot the Pareto frontiers for the two objectives independently.
+
+# 6.1 Soundness of Methodology
+
+The heuristic method mentioned before can be considered a simple and straightforward baseline for measuring the relationship. To set up this baseline, we first save checkpoints every 1,000 steps when training a standard model. Second, we randomly sample 30 checkpoints for probing and plot a scatter diagram in terms of BLEU and encoded linguistic information.
+
+As shown in Figure 4, we compare our proposed method with the heuristic method in the "Transformer+POS" setting. Compared with the baseline, the frontier obtained by our method is better: for each model explored by the baseline, there exists at least one model explored by our method for which both objectives, i.e., encoded linguistic information and BLEU score, are larger. The main reason is that the baseline's objective only considers task performance, and most checkpoints contain similar amounts of encoded linguistic information. Therefore, the models optimized by our multi-objective method are closer to the globally Pareto-optimal points $^7$ .
+
+Figure 5: Experimental results on the LDC corpus. The format is the same as in Fig. 3.
+
+This makes the revealed relationship between encoded linguistic information and task performance more reliable. In the next subsection, our proposed method will therefore be used to visualize the relationship between encoded linguistic information and task performance for neural models.
+
+# 6.2 Visualization Results
+
+Results on NMT The results of machine translation on the WMT dataset are shown in Figure 3. For LSTM-based NMT, we observe that the standard model, i.e., the $\triangle$ in Figure 3, is not on the Pareto frontier in Figure 3 (a,c). In other words, when adding linguistic information to the LSTM model, it is possible to obtain a model that contains more POS or DEP information and at the same time achieves a better BLEU score than the standard model obtained by standard training. In contrast, for Transformer-based NMT, the standard model is on the Pareto frontier, as shown in Figure 3 (e,g). This finding offers an explanation for a pattern in NMT research: many efforts (Luong et al., 2016; Nădejde et al., 2017; Bastings et al., 2017; Hashimoto and Tsuruoka, 2017; Eriguchi et al., 2017) have been devoted to improving LSTM-based NMT by explicitly modeling linguistic properties, but few have targeted Transformer-based NMT (McDonald and Chiang, 2021; Currey and Heafield, 2019). In addition, when removing linguistic information from the LSTM or the Transformer, the standard model is very close to the lower-right end of the Pareto frontier, or even on the frontier, as shown in Figure 3 (b,d,f,h).
+
+
+Figure 6: Experimental results on the PTB dataset.
+
+This result shows that removing linguistic information always hurts the performance of NMT models, for both LSTM and Transformer, indicating that encoding POS and DEP information is important for the NMT task. Similar trends are observed on the LDC dataset, as shown in Figure 5. More details about the effect of randomness in our approach are given in Appendix B.
+
+Results on LM The above experiments show that syntactic information is important for NMT models; a natural follow-up question is whether all kinds of linguistic information are important for neural models. To answer this question, we investigate the influence of phonetic information on a language model. Figure 6 depicts the relationship between encoded phonetic information and task performance for an LSTM-based language model. In Figure 6 (a), we find that the performance of the Pareto-optimal models drops only slightly when forcing the LSTM model to encode more phonetic information. Besides, as shown by the Pareto frontier in Figure 6 (b), removing phonetic information from the LSTM model also leads to only a slight change in performance. These experiments demonstrate that the encoded phonetic information may not be that critical for an LSTM-based language model. This finding suggests that not all kinds of linguistic information are crucial for LSTM-based LM and that it is not promising to further improve language modeling with phonetic information.
+
+# 7 Conclusion
+
+This paper studies the relationship between linguistic information and task performance and proposes a new viewpoint inspired by the criterion of Pareto optimality. We formulate this goal as a multi-objective problem and present an effective method to address it by leveraging the theory of Pareto optimality. We conduct experiments on both MT and LM tasks and study their performance with respect to linguistic information sources. Experimental results show that the presented approach is more plausible than a baseline method in the sense that it explores better models in terms of both encoded linguistic information and task performance. In addition, we obtain several findings: i) the syntactic information encoded by NMT models is important for the MT task, and reducing it sharply decreases performance; ii) the standard NMT model obtained by minimizing the MLE loss is Pareto-optimal for the Transformer but not for LSTM-based NMT; iii) reducing the phonetic information encoded by LM models only leads to a slight performance drop.
+
+# Acknowledgement
+
+We would like to thank the anonymous reviewers for their constructive comments. L. Liu is the corresponding author.
+
+# References
+
+Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Christoph Alt, Aleksandra Gabryszak, and Leonhard Hennig. 2020. Probing linguistic features of sentence-level representations in neural relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1534-1545, Online. Association for Computational Linguistics.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly
+
+learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Jasmijn Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957-1967, Copenhagen, Denmark. Association for Computational Linguistics.
+Steven R Beckman, John P Formby, W James Smith, and Buhong Zheng. 2002. Envy, malice and pareto efficiency: An experimental examination. Social Choice and Welfare, 19(2):349-367.
+Steven Cao, Victor Sanh, and Alexander Rush. 2021. Low-complexity probing via finding subnetworks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 960-966, Online. Association for Computational Linguistics.
+Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76-86, Melbourne, Australia. Association for Computational Linguistics.
+Altannar Chinchuluun, Panos M Pardalos, Athanasios Migdalas, and Leonidas Pitsoulis. 2008. Pareto optimality, game theory and equilibria. Springer.
+Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2126-2136, Melbourne, Australia. Association for Computational Linguistics.
+Anna Currey and Kenneth Heafield. 2019. Incorporating source syntax into transformer-based neural machine translation. In Proceedings of the Fourth Conference on Machine Translation, pages 24-33, Florence, Italy. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150-1159, Vancouver, Canada. Association for Computational Linguistics.
+Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723-1732, Beijing, China. Association for Computational Linguistics.
+Kevin Duh, Katsuhito Sudoh, Xianchao Wu, Hajime Tsukada, and Masaaki Nagata. 2012. Learning to translate with multiple objectives. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1-10, Jeju Island, Korea. Association for Computational Linguistics.
+Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11-21, Brussels, Belgium. Association for Computational Linguistics.
+Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2020. When BERT forgets how to POS: Amnesic probing of linguistic properties and MLM predictions. arXiv preprint arXiv:2006.00995.
+Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 72-78, Vancouver, Canada. Association for Computational Linguistics.
+Yaroslav Ganin and Victor S. Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 1180-1189. JMLR.org.
+Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2672-2680.
+Bruce C Greenwald and Joseph E Stiglitz. 1986. Externalities in economies with imperfect information and incomplete markets. The quarterly journal of economics, 101(2):229-264.
+
+Kazuma Hashimoto and Yoshimasa Tsuruoka. 2017. Neural machine translation with source-side latent graph parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 125-135, Copenhagen, Denmark. Association for Computational Linguistics.
+John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics.
+Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.
+Jason Lee, Dustin Tran, Orhan Firat, and Kyunghyun Cho. 2020. On the discrepancy between density estimation and sequence generation. In Proceedings of the Fourth Workshop on Structured Prediction for NLP, pages 84-94, Online. Association for Computational Linguistics.
+Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang, and Shuming Shi. 2020. Evaluating explanation methods for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 365-375, Online. Association for Computational Linguistics.
+Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi. 2019. On the word alignment from neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1293-1303, Florence, Italy. Association for Computational Linguistics.
+Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, and Max Meng. 2018. Target foresight based attention for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1380-1390, New Orleans, Louisiana. Association for Computational Linguistics.
+Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. 2018. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montreal, Canada, pages 1647-1657.
+Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multitask sequence to sequence learning. In 4th International Conference on Learning Representations,
+
+ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
+Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.
+Natalia Martínez, Martin Bertran, and Guillermo Sapiro. 2020. Minimax pareto fairness: A multi objective perspective. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 6755-6764. PMLR.
+Andreu Mas-Colell, Michael Dennis Whinston, Jerry R Green, et al. 1995. Microeconomic theory, volume 1. Oxford university press New York.
+Colin McDonald and David Chiang. 2021. Syntax-based attention masking for neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 47-52, Online. Association for Computational Linguistics.
+Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
+Maria Nădejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, and Alexandra Birch. 2017. Predicting target language CCG supertags improves neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 68-79, Copenhagen, Denmark. Association for Computational Linguistics.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+
+Tiago Pimentel, Naomi Saphra, Adina Williams, and Ryan Cotterell. 2020a. Pareto probing: Trading off accuracy for complexity. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 3138-3153, Online. Association for Computational Linguistics.
+Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020b. Information-theoretic probing for linguistic structure. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4609-4622, Online. Association for Computational Linguistics.
+Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108, Online. Association for Computational Linguistics.
+Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237-7256, Online. Association for Computational Linguistics.
+Abhilasha Ravichander, Yonatan Belinkov, and Eduard Hovy. 2021. Probing the probing paradigm: Does probing accuracy entail task relevance? In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, pages 3363-3377, Online. Association for Computational Linguistics.
+Abdelrhman Saleh, Tovly Deutsch, Stephen Casper, Yonatan Belinkov, and Stuart Shieber. 2020. Probing neural dialog models for conversational understanding. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 132-143, Online. Association for Computational Linguistics.
+Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. In Proceedings of the First Conference on Machine Translation, pages 83-91, Berlin, Germany. Association for Computational Linguistics.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks.
+
+In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.
+Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 2962-2971. IEEE Computer Society.
+Vladimir N Vapnik. 1999. An overview of statistical learning theory. IEEE transactions on neural networks, 10(5):988-999.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Elena Voita and Ivan Titov. 2020. Information-theoretic probing with minimum description length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 183-196, Online. Association for Computational Linguistics.
+Qizhe Xie, Zihang Dai, Yulun Du, Eduard H. Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 585-596.
+Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. LayoutLM: Pre-training of text and layout for document image understanding. In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 1192-1200. ACM.
+Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
+
+| BLEU (mean) | BLEU (var) | H(POS\|h) (mean) | H(POS\|h) (var) |
+| --- | --- | --- | --- |
+| 21.08 | 0.00407 | 0.1113 | 0 |
+| 21.32 | 0.01536 | 0.1093 | 0 |
+| 21.49 | 0.01847 | 0.108 | 0 |
+| 21.52 | 0.00060 | 0.1123 | 0 |
+
+Table 1: Experiment results for the LSTM + POS setting. "mean" and "var" denote the mean and the variance over the selected window.
+
+# A Training Details
+
+On the WMT14 corpus, training one LSTM model with 4 V100 GPUs costs 5 hours, and training one Transformer with 8 V100 GPUs costs 8 hours. On LDC corpus, training one LSTM model with 4 V100 GPUs costs 3 hours, and training one Transformer with 8 V100 GPUs costs 3 hours. On the PTB dataset, training LSTM model with 1 V100 GPU costs 6 minutes.
+
+When running our algorithm, we empirically observe that when $\lambda$ is below 0.01, the optimized models show little difference compared with the standard model, and when $\lambda$ is larger than 0.1, the proposed algorithm becomes unstable and cannot converge well to Pareto-optimal solutions. Therefore, we take ten values from 0.01 to 0.1 at equal intervals as $\lambda$ in Eq. (8) and train ten models, one for each $\lambda$ , for each setting. We then plot all the models and the Pareto frontier of these models in the experiments.
+
+# B Effects of Randomness
+
+Following the method of Chen et al. (2018), we check whether randomness affects our experimental results. Specifically, we select a window of size 3 around the best checkpoint and report the mean and variance over the selected window. The results are shown in Table 1. Because repeating the experiments under all settings would be too expensive, we only randomly select 4 models from the LSTM + POS setting. As shown in the table, all the variances are small, and the variances of the entropy are even 0. This suggests that the random disturbance in our experiments is small and thus our results are reliable.
\ No newline at end of file
diff --git a/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/images.zip b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..46f794da255a2226c5612693ebdd890d6458b9dc
--- /dev/null
+++ b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5416e2354963625019e2b5f4e770a114bfaa0b8d73297283234efdebc4c0b2ab
+size 264455
diff --git a/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/layout.json b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7523b87dd431c21d21243d3f1f71273867c1fb69
--- /dev/null
+++ b/visualizingtherelationshipbetweenencodedlinguisticinformationandtaskperformance/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40567492d8285065b9b9a12ee997e38acdd392f5d1a30a2fd7e80d0721b1be96
+size 485764
diff --git a/weightedselfdistillationforchinesewordsegmentation/7512f1ba-fcd5-444c-8afd-c76d1af2aa68_content_list.json b/weightedselfdistillationforchinesewordsegmentation/7512f1ba-fcd5-444c-8afd-c76d1af2aa68_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d1a717a0165955795fce87d5cf93665f6d5ab965
--- /dev/null
+++ b/weightedselfdistillationforchinesewordsegmentation/7512f1ba-fcd5-444c-8afd-c76d1af2aa68_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee0daebec702f44cc79b226f950c366c087bc7ebf6a20683a8b3f5ebab425e44
+size 99190
diff --git a/weightedselfdistillationforchinesewordsegmentation/7512f1ba-fcd5-444c-8afd-c76d1af2aa68_model.json b/weightedselfdistillationforchinesewordsegmentation/7512f1ba-fcd5-444c-8afd-c76d1af2aa68_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..110c9c0ff53cb89f085f811b92e0eb3b7660cd77
--- /dev/null
+++ b/weightedselfdistillationforchinesewordsegmentation/7512f1ba-fcd5-444c-8afd-c76d1af2aa68_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f1fb5749fa868587103188f9bbca6ffc1be3bbecbd62b9f52080dac2179b348
+size 116812
diff --git a/weightedselfdistillationforchinesewordsegmentation/7512f1ba-fcd5-444c-8afd-c76d1af2aa68_origin.pdf b/weightedselfdistillationforchinesewordsegmentation/7512f1ba-fcd5-444c-8afd-c76d1af2aa68_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..53eadf91af7b4123869441c3a92338d647aabdad
--- /dev/null
+++ b/weightedselfdistillationforchinesewordsegmentation/7512f1ba-fcd5-444c-8afd-c76d1af2aa68_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca023a96a7546e1ef6b14cfa5050a28784a06a14f02e286ad5d5f3ea0502abd5
+size 1197614
diff --git a/weightedselfdistillationforchinesewordsegmentation/full.md b/weightedselfdistillationforchinesewordsegmentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9572bc4a51ecaa8af03f4d70f552e088a237b470
--- /dev/null
+++ b/weightedselfdistillationforchinesewordsegmentation/full.md
@@ -0,0 +1,405 @@
+# Weighted self Distillation for Chinese word segmentation
+
+Rian He $^{1}$ , Shubin Cai $^{*2}$ , Zhong Ming $^{*3}$ , Jialei Zhang $^{4}$
+National Engineering Laboratory for Big Data System Computing Technology
+College of Computer Science and Software Engineering
+Shenzhen University, Shenzhen 518060, China
+ $^{1}$ herian2020@email.szu.edu.cn $^{2}$ shubin@szu.edu.cn
+ $^{3}$ mingz@szu.edu.cn $^{4}$ zhangjialei2021@email.szu.edu.cn
+
+# Abstract
+
+Recent research shows that multi-criteria resources and n-gram features are beneficial to Chinese Word Segmentation (CWS). However, these methods rely heavily on such additional information and focus less on the model itself. We thus propose a novel neural framework, named Weighted self Distillation for Chinese word segmentation (WeiDC). The framework, which only requires unigram features, adopts self-distillation technology with four hand-crafted weight modules and two teacher-model configurations. Experimental results show that WeiDC can make use of character features to learn contextual knowledge and achieve state-of-the-art or competitive performance under strictly closed test settings on the SIGHAN Bakeoff benchmark datasets. Moreover, further experiments and analyses demonstrate the robustness of WeiDC. The source code of this paper is available on GitHub $^1$ .
+
+# 1 Introduction
+
+Chinese is written without explicit word delimiters, while numerous Natural Language Processing (NLP) applications are word-based. CWS is therefore a fundamental and essential first step for most Chinese language processing tasks.
+
+Following the pace of many researchers (Sun and Xu, 2011; Chen et al., 2015; Ke et al., 2021), we also choose [B, I/M, E, S] tags (Beginning, Inside/Middle, End, Single character), which represent the precise position of a character in one word. Figure 1 gives a simple example.
+
+Char: 我 喜 欢 大 自 然 。 Tag: S B E B I E S
+
+Figure 1: The [B, I, E, S] tagging scheme. "我喜欢大自然。" ("I love nature.")
+
+Generally, a CWS system consists of three important parts: an embedding layer, an encoder, and a decoder. Since Google published two papers, Mikolov et al. (2013a) and Mikolov et al. (2013b), distributed representations have been widely used in NLP owing to their low dimensionality and effectiveness at capturing semantic similarity. Most researchers focus on the encoder part, which includes Maximum Entropy (ME) (Berger et al., 1996), feed-forward neural networks (Zheng et al., 2013), recursive neural networks (Wang and Xu, 2017), long short-term memory (LSTM) (Chen et al., 2015), pre-trained deep bidirectional Transformers such as BERT (Tian et al., 2020), and other models. As for the decoder part, in addition to softmax, Conditional Random Fields (CRF) (Lafferty et al., 2001) usually play a vital role because they can exploit rich contextual features in the labeling process.
+
+With the prevalence of pre-training and fine-tuning, Transformer-based pre-trained models have dominated the field of CWS in recent years. Given sufficient training data, pre-trained models (Nakkiran et al., 2020; Xu et al., 2020) have achieved remarkable results. However, these works may suffer from poor prediction accuracy when rare or out-of-vocabulary (OOV) words exist. Moreover, Huang and Zhao (2007) confirm that the loss of word segmentation accuracy caused by OOV words is at least 5 times greater than that caused by segmentation ambiguity. We believe that improving the accuracy on OOV words is worthy of further exploration.
+
+Unlike traditional Knowledge Distillation (KD) methods, self distillation teaches a student network by itself instead of by a separate teacher (Xu and Liu, 2019; Zhang et al., 2019). Specifically, after each training epoch, the best student model so far, or the student model from the last iteration, is saved as the teacher model for the next training epoch to teach the student itself.
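+
+A minimal sketch of this epoch-level loop is given below; `train_one_epoch` and `evaluate` are passed in as placeholder functions (the actual training step, including the weight modules, is described in Section 3), so this only illustrates how the teacher snapshot is refreshed.
+
+```python
+import copy
+
+def self_distillation(student, train_one_epoch, evaluate, train_data, dev_data, epochs):
+    """The best student so far is frozen and reused as the teacher for the next epoch."""
+    teacher = copy.deepcopy(student)            # initial teacher is a copy of the student
+    best_score = float("-inf")
+    for _ in range(epochs):
+        train_one_epoch(student, teacher, train_data)   # teacher supplies pseudo labels
+        score = evaluate(student, dev_data)
+        if score > best_score:
+            best_score = score
+            teacher = copy.deepcopy(student)    # save the best student as the new teacher
+    return student
+```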
+
+Moreover, we believe that the student model should study knowledge selectively according to the importance of the information, so adding a weight matrix to the training process is a practical solution. Different from the temperature-based distillation technology proposed by Hinton et al. (2015), we utilize the information gap between the pseudo labels, predicted by the teacher or student model, and the real labels to obtain the hand-crafted weight matrix. From another perspective, the process of acquiring weight matrices can also be seen as a kind of communication between the teacher and the student. Finally, to demonstrate the impact of WeiDC more precisely, we deliberately ignore all external information.
+
+Our contributions are summarized below. We propose WeiDC, which only requires unigram features and adopts self-distillation technology with four hand-crafted weight modules and two teacher-model configurations. Considering that there are few existing choices of weight measures, it is also a challenge to design a feasible method to obtain rational weight values. We also perform various experiments, such as testing robustness in low-resource settings, and explore the efficiency of our framework by combining different encoders and decoders. Experimental results on four widely used benchmark datasets confirm that WeiDC achieves state-of-the-art or competitive performance, especially in OOV recall.
+
+# 2 Related Work
+
+Xue and Converse (2002) first treat CWS as a sequence labeling task and use a maximum entropy tagger to train on the dataset. Xu (2003) shows the unique appeal of the sequence labeling method in the CWS bakeoffs (Sproat and Emerson, 2003), especially its results on $\mathsf{R}_{\mathsf{OOV}}$ (recall of out-of-vocabulary words). People thus turned their attention to sequence labeling methods (Peng et al., 2004; Zhao et al., 2006; Zhao and Kit, 2008). Huang and Zhao (2007) conclude that treating word segmentation as a character labeling problem can balance the recognition of in-vocabulary and out-of-vocabulary words, because all words are recognized through one unified character tagging process. In general, our research is related to the following works.
+
+Pre-trained Frameworks Transformer-based pre-trained models, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and ZEN (Diao et al., 2020), have demonstrated excellent performance on CWS tasks. Qiu et al. (2020) propose one unified model for multi-criteria CWS by leveraging the powerful Transformer encoder. Huang et al. (2020) also use BERT to capture various annotation criteria among datasets. Ke et al. (2021) propose a CWS-specific pre-trained model, METASEG. Tian et al. (2020) and Liu et al. (2021) consider combining lexicon features with BERT for CWS. Huang et al. (2021) propose a semi-supervised neural method based on a RoBERTa encoder using pseudo labels.
+
+Knowledge Distillation Hinton et al. (2015) first propose knowledge distillation, using a larger network to teach a smaller network. Tang et al. (2019) choose to distill knowledge from BERT, a state-of-the-art language representation model, into a simple heterogeneous model. Huang et al. (2020) also extract knowledge from BERT to a truncated (3 or 6 layers) BERT to balance computational cost and segmentation accuracy on CWS tasks. Jiao et al. (2020) adopt multiple distilling strategies to reduce the number of parameters of the pre-trained language models. Huang et al. (2021) collect massive unlabeled data and distill knowledge from the teacher model to the student model by generating pseudo labels. Zhang et al. (2019) put forward self-distillation, which has recently been used in computer vision, but not commonly used in NLP.
+
+To summarize, for further improving word segmentation accuracy, many researchers make use of lexicon information (Tian et al., 2020; Liu et al., 2021), multi-criteria label data (Chen et al., 2017; Huang et al., 2020; Qiu et al., 2020; Ke et al., 2020) and even unlabeled data (Sun and Xu, 2011; Zhang et al., 2013; Huang et al., 2021).
+
+# 3 The WeiDC Framework
+
+Huang and Zhao (2007) point out that CWS, as the first step of most Chinese information processing systems, usually relies on shallow information in the text, such as character features, which is distinct from the idea of "understand first and then segment". As shown in Figure 2, we adopt the traditional word segmentation scheme but add self distillation and weight modules to the training phase.
+
+# 3.1 The Sequential Part
+
+The traditional word segmentation scheme consists of the Embedding layer, the Encoder layer, and the Decoder layer.
+
+
+Figure 2: The WeiDC framework. The sentence, "千载难逢天外客" ("A once-in-a-lifetime visitor from outside the sky"), is from the MSR testing corpus. It is difficult to segment "天外客" ("a visitor from outside the sky").
+
+Formally, $x$ denotes an input character sequence, $x = [x_{1}, x_{2}, \ldots, x_{n}]$ , and $y$ the corresponding label sequence, $y = [y_{1}, y_{2}, \ldots, y_{n}]$ . We choose the BERT model to obtain character embeddings and to encode them. After that, the encoder's outputs are fed into the decoder layer to obtain the predicted tags.
+
+Embedding layer We use BertTokenizer to obtain our input representations. Each character embedding consists of a token embedding and a position embedding. We do not need to consider the Next Sentence Prediction task, so we remove the token_type embedding. Additionally, to easily explore various weight mechanisms, WeiDC ignores unlabeled data and n-gram features.
+
+Encoder layer Once obtained, the character embeddings are fed into an encoder, such as BERT or one of its derivative models. We choose the bert-base-chinese $^2$ checkpoint and only need config.json, pytorch_model.bin, and vocab.txt for training. Vaswani et al. (2017) give a thorough description of the Transformer, on which BERT is based, so we omit the background description here. Furthermore, we also take RoBERTa $^3$ as our encoder to explore the impact of different pre-trained models on the CWS experiments.
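+
+As a minimal sketch of this setup (assuming the Hugging Face transformers interface and an illustrative linear tag head, which is not necessarily WeiDC's exact head), character-level hidden states from bert-base-chinese can be mapped to [B, I, E, S] logits as follows.
+
+```python
+import torch
+import torch.nn as nn
+from transformers import BertModel, BertTokenizer
+
+tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
+encoder = BertModel.from_pretrained("bert-base-chinese")
+tag_head = nn.Linear(encoder.config.hidden_size, 4)        # logits over [B, I, E, S]
+
+inputs = tokenizer("我喜欢大自然。", return_tensors="pt")   # token and position embeddings are built inside BERT
+with torch.no_grad():
+    hidden = encoder(**inputs).last_hidden_state            # [1, seq_len, hidden_size]
+logits = tag_head(hidden)
+tags = logits.argmax(dim=-1)                                # predicted tag ids (includes [CLS]/[SEP] positions)
+```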
+
+Decoder layer Lafferty et al. (2001) present CRFs, which, compared with Hidden Markov Models, build probabilistic models to label and segment sequence data under weaker independence assumptions.
+
+$$
+p \left(y _ {i} \mid x _ {i}\right) = \frac {\exp \left(W _ {c} \cdot z _ {i} + b _ {c}\right)}{\sum_ {y _ {i - 1} y _ {i}} \exp \left(W _ {c} \cdot z _ {i} + b _ {c}\right)} \tag {1}
+$$
+
+In addition, softmax is also a common decoder, which efficiently converts logits into probabilities while ignoring dependencies between labels.
+
+$$
+p \left(y _ {i} \mid x _ {i}\right) = \log \frac {\exp \left(z _ {i} ^ {d}\right)}{\sum_ {d} ^ {\mathcal {D}} \exp \left(z _ {i} ^ {d}\right)} \tag {2}
+$$
+
+where $z_{i}\in \mathbb{R}^{|\mathcal{D}|}$ is the logit vector and $z_{i}^{d}$ is its value at dimension $d$ . $p(y_{i}|x_{i})$ is the corresponding probability. $W_{c}\in \mathbb{R}^{|\mathcal{D}|\times |\mathcal{D}|}$ and $b_{c}\in \mathbb{R}^{|\mathcal{D}|}$ are trainable parameters of the CRF. $y_{i - 1}y_{i}$ models the transition from $y_{i - 1}$ to $y_{i}$ .
+
+We then take the most probable label sequence from the probability $(p(y|x))$ as the predicted labels $(\hat{y})$:
+
+$$
+\hat{y} = \operatorname{argmax}_{y} \, p(y \mid x) \tag{3}
+$$
+
+Through comparative experiments, Qiu et al. (2020) conclude that using a CRF makes little difference. Since the CRF is more complex and costlier to train, we mainly use softmax to decode the logits, making full use of computing resources.
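+
+As a minimal sketch of softmax decoding (Equations 2 and 3), assuming per-character logits in PyTorch and the [B, I, E, S] tag set used in our experiments (Section 4.3):
+
+```python
+import torch
+import torch.nn.functional as F
+
+LABELS = ["B", "I", "E", "S"]  # the tag set D
+
+def softmax_decode(logits):
+    """Eqs. (2)-(3): turn per-character logits of shape (n, |D|) into predicted tags."""
+    probs = F.softmax(logits, dim=-1)   # p(y_i | x_i) for each character
+    pred = probs.argmax(dim=-1)         # Eq. (3): argmax over the label dimension
+    return [LABELS[i] for i in pred.tolist()]
+
+# Toy example with 3 characters and 4 labels (illustrative values only).
+logits = torch.tensor([[2.0, 0.1, 0.1, 0.3],
+                       [0.2, 1.5, 0.4, 0.1],
+                       [0.1, 0.2, 2.2, 0.3]])
+print(softmax_decode(logits))  # ['B', 'I', 'E']
+```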
+
+# 3.2 Weight Mechanism
+
+During one training epoch, the pseudo labels $(\hat{y})$ from $t$ or $s$ are compared with the corresponding true labels $(y)$, as expressed in Equation 4, where $t$ and $s$ indicate that $\hat{y}$ comes from the teacher model or the student model, respectively. $\eta$ denotes the difference between $\hat{y}$ and $y$.
+
+$$
+\eta_ {m} = | \hat {y} _ {m} - y |, m = t, s \tag {4}
+$$
+
+Equation 4 uses an absolute-value operation: when a pseudo label equals the corresponding true label, the result is 0; otherwise, it is a positive number. Since this is the opposite of what we want, we further apply Equations 5 and 6.
+
+$$
+F (j) = \left\{ \begin{array}{l l} 0, & j = 0 \\ 1, & j \neq 0 \end{array} \right. \tag {5}
+$$
+
+$F(j)$ maps all positive numbers to 1, where $j$ is a placeholder variable. The intermediate value is then processed by Equation 6 to obtain the final result.
+
+$$
+\eta_ {m} = 1 - F (\eta_ {m}) \tag {6}
+$$
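+
+Equations 4–6 thus reduce to a simple correctness indicator: $\eta_{m}$ is 1 for characters on which model $m$ predicts the correct label and 0 otherwise. A small sketch, assuming integer label ids stored in PyTorch tensors:
+
+```python
+import torch
+
+def eta(pseudo_labels, true_labels):
+    """Eqs. (4)-(6): 1 where the teacher/student prediction is correct, 0 otherwise."""
+    diff = (pseudo_labels - true_labels).abs()  # Eq. (4): 0 if correct, positive otherwise
+    f = (diff != 0).long()                      # Eq. (5): map every positive value to 1
+    return 1 - f                                # Eq. (6): flip so that correct -> 1, wrong -> 0
+
+y     = torch.tensor([0, 1, 2, 3])   # true label ids
+y_hat = torch.tensor([0, 2, 2, 1])   # pseudo labels from the teacher or student
+print(eta(y_hat, y))                 # tensor([1, 0, 1, 0])
+```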
+
+We hope there is sufficient communication between the teacher and the student to obtain a reasonable weight value, so we design Equation 7, where $w_{wei}^{1}$ is the first type of weight vector.
+
+$$
+w _ {w e i} ^ {1} = \eta_ {t} + \eta_ {s} + 1 \tag {7}
+$$
+
+The intuition behind Equation 7 is simple: during distillation, samples predicted more accurately are given more attention, while samples predicted less accurately are given less attention. Moreover, to avoid losing the basic information carried by each sample, we ensure that the minimum value of $w_{wei}^{1}$ is 1 by adding 1.
+
+We also notice that $\eta_t$ and $\eta_s$ may contain various amounts of knowledge. Therefore, we multiply $\eta_t$ or $\eta_s$ by 2 to get equations 8 and 9, respectively. Certainly, other coefficients can also be selected according to actual needs.
+
+$$
+w _ {w e i} ^ {2} = 2 \cdot \eta_ {t} + \eta_ {s} + 1 \tag {8}
+$$
+
+$$
+w _ {w e i} ^ {3} = \eta_ {t} + 2 \cdot \eta_ {s} + 1 \tag {9}
+$$
+
+From another perspective, if the teacher model is correct and the student model is wrong, this kind of knowledge should be more valuable. We thus get another calculation method, which is described in equation 10, to obtain the weight vector.
+
+$$
+w _ {w e i} ^ {4} = 2 \cdot \eta_ {t} - \eta_ {s} + 2 \tag {10}
+$$
+
+We must add 2 to ensure that the minimum value of $w_{wei}^{4}$ is 1.
+
+Finally, according to different weight modules, all possible values of a single character (marked as k) are shown in Table 1. The above four weight mechanisms show that different key factors affect the weight value. In other words, for the same pseudo label, different reference factors will lead to various weight values.
+
+| $\eta_{t}^{k}$ | $\eta_{s}^{k}$ | $w_{wei,k}^{1}$ | $w_{wei,k}^{2}$ | $w_{wei,k}^{3}$ | $w_{wei,k}^{4}$ |
+| 1 | 1 | 3 | 4 | 4 | 3 |
+| 1 | 0 | 2 | 3 | 2 | 4 |
+| 0 | 1 | 2 | 2 | 3 | 1 |
+| 0 | 0 | 1 | 1 | 1 | 2 |
+
+Table 1: All possible weight values of character k.
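+
+The four weight modules are plain element-wise functions of $\eta_{t}$ and $\eta_{s}$; the sketch below reproduces the values in Table 1 for a single character.
+
+```python
+def weight_vectors(eta_t, eta_s):
+    """Eqs. (7)-(10) applied to the per-character indicators eta_t and eta_s."""
+    w1 = eta_t + eta_s + 1          # Eq. (7)
+    w2 = 2 * eta_t + eta_s + 1      # Eq. (8)
+    w3 = eta_t + 2 * eta_s + 1      # Eq. (9)
+    w4 = 2 * eta_t - eta_s + 2      # Eq. (10)
+    return w1, w2, w3, w4
+
+# Enumerate the four cases of Table 1.
+for et in (1, 0):
+    for es in (1, 0):
+        print(et, es, weight_vectors(et, es))
+# (1,1) -> (3, 4, 4, 3); (1,0) -> (2, 3, 2, 4); (0,1) -> (2, 2, 3, 1); (0,0) -> (1, 1, 1, 2)
+```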
+
+For example, if we consider that low-frequency words better reflect a model's performance, we can increase their weights so that misclassifying them incurs a larger loss. As a result, the student model will pay more attention to low-frequency words.
+
+According to different distillation scenarios or learning needs, it is necessary to choose appropriate reference factors to design weight calculation methods. Here, we take the segmentation difficulty of words as a reference standard.
+
+# 3.3 Distillation
+
+Unlike self-training, self-distillation exploits the potential of the model itself in a fully supervised way, requiring no auxiliary models or data. In this paper, the teacher model comes from two sources: either the student model from the last iteration $(D_{last})$ or the student model with the best historical performance $(D_{best})$.
+
+The student also learns from two sources of information, predicted probabilities from the teacher and one-hot ground-truth label. Hence, the final loss $(\mathcal{L}_{KD})$ consists of two parts, cross-entropy loss $(\mathcal{L}_{CE})$ and distillation loss $(\mathcal{L}_{Distill})$ :
+
+$$
+\mathcal {L} _ {K D} = (1 - \alpha) \cdot \mathcal {L} _ {C E} + \alpha \cdot \mathcal {L} _ {\text {D i s t i l l}} \tag {11}
+$$
+
+To balance the above two losses, we need a coefficient $\alpha$ , which is also set to a fixed value during the training phase.
+
+$\mathcal{L}_{CE}$ penalizes the cross-entropy between the predicted label $(\hat{y})$ and the true label $(y)$:
+
+$$
+\mathcal {L} _ {C E} = - \sum_ {x} y \log \hat {y} _ {(x)} \tag {12}
+$$
+
+$\mathcal{L}_{Distill}$ minimizes the weighted mean-squared error between the teacher's logits $(z^{(T)})$ and the student's logits $(z^{(S)})$, where $w_{wei}$ can be any of the four weight types above.
+
+$$
+\mathcal {L} _ {\text {D i s t i l l}} = \left\| w _ {w e i} \cdot z ^ {(T)} - w _ {w e i} \cdot z ^ {(S)} \right\| _ {2} ^ {2} \tag {13}
+$$
+
+To better isolate the effect of WeiDC, we do not use temperature scaling during distillation.
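+
+A minimal sketch of the combined loss (Equations 11–13) is given below, assuming per-character logits and integer gold labels. Whether the squared L2 norm in Equation 13 is summed or averaged over characters is not specified here, so the sketch simply sums it.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def weidc_loss(student_logits, teacher_logits, labels, w_wei, alpha=0.3):
+    """Eq. (11): L_KD = (1 - alpha) * L_CE + alpha * L_Distill.
+
+    student_logits, teacher_logits: (n, |D|) per-character logits
+    labels: (n,) gold label ids; w_wei: (n,) weights from any of Eqs. (7)-(10)
+    """
+    ce = F.cross_entropy(student_logits, labels)                # Eq. (12)
+    w = w_wei.float().unsqueeze(-1)                             # broadcast over the label dimension
+    teacher = teacher_logits.detach()                           # no gradient flows into the teacher
+    distill = ((w * teacher - w * student_logits) ** 2).sum()   # Eq. (13): weighted squared L2 norm
+    return (1 - alpha) * ce + alpha * distill
+```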
+
+| Dataset | MSR | PKU | AS | CITYU |
| train | test | train | test | train | test | train | test |
| Char | 4,050K | 184K | 1,826K | 173K | 8,368K | 198K | 2,403K | 68K |
| Word | 2,368K | 107K | 1,110K | 104K | 5,450K | 123K | 1,456K | 41K |
| Char types | 5,168 | 2,838 | 4,698 | 2,934 | 5,979 | 3,628 | 4,832 | 2,663 |
| Word types | 88,119 | 12,923 | 55,303 | 13,148 | 141,339 | 18,759 | 69,085 | 8,993 |
| OOV Rate | - | 2.7 | - | 5.8 | - | 4.3 | - | 7.2 |
+
+Table 2: Corpus details of four CWS datasets
+
+Distinct from previous studies on knowledge distillation, our framework adds the weight mechanism, allowing the teacher and the student to communicate fully to focus on more valuable knowledge. Furthermore, the teacher is not a static model but dynamically evolves as training proceeds. Hence, the weight vector will also alter as the teacher model changes so that the student model can learn richer knowledge.
+
+# 4 Experiments
+
+# 4.1 Dataset and Evaluation Metric
+
+The second SIGHAN international Chinese word segmentation bakeoff (Emerson, 2005), which includes the MSR, PKU, AS, and CITYU datasets, is frequently used for CWS tasks. Since AS and CITYU are in traditional Chinese characters, we convert them into simplified characters, following previous studies (Chen et al., 2015; Qiu et al., 2020; Tian et al., 2020). We use these datasets in the following experiments; corpus details are listed in Table 2.
+
+We choose precision (P), recall (R), F-score, and $R_{OOV}$ (the recall of out-of-vocabulary (OOV) words) to evaluate segmentation performance. Specifically, we first record the word vocabulary of the complete training corpus and then divide the corpus into a training set and a validation set. Moreover, we use no extra resources beyond the training corpus to train our model.
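+
+For reference, the word-level metrics can be computed as sketched below: a predicted word counts as correct if its character span exactly matches a gold word, and $R_{OOV}$ is the same recall restricted to gold words absent from the training vocabulary. This follows the common CWS evaluation convention and is shown only as an assumed illustration, not our exact evaluation script.
+
+```python
+def to_spans(words):
+    """Convert a segmented sentence (list of words) into (start, end, word) character spans."""
+    spans, pos = [], 0
+    for w in words:
+        spans.append((pos, pos + len(w), w))
+        pos += len(w)
+    return spans
+
+def evaluate(gold_sents, pred_sents, train_vocab):
+    """Word-level P, R, F and OOV recall over parallel lists of segmented sentences."""
+    tp = n_gold = n_pred = oov_hit = n_oov = 0
+    for gold, pred in zip(gold_sents, pred_sents):
+        g, p = to_spans(gold), to_spans(pred)
+        p_set = {(s, e) for s, e, _ in p}
+        n_gold += len(g)
+        n_pred += len(p)
+        for s, e, w in g:
+            hit = (s, e) in p_set
+            tp += hit
+            if w not in train_vocab:      # OOV with respect to the training corpus
+                n_oov += 1
+                oov_hit += hit
+    precision = tp / n_pred if n_pred else 0.0
+    recall = tp / n_gold if n_gold else 0.0
+    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
+    r_oov = oov_hit / n_oov if n_oov else 0.0
+    return precision, recall, f_score, r_oov
+```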
+
+# 4.2 Baselines
+
+According to whether to use a pre-trained model such as BERT as the encoder, we have selected two types of baselines, Non-pretrained Models and Pre-trained Models.
+
+Non-pretrained Models Chen et al. (2017) propose adversarial multi-criteria learning for CWS tasks by exploiting the underlying shared knowledge across multiple heterogeneous criteria. Ma et al. (2018) also point out that using external knowledge can improve the CWS accuracy. Gong et al. (2019) provide a more flexible solution to transfer the learned information to new criteria. They all use the bidirectional LSTM encoder. Qiu et al. (2020) propose one unified model for multi-criteria CWS based on the Transformer encoder. Through the Gaussian-masked Directional (GD) Transformer, Duan and Zhao (2020) try to further strengthen the model itself to perfect CWS tasks.
+
+Pre-trained Models Huang et al. (2020) propose a domain adaptive segmenter to exploit various open-domain knowledge. Tian et al. (2020) use key-value memory networks to incorporate word-hood information with BERT or ZEN as the encoder. Ke et al. (2021) put forward a CWS-specific pre-trained model to alleviate the discrepancy between pre-trained models and downstream CWS tasks. Nguyen et al. (2021) propose a span labeling approach to model n-gram features for word segmentation.
+
+# 4.3 Training Details
+
+All experiments are run on hardware with an Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz and an NVIDIA Tesla V100 GPU. Following previous works (Ma et al., 2018; Qiu et al., 2020), we randomly select $10\%$ of the training data for development and only use the test set at the end of the training phase. Similar to previous work (Tian et al., 2020), we apply the same preprocessing measures to all datasets.
+
+During fine-tuning, we use Adam with a learning rate of 2e-5. Both train_batch_size and eval_batch_size are 16. As for the trade-off hyperparameter $(\alpha)$, we randomly select $1\%$ of the training set to explore the influence of various $\alpha$ values on WeiDC, and observe that WeiDC performs best when $\alpha$ is 0.3.
+
+Besides, we train all models for up to 50 epochs with early stopping strategies, such as "patient epochs"
+
+| Model | MSR | PKU | AS | CITYU | AVG |
| F | ROOV | F | ROOV | F | ROOV | F | ROOV | F | ROOV |
| Chen et al. (2017) * | 96.04 | 71.6 | 94.32 | 72.67 | 94.75 | 75.37 | 95.55 | 81.4 | 95.17 | 75.26 |
| Ma et al. (2018) † | 98.1 | 80.0 | 96.1 | 78.8 | 96.2 | 70.7 | 97.2 | 87.5 | 96.9 | 79.25 |
| Gong et al. (2019) * | 97.78 | 64.2 | 96.15 | 69.88 | 95.22 | 77.33 | 96.22 | 73.58 | 96.34 | 77.82 |
| Qiu et al. (2020) *† | 98.05 | 78.92 | 96.41 | 78.91 | 96.44 | 76.39 | 96.91 | 86.91 | 96.95 | 80.28 |
| Duan and Zhao (2020) | 97.6 | - | 95.5 | - | 95.7 | - | 95.4 | - | 96.05 | - |
| Huang et al. (2020) * | 97.9 | 84.0 | 96.7 | 81.6 | 96.7 | 77.3 | 97.6 | 90.1 | 97.23 | 83.25 |
| Tian et al. (2020) †(BERT) | 98.28 | 86.67 | 96.51 | 86.76 | 96.58 | 78.48 | 97.8 | 87.57 | 97.29 | 84.87 |
| Tian et al. (2020) †(ZEN) | 98.4 | 84.87 | 96.53 | 85.36 | 96.62 | 79.64 | 97.93 | 90.15 | 97.37 | 85.0 |
| Ke et al. (2021) *‡ | 98.5 | 83.03 | 96.92 | 80.9 | 97.01 | 80.89 | 98.2 | 90.66 | 97.66 | 83.87 |
| Nguyen et al. (2021) † | 98.31 | 85.32 | 96.56 | 85.83 | 96.62 | 79.36 | 97.74 | 87.45 | 97.31 | 84.49 |
| WeiDC (BERT) | 98.28 | 86.39 | 96.59 | 87.21 | 96.76 | 80.23 | 97.79 | 87.58 | 97.36 | 85.35 |
| WeiDC (RoBERTa) | 98.43 | 87.17 | 96.74 | 87.48 | 96.59 | 79.26 | 97.95 | 89.93 | 97.43 | 85.96 |
+
+of 3 and a "minimum F value" of 0.0001. Specifically, when the gap between the current F-score and the best F-score so far is less than 0.0001, we do not replace the saved model, to avoid updating the teacher model too frequently. Table 4 summarizes all the key hyperparameters.
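+
+One plausible reading of this early-stopping and teacher-update rule is sketched below; run_epoch and save_teacher are hypothetical callables standing in for one epoch of training plus development evaluation and for checkpointing $D_{best}$ (the teacher), respectively.
+
+```python
+def train_with_early_stopping(run_epoch, save_teacher, num_epochs=50,
+                              patience=3, min_f_delta=1e-4):
+    """Sketch of the stopping rule; run_epoch() returns the development F-score."""
+    best_f, stale = 0.0, 0
+    for _ in range(num_epochs):
+        dev_f = run_epoch()
+        if dev_f - best_f >= min_f_delta:
+            best_f = dev_f
+            save_teacher()          # this checkpoint becomes D_best, the new teacher
+            stale = 0
+        else:
+            # improvement below the "minimum F value": keep the current teacher
+            stale += 1
+            if stale >= patience:   # "patient epochs"
+                break
+    return best_f
+```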
+
+Table 3: The first two blocks list the non-pretrained and pre-trained baselines, respectively; the last block shows our scores. $\star$ uses a multi-criteria learning framework, which means that the marked training data differ from the rest. $\dagger$ uses lexicons or n-gram features. $\ddagger$ uses a CWS-specific pre-trained model.
+
+| minimum F value | 1e-4 | train_batch_size | 16 |
| num_train_epochs | 50 | eval_batch_size | 16 |
| patient_epochs | 3 | learning_rate | 2e-5 |
| train : eval | 9 : 1 | alpha (α) | 0.3 |
+
+Table 4: Hyper parameters of WeiDC.
+
+We adopt the [B, I, E, S] tagging scheme in our experiments. To explore the influence of diverse weight modules on CWS, we only use BERT and RoBERTa as encoders. For BERT, we follow the default settings in the original paper (Devlin et al., 2019). In addition to combining the four weight modules and two types of teacher models, we also conduct some exploratory experiments, such as testing the performance of WeiDC on a small amount of training data.
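+
+Under the [B, I, E, S] scheme, predicted tags are mapped back to a segmentation deterministically, as in the small sketch below.
+
+```python
+def bies_to_words(chars, tags):
+    """Convert [B, I, E, S] character tags back into a list of words."""
+    words, buf = [], ""
+    for ch, tag in zip(chars, tags):
+        if tag == "S":              # single-character word
+            words.append(ch)
+        elif tag == "B":            # beginning of a multi-character word
+            buf = ch
+        elif tag == "I":            # inside a word
+            buf += ch
+        else:                       # "E": end of a word
+            words.append(buf + ch)
+            buf = ""
+    return words
+
+print(bies_to_words("千载难逢天外客", list("BIIEBES")))
+# ['千载难逢', '天外', '客'] -- the gold segmentation of the Figure 2 example
+```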
+
+# 5 Results and Analysis
+
+In this section, we first report the results of WeiDC and compare them with state-of-the-art work. Then we explore the robustness of WeiDC through extensive experiments in different low-resource settings. We also analyze the impact of OOV words on the model. Finally, we perform several NER tasks to test WeiDC's effectiveness.
+
+# 5.1 Main Results
+
+Several observations are drawn from Table 3 and Table 5, where the overall F-score and OOV recall are all reported.
+
+First, Table 3 demonstrates that pre-trained models, which carry abundant prior knowledge, perform better than non-pretrained models, especially in OOV recall. Compared with the baselines listed in Table 3, these results not only confirm that self-distillation and the weight mechanism benefit CWS without any auxiliary data or CWS-specific pre-trained models, but also illustrate that the design of WeiDC enhances the model's learning ability.
+
+Second, as shown in Table 5, WeiDC achieves encouraging results on $\mathbf{R}_{OOV}$ while maintaining competitive F-scores. For instance, with BERT as the encoder, WeiDC improves the average F-score by $0.16\%$, from $97.2\%$ to $97.36\%$, and the average $\mathbf{R}_{OOV}$ score by $1.71\%$, from $83.64\%$ to $85.35\%$.
+
+Third, in most cases, $D_{best}$ outperforms $D_{last}$ , and we speculate that updating the teacher model too frequently will be detrimental to the learning process of the student model. Besides, different CWS tasks need various weight modules, so it is essential to choose reasonable weight mechanisms according to the characteristics of datasets.
+
+Fourth, with BERT as the encoder and softmax as the decoder, our base model is already strong, but WeiDC still delivers a decent improvement on $\mathbf{R}_{OOV}$ scores.
+
+| Model | MSR | PKU | AS | CITYU | AVG |
| F | ROOV | F | ROOV | F | ROOV | F | ROOV | F | ROOV |
| BERT(base) | 98.22 | 85.22 | 96.5 | 85.6 | 96.44 | 77.37 | 97.63 | 86.35 | 97.2 | 83.64 |
| +Dbest | 98.22 | 85.58 | 96.59 | 87.04 | 96.64 | 79.51 | 97.68 | 86.52 | 97.28 | 84.66 |
| +Dbest + w2wei | 98.17 | 86.07 | 96.53 | 88.03 | 96.71 | 80.57 | 97.6 | 85.4 | 97.25 | 85.02 |
| +Dbest + w4wei | 98.28 | 86.39 | 96.59 | 87.21 | 96.76 | 80.23 | 97.79 | 87.58 | 97.36 | 85.35 |
| RoBERTa(base) | 98.33 | 86.74 | 96.58 | 87.04 | 96.34 | 76.14 | 97.8 | 88.8 | 97.26 | 84.68 |
| +Dbest | 98.43 | 86.67 | 96.56 | 86.34 | 96.52 | 78.47 | 97.84 | 89.38 | 97.34 | 85.22 |
| +Dbest + w2wei | 98.33 | 86.21 | 96.79 | 88.34 | 96.6 | 79.26 | 97.96 | 90.33 | 97.42 | 86.04 |
| +Dbest + w4wei | 98.43 | 87.17 | 96.74 | 87.48 | 96.59 | 79.26 | 97.95 | 89.93 | 97.43 | 85.96 |
+
+Table 5: Ablation studies combining self distillation and four weight modules. Complete results can be found in the Appendix Tables 10 and 11.
+
+| Sampling Rates | 1% | 5% | 10% | 20% | 50% | 80% | 100% | AVG |
| F | RooV | F | RooV | F | RooV | F | RooV | F | RooV | F | RooV | F | RooV | F | RooV |
| BERT(base) | 93.92 | 83.38 | 94.37 | 77.65 | 94.72 | 76.74 | 95.83 | 83.46 | 96.15 | 85.13 | 96.33 | 84.18 | 96.5 | 85.6 | 95.4 | 82.31 |
| +Dbest | 93.7 | 82.95 | 95 | 82.33 | 95.79 | 86.56 | 95.98 | 85.63 | 96.34 | 85.6 | 96.36 | 84.91 | 96.59 | 87.04 | 95.68 | 85.0 |
| +Dbest+w2cei | 93.29 | 83.3 | 95.37 | 87.86 | 95.69 | 87.36 | 95.82 | 86.39 | 96.35 | 87.96 | 96.56 | 87.73 | 96.53 | 88.03 | 95.66 | 86.95 |
+
+Table 6: Scores on PKU test set in low-resource settings.
+
+Specifically, under the current experimental conditions (listed in Table 4), $w_{wei}^{4}$ has the best overall performance on all datasets, while $w_{wei}^{3}$ has the worst.
+
+Last, RoBERTa outperforms BERT when we deal with the CWS task. If CRF is used as the decoder, the CWS model seems to be more prone to overfitting, resulting in worse word segmentation.
+
+# 5.2 Low-Resource Settings
+
+In practice, the training corpus is often insufficient, so it is valuable to measure the performance of CWS models in low-resource settings. The partitioning of our training sets follows Ke et al. (2021), with sampling rates of 0.01, 0.05, 0.1, 0.2, 0.5, 0.8, and 1.0. For simplicity, we obtain these training subsets by shuffling the original training set, but always evaluate on the same original test set.
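+
+The subsets can be produced by simple random subsampling, for example as in the sketch below (with a toy stand-in corpus; the real experiments use the PKU training set).
+
+```python
+import random
+
+def subsample(train_sents, rate, seed=42):
+    """Shuffle the original training sentences and keep the first `rate` fraction."""
+    data = list(train_sents)
+    random.Random(seed).shuffle(data)
+    return data[: max(1, int(len(data) * rate))]
+
+rates = [0.01, 0.05, 0.1, 0.2, 0.5, 0.8, 1.0]         # sampling rates following Ke et al. (2021)
+toy_train = [f"sentence {i}" for i in range(100)]     # hypothetical stand-in for the PKU corpus
+subsets = {r: subsample(toy_train, r) for r in rates}
+print({r: len(s) for r, s in subsets.items()})        # {0.01: 1, 0.05: 5, ..., 1.0: 100}
+```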
+
+We perform the above experiments on PKU without changing any parameters in Table 4. We take BERT as the base model and gradually add $D_{best}$ and $w_{wei}^2$. The results are shown in Table 6.
+
+We notice that the performance of all models is greatly affected by the sampling rate, especially at low ratios such as $1\%$ and $5\%$. In addition, self-distillation can significantly improve CWS, and the weight mechanisms can further increase the $\mathbf{R}_{OOV}$ scores.
+
+Specifically, when the sampling rate drops from $100\%$ to $5\%$, "BERT + $D_{best}$" and "BERT + $D_{best}$ + $w_{wei}^{2}$" retain better F-scores than "BERT". For $R_{OOV}$ scores, "BERT" decreases by $7.95\%$ while "BERT + $D_{best}$" only decreases by $4.71\%$. Surprisingly, "BERT + $D_{best}$ + $w_{wei}^{2}$" almost always maintains high $R_{OOV}$ scores, fluctuating between $87\%$ and $88\%$. We do not pay too much attention to the $1\%$ setting, because the sample size may be too small to reflect the real performance of the models.
+
+Generally speaking, the above results confirm that WeiDC has strong robustness when manual annotation resources are insufficient.
+
+# 5.3 OOV Words
+
+The above experiments show that WeiDC works well on $\mathbf{R}_{OOV}$. To further verify each model's performance on OOV words, we train all models on the PKU training corpus but evaluate them on the other test sets.
+
+We first quantified the discrepancy between the PKU training set and the test sets of MSR, AS, and CITYU. For visual comparison, we also list the distribution of OOV words in the PKU test set; see Table 7 for details. Note that both AS and CITYU originate from traditional Chinese, where word forms may differ slightly, such as "铁公路" ("iron road") in CITYU versus "铁路" ("railway") in PKU.
+
+As shown in Table 8, WeiDC performs better than the base model on almost all three test sets, especially in $R_{OOV}$. According to Table 7 and Table 8, the effect of WeiDC on test sets with a higher
+
+| $OOV_{word}$ | PKU | MSR | AS | CITYU |
| Type | Freq | Type | Freq | Type | Freq | Type | Freq |
| NotInPKU_Train | 2863 | 6006 | 4100 | 8110 | 8386 | 18006 | 3099 | 6726 |
| All Test Word | 13148 | 104372 | 12923 | 106873 | 18759 | 122610 | 8993 | 40936 |
| OOV Rate | 21.78 | 5.75 | 31.73 | 7.60 | 44.70 | 14.69 | 34.46 | 16.43 |
+
+Table 7: OOV words for the four CWS test sets. "NotInPKU_Train" represents words that appear in the test set while not in the PKU training set. Column "Type" only includes the type of OOV word, but column "Freq" considers the frequency.
+
+| Model | MSR | AS | CITYU |
| F | ROOV | F | ROOV | F | ROOV |
| BERT(base) | 86.95 | 20.51 | 90.05 | 71.82 | 90.77 | 73.51 |
| +Dbest | +0.0 | +0.88 | +0.45 | +2.38 | +0.52 | +2.2 |
| +Dbest + w2wei | -0.08 | +0.81 | +0.47 | +2.41 | +0.51 | +3.06 |
+
+frequency of OOV words is more distinct. However, the number of OOV word types seems to matter less.
+
+We finally examined the PKU and MSR datasets to find out why all models perform poorly on MSR. The word segmentation standards of the two corpora are very different: for instance, "最大" is one word ("biggest") in MSR but two words, "最" ("most") and "大" ("big"), in PKU, which directly causes all models to perform better on AS and CITYU but poorly on MSR.
+
+# 5.4 NER Tasks
+
+Similar to CWS tasks, Named Entity Recognition (NER) tasks can also be performed in the form of sequence annotations. To further explore the effectiveness of the weight mechanism and compare which weight mechanism performs better, we conduct some NER experiments. All hyperparameters are the same as the CWS task. The relevant results are shown in Appendix Table 13.
+
+We draw the following observations. First, the hand-crafted weight modules can improve sequence labeling tasks, whether CWS or NER. Second, $w_{wei}^{4}$ has the best overall performance among all weight mechanisms and is a good default choice when the characteristics of the training dataset are unclear.
+
+Moreover, the labeling rules of different datasets vary widely, so it is almost impossible to design a universal weight mechanism. This also explains why our chosen parameters do not always yield the best results. To focus on experimental exploration, we did not spend much time on parameter tuning.
+
+# 6 Case Study
+
+For CWS tasks, it is very hard to get the right segmentations if two adjacent words, such as "天外" ("outside the sky") and "客" ("guest"), both appear for the first time, as shown in Table 9. Unfortunately, WeiDC can't handle this problem properly either. However, we find that if we add some valuable context, our model can still get rational results.
+
+Table 8: Train on PKU, but test on other three datasets.
+
+| | Raw text | BERT | +Dbest + w2wei |
+| Gold | 千载难逢 天外 客 | | |
+| Original | 千载难逢天外客 | 千载难逢 天外客 | 千载难逢天外客 |
+| Replace 1 | 天外的人,千载难逢天外客 | 天外的人,千载难逢 天外客 | 天外的人,千载难逢 天外客 |
+| Replace 2 | 天外的客,千载难逢天外客 | 天外的客,千载难逢 天外客 | 天外的客,千载难逢 天外客 |
+| Replace 3 | 天外的流星,来做客,千载难逢天外客 | 天外的流星,来做客,千载难逢 天外客 | 天外的流星,来做客,千载难逢 天外客 |
+| Replace 4 | 客人说,见到了天外来的流星,千载难逢天外客 | 客人说,见到了天外来的流星,千载难逢 天外客 | 客人说,见到了天外来的流星,千载难逢 天外客 |
+
+Table 9: "千载难逢天外客" ("A once-in-a-lifetime visitor from outside the sky"). Each row gives a raw text together with the segmentation results of BERT and WeiDC (+Dbest + w2wei); the Gold row shows the reference segmentation of the target phrase. Both models are trained on PKU.
+
+Although in some cases both treating "天外客" as one word ("A visitor from outside the sky") and splitting it into "天外" ("outside the sky") and "客" ("guest") are rational segmentations, here we assume that the split form is the correct one and let the models learn it by enriching the semantic environment.
+
+First, according to "Replace 1" and "Replace 4", if only "天外" ("outside the sky") appears in the preceding text, BERT obtains the split "天外" + "客" ("outside the sky" and "guest") at the cost of inconsistent segmentation criteria for "天外" ("outside the sky"). For WeiDC, "天外客" ("A visitor from outside the sky") is regarded as a derivative of "天外" ("outside the sky"), as shown in "Replace 1". After the semantic information is enriched, the possibility of "天外" ("outside the sky") becoming an independent word increases, so the correct result is obtained. We also notice that when the text content is rich, WeiDC obtains the desired results even if there is interference such as "外来" ("from outside") in the added semantic knowledge.
+
+Second, in "Replace 2", when "的" ("of") is located between "天外" ("outside the sky") and "客" ("guest"), both BERT and WeiDC learn the correct segmentation position by treating "的" ("of") as a single-character word. We analyzed the PKU training set further and found that "的" ("of") is indeed a high-frequency single-character word. When we blur the semantic information, as in "Replace 3", WeiDC treats "天外客" ("A visitor from outside the sky") as one word, while BERT still obtains the correct segmentation. We speculate that the added interference information overwhelms the short context. From another perspective, WeiDC has a strong ability to learn contextual knowledge from different semantic environments to assist CWS tasks.
+
+Last but not least, we make heatmaps to visualize the word segmentation process in Figure 3.
+
+
+(a)"A once-in-a-lifetime visitor from outside the sky"
+
+
+(b) "A visitor from outside the sky, a once-in-a-lifetime visitor from outside the sky"
+Figure 3: Heatmaps of the label probability.
+
+# 7 Conclusion
+
+In this paper, we proposed a novel framework named WeiDC, which makes good use of the knowledge in teacher models through self-distillation. The framework follows the sequence labeling paradigm but, to the best of our knowledge, is the first to apply self-distillation and a weight mechanism to CWS, combining four hand-crafted weight modules and two types of teacher models. Experimental results show that WeiDC achieves strong performance on four CWS datasets, ranking second on average F-score and first on average $\mathrm{R}_{OOV}$ score.
+
+However, for non-sequence labeling problems such as text classification, a paragraph corresponds to only one tag, so the number of labels is very small, which may render the method in this paper less effective. How to address this limitation deserves further exploration. It is also worth investigating whether more effective weight mechanisms exist.
+
+# 8 Acknowledgments
+
+We thank the anonymous reviewers for constructive and expert comments, and the support of National Natural Science Foundation of China No. 61836005.
+
+# References
+
+Adam L. Berger, Stephen Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71.
+Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015. Long short-term memory neural networks for Chinese word segmentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1197-1206. Association for Computational Linguistics.
+Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-criteria learning for Chinese word segmentation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1193-1203. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
+Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang. 2020. ZEN: pre-training chinese text encoder enhanced by n-gram representations. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 4729-4740. Association for Computational Linguistics.
+
+Sufeng Duan and Hai Zhao. 2020. Attention is all you need for chinese word segmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 3862-3872. Association for Computational Linguistics.
+Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2005, Jeju Island, Korea, 14-15, 2005. Association for Computational Linguistics.
+Jingjing Gong, Xinchi Chen, Tao Gui, and Xipeng Qiu. 2019. Switch-lstms for multi-criteria Chinese word segmentation. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6457-6464. AAAI Press.
+Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. stat, 1050:1-9.
+Changning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. Journal of Chinese information processing, 27:8-19.
+Kaiyu Huang, Junpeng Liu, Degen Huang, Deyi Xiong, Zhuang Liu, and Jinsong Su. 2021. Enhancing Chinese word segmentation via pseudo labels for practicability. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 4369-4381. Association for Computational Linguistics.
+Weipeng Huang, Xingyi Cheng, Kunlong Chen, Taifeng Wang, and Wei Chu. 2020. Towards fast and accurate neural Chinese word segmentation with multi-criteria learning. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 2062-2072. International Committee on Computational Linguistics.
+Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. Tinybert: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 4163-4174. Association for Computational Linguistics.
+Zhen Ke, Liang Shi, Erli Meng, Bin Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Unified multi-criteria Chinese word segmentation with BERT. CoRR, abs/2004.05808.
+
+Zhen Ke, Liang Shi, Songtao Sun, Erli Meng, Bin Wang, and Xipeng Qiu. 2021. Pre-training with meta learning for chinese word segmentation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5514-5523. Association for Computational Linguistics.
+John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 - July 1, 2001, pages 282-289. Morgan Kaufmann.
+Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth Workshop on Chinese Language Processing, SIGHAN@COLING/ACL 2006, Sydney, Australia, July 22-23, 2006, pages 108-117. Association for Computational Linguistics.
+Wei Liu, Xiyan Fu, Yue Zhang, and Wenming Xiao. 2021. Lexicon enhanced chinese sequence labeling using BERT adapter. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 5847-5858. Association for Computational Linguistics.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
+Ji Ma, Kuzman Ganchev, and David Weiss. 2018. State-of-the-art Chinese word segmentation with bilstms. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4902-4908. Association for Computational Linguistics.
+Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.
+Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013,
+
+Lake Tahoe, Nevada, United States, pages 3111-3119.
+Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. 2020. Deep double descent: Where bigger models and more data hurt. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Duc-Vu Nguyen, Linh-Bao Vo, Dang Van Thin, and Ngan Luu-Thuy Nguyen. 2021. Span labeling approach for Vietnamese and chinese word segmentation. In PRICAI 2021: Trends in Artificial Intelligence - 18th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2021, Hanoi, Vietnam, November 8-12, 2021, Proceedings, Part II, volume 13032 of Lecture Notes in Computer Science, pages 244-258. Springer.
+Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In COLING 2004, 20th International Conference on Computational Linguistics, Proceedings of the Conference, 23-27 August 2004, Geneva, Switzerland, pages 562-568. COLING.
+Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 548-554. The Association for Computational Linguistics.
+Xipeng Qiu, Hengzhi Pei, Hang Yan, and Xuanjing Huang. 2020. A concise model for multi-criteria Chinese word segmentation with transformer encoder. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of *Findings of ACL*, pages 2887-2897. Association for Computational Linguistics.
+Richard Sproat and Thomas Emerson. 2003. The first international chinese word segmentation bakeoff. In Proceedings of the Second Workshop on Chinese Language Processing, SIGHAN 2003, Sapporo, Japan, July 11-12, 2003, pages 133-143.
+Weiwei Sun and Jia Xu. 2011. Enhancing Chinese word segmentation using unlabeled data. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK. A meeting of SIGDAT, a Special Interest Group of the ACL, pages 970-979. Association for Computational Linguistics.
+Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task-specific knowledge from BERT into simple neural networks. CoRR, abs/1903.12136.
+
+Yuanhe Tian, Yan Song, Fei Xia, Tong Zhang, and Yonggang Wang. 2020. Improving Chinese word segmentation with wordhood memory networks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8274-8285. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Chunqi Wang and Bo Xu. 2017. Convolutional neural network with word embeddings for Chinese word segmentation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017 - Volume 1: Long Papers, pages 163-172. Asian Federation of Natural Language Processing.
+Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020. Bert-of-theseus: Compressing BERT by progressive module replacing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 7859-7869. Association for Computational Linguistics.
+Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29-48.
+Ting-Bing Xu and Cheng-Lin Liu. 2019. Data-distortion guided self-distillation for deep neural networks. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 5565-5572. AAAI Press.
+Nianwen Xue and Susan P. Converse. 2002. Combining classifiers for Chinese word segmentation. In The First Workshop on Chinese Language Processing, SIGHAN@COLING 2002, Taipei, Taiwan, August 24 - September 1, 2002, pages 1-7.
+Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, and Kaisheng Ma. 2019. Be your own teacher: Improve the performance of convolutional neural networks via self distillation. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 3712-3721. IEEE.
+Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from unlabeled data with co-training for chinese word segmentation. In Proceedings of the 2013 Conference
+
+on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 311-321. Association for Computational Linguistics.
+Yue Zhang and Jie Yang. 2018. Chinese NER using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1554-1564. Association for Computational Linguistics.
+Hai Zhao, Changning Huang, and Mu Li. 2006. An improved chinese word segmentation system with conditional random field. In Proceedings of the Fifth Workshop on Chinese Language Processing, SIGHAN@COLING/ACL 2006, Sydney, Australia, July 22-23, 2006, pages 162-165. Association for Computational Linguistics.
+Hai Zhao and Chunyu Kit. 2008. An empirical comparison of goodness measures for unsupervised Chinese word segmentation with a unified framework. In Third International Joint Conference on Natural Language Processing, IJCNLP 2008, Hyderabad, India, January 7-12, 2008, pages 9-16. Association for Computer Linguistics.
+Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 647-657. Association for Computational Linguistics.
+
+| Model | MSR | PKU | AS | CITYU | AVG |
| F | ROOV | F | ROOV | F | ROOV | F | ROOV | F | ROOV |
| BERT(base) | 98.22 | 85.22 | 96.5 | 85.6 | 96.44 | 77.37 | 97.63 | 86.35 | 97.2 | 83.64 |
| +Dbest | 98.22 | 85.58 | 96.59 | 87.04 | 96.64 | 79.51 | 97.68 | 86.52 | 97.28 | 84.66 |
| +Dbest+w1wei | 98.16 | 85.75 | 96.63 | 87.29 | 96.68 | 80.62 | 97.78 | 86.52 | 97.31 | 85.05 |
| +Dbest+w2wei | 98.17 | 86.07 | 96.53 | 88.03 | 96.71 | 80.57 | 97.6 | 85.4 | 97.25 | 85.02 |
| +Dbest+w3wei | 98.11 | 85.61 | 96.5 | 86.33 | 96.67 | 80.57 | 97.68 | 86.59 | 97.24 | 84.78 |
| +Dbest+w4wei | 98.28 | 86.39 | 96.59 | 87.21 | 96.76 | 80.23 | 97.79 | 87.58 | 97.36 | 85.35 |
| +Dlast | 98.16 | 86.43 | 96.64 | 86.93 | 96.51 | 78.22 | 97.63 | 86.04 | 97.24 | 84.41 |
| +Dlast+w2wei | 97.82 | 86.07 | 96.53 | 87.08 | 96.67 | 80.51 | 97.77 | 87.3 | 97.2 | 85.24 |
| +Dlast+w4wei | 98.16 | 86.21 | 96.58 | 87.81 | 96.68 | 80.11 | 97.68 | 86.76 | 97.28 | 85.22 |
| +Dbest+w2wei+CRF | 98.17 | 85.37 | 96.37 | 85.26 | 96.75 | 80.96 | 97.79 | 86.86 | 97.27 | 84.61 |
| +Dbest+w4wei+CRF | 98.16 | 85.61 | 96.48 | 86.59 | 96.77 | 81.63 | 97.63 | 85.81 | 97.26 | 84.91 |
+
+Table 10: Take BERT as the base model.
+
+| Model | MSR | PKU | AS | CITYU | AVG |
| F | ROOV | F | ROOV | F | ROOV | F | ROOV | F | ROOV |
| RoBERTa(base) | 98.33 | 86.74 | 96.58 | 87.04 | 96.34 | 76.14 | 97.8 | 88.8 | 97.26 | 84.68 |
| +Dbest | 98.43 | 86.67 | 96.56 | 86.34 | 96.52 | 78.47 | 97.84 | 89.38 | 97.34 | 85.22 |
| +Dbest + w1wei | 98.35 | 88.55 | 96.64 | 87.39 | 96.53 | 78.58 | 97.95 | 90.03 | 97.37 | 86.14 |
| +Dbest + w2wei | 98.33 | 86.21 | 96.79 | 88.34 | 96.6 | 79.26 | 97.96 | 90.33 | 97.42 | 86.04 |
| +Dbest + w3wei | 98.25 | 87.88 | 96.57 | 87.23 | 96.6 | 79.41 | 97.9 | 89.58 | 97.33 | 86.03 |
| +Dbest + w4wei | 98.43 | 87.17 | 96.74 | 87.48 | 96.59 | 79.26 | 97.95 | 89.93 | 97.43 | 85.96 |
| +Dlast | 98.4 | 87.45 | 96.53 | 87.19 | 96.48 | 78.36 | 97.89 | 89.93 | 97.33 | 85.73 |
| +Dlast + w2wei | 98.15 | 86.89 | 96.7 | 88.39 | 96.54 | 79.21 | 97.94 | 90.2 | 97.33 | 86.17 |
| +Dlast + w4wei | 98.23 | 87.88 | 96.67 | 88.09 | 96.67 | 79.81 | 97.98 | 89.82 | 97.39 | 86.4 |
| +Dbest + w2wei + CRF | 98.41 | 87.0 | 96.63 | 86.86 | 96.55 | 79.09 | 97.9 | 89.28 | 97.37 | 85.56 |
+
+Table 11: Take RoBERTa as the base model.
+
+# A CWS Appendix
+
+Combining two encoders and two decoders, the final results on the four datasets are included in Tables 10 and 11. All experiments adopted the same hyperparameters, as shown in Table 4.
+
+We speculate that RoBERTa benefits from longer training time and larger batches of training data than BERT. In addition, some training tricks used in RoBERTa may also improve the performance of the pre-trained model, such as removing the next sentence prediction target, training longer sequences, and dynamically changing the mask pattern to be applied to the training data.
+
+To our surprise, if CRF is used as the decoder, the CWS model seems to be more prone to overfitting, resulting in worse word segmentation. However, we also notice that CRF performs well on the AS dataset when using BERT as the encoder, suggesting that softmax may not truly outperform CRF; we consider that the current hyperparameters simply suit softmax better. A more detailed analysis is provided in Section 5.
+
+| Dataset | WEIBO | RESUME | MSRA |
| train | test | dev | train | test | dev | train | test | dev |
| Sentences | 1.4k | 0.27k | 0.27k | 3.8k | 0.48k | 0.46k | 46.4k | 4.4k | - |
| Chars | 73.8k | 14.8k | 14.5k | 124.1k | 15.1k | 13.9k | 2169.9k | 172.6k | - |
| Entities | 1.89k | 0.42k | 0.39k | 1.34k | 0.15k | 0.16k | 74.8k | 6.2k | - |
+
+Table 12: Corpus details of three NER datasets
+
+| Model | WEIBO | RESUME | MSRA | AVG |
| P | R | F | P | R | F | P | R | F | P | R | F |
| BERT(base) | 68.01 | 66.27 | 67.15 | 94.58 | 95.34 | 94.96 | 95.66 | 94.03 | 94.84 | 86.08 | 85.21 | 85.65 |
| +Dbest | 68.83 | 66.03 | 67.4 | 94.34 | 96.07 | 95.2 | 94.84 | 94.87 | 94.86 | 86.0 | 85.66 | 85.82 |
| +Dbest+w1wei | 70.12 | 69.62 | 69.87 | 95.21 | 96.32 | 95.76 | 95.09 | 94.27 | 94.68 | 86.81 | 86.74 | 86.77 |
| +Dbest+w2wei | 70.1 | 66.75 | 68.38 | 95.52 | 95.46 | 95.49 | 95.39 | 94.74 | 95.06 | 87.0 | 85.65 | 86.31 |
| +Dbest+w3wei | 69.93 | 70.1 | 70 | 95.32 | 96.2 | 95.76 | 95.48 | 94.73 | 95.1 | 86.91 | 87.01 | 86.95 |
| +Dbest+w4wei | 71.08 | 70.57 | 70.83 | 94.8 | 95.15 | 94.98 | 95.84 | 94.64 | 95.24 | 87.24 | 86.79 | 87.02 |
+
+Table 13: NER tasks. Take BERT as the base model.
+
+# B NER Appendix
+
+Corpus details of MSRA (Levow, 2006), WEIBO (Peng and Dredze, 2015), and RESUME (Zhang and Yang, 2018) are summarized in Table 12. We do not have access to OntoNotes 4, so we did not test on it. All experiments adopt the same hyperparameters, as shown in Table 4. We do not list the latest state-of-the-art NER results, as we only explore whether WeiDC works for NER tasks and which weight mechanism is more robust.
+
+As shown in Table 13, $w_{wei}^{4}$ performs the best on the WEIBO and MSRA datasets but the worst on the RESUME dataset, indicating that it is difficult, if not impossible, to design a universally optimal weight mechanism. Nevertheless, the overall performance of $w_{wei}^{4}$ is still higher than that of the other weight mechanisms. How to integrate weight mechanisms and knowledge distillation into NER tasks more naturally requires further exploration and research.
+
+In addition to such NER tasks, non-sequence annotation tasks, such as text classification, usually have only one label per sentence, which may limit the application of WeiDC.
\ No newline at end of file
diff --git a/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/full.md b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c3fb1946416b43b143efc47d5e05d274d9b78586
--- /dev/null
+++ b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/full.md
@@ -0,0 +1,393 @@
+# What does it take to bake a cake? The RecipeRef corpus and anaphora resolution in procedural text
+
+Biaoyan Fang $^{1}$ , Timothy Baldwin $^{3,1}$ and Karin Verspoor $^{2,1}$
+
+1The University of Melbourne, Australia
+
+$^{2}$ RMIT University, Australia
+
+$^{3}$ MBZUAI, Abu Dhabi
+
+biaoyanf@student.unimelb.edu.au
+
+{tbaldwin, karin.verspoor}@unimelb.edu.au
+
+# Abstract
+
+Procedural text contains rich anaphoric phenomena, yet has not received much attention in NLP. To fill this gap, we investigate the textual properties of two types of procedural text, recipes and chemical patents, and generalize an anaphora annotation framework developed for the chemical domain for modeling anaphoric phenomena in recipes. We apply this framework to annotate the RecipeRef corpus with both bridging and coreference relations. Through comparison to chemical patents, we show the complexity of anaphora resolution in recipes. We demonstrate empirically that transfer learning from the chemical domain improves resolution of anaphora in recipes, suggesting transferability of general procedural knowledge.
+
+# 1 Introduction
+
+Anaphora resolution is a core component in information extraction tasks (Poesio et al., 2016; Rösiger, 2019) and critical for various downstream natural language processing tasks, such as named entity recognition (Dai et al., 2019) and machine translation (Stanovsky et al., 2019). It consists of two primary anaphoric types, coreference (Ng, 2017; Clark and Manning, 2015) and bridging (Asher and Lascarides, 1998; Rösiger et al., 2018). Most anaphora corpora (Pradhan et al., 2012; Ghaddar and Langlais, 2016; Poesio et al., 2008), however, only focus on either coreference or bridging. To fill the gap in anaphora resolution, it is becoming increasingly important to have both types annotated.
+
+Current research on anaphora resolution is mostly based on declarative text (Pradhan et al., 2012; Ghaddar and Langlais, 2016; Rösiger, 2018a; Hou et al., 2018), such as news or dialogue. Procedural text, such as chemical patents or instruction manuals, has received limited attention despite being critical for human knowledge (Yamakata et al., 2020). In turn, correct resolution of entities is the cornerstone of procedural text comprehension—resolution of anaphora in these texts is required to determine what action applies to which entity.
+
+We focus in this work on the procedural text type of recipes. As shown in Fig. 1, recipes have rich and complex anaphora phenomena. Here, the expression the biscuits appears several times in text; while each occurrence relates to the same biscuits concept, their state and semantic meaning vary.
+
+Our aim in this paper is to address anaphora resolution in procedural text, especially for recipes, identifying anaphoric references and determining the relationships among the entities. We first investigate the textual properties of procedural texts, i.e. chemical patents and recipes. We then adapt an existing anaphora annotation schema developed for chemical patents (Fang et al., 2021a,b) to recipes, and define four types of anaphora relationships, encompassing coreference and bridging. We further create a dataset based on this schema and achieve high inter-annotator agreement with two annotators experienced with the domain. We additionally explore the feasibility of applying transfer learning from the chemical domain to model recipe anaphora resolution. The dataset and related code are publicly available. $^{1}$
+
+Our contributions in this paper include: (1) adaptation of the anaphora annotation framework from chemical patents for modeling anaphoric phenomena in recipes; (2) creation of a publicly accessible recipe anaphora resolution dataset based on the annotation framework (Fang et al., 2022); (3) investigation of the textual properties of chemical patents and recipes; and (4) demonstration of the benefit of utilizing procedural knowledge from the chemical domain to enhance recipe anaphora resolution via transfer learning.
+
+
+Figure 1: Excerpt of a recipe annotated for anaphora. Different color links represent different anaphora relation types. Detailed anaphora relation definitions are provided in Section 3.3.
+
+# 2 Related Work
+
+Anaphora relation subsumes two referring types: (1) coreference — expressions in the text that refer to the same entity (Clark and Manning, 2015; Ng, 2017); and (2) bridging — expressions that do not refer to the same entity, but are linked via semantic, lexical, or encyclopedic relations (Asher and Lascarides, 1998; Hou et al., 2018).
+
+Existing anaphora corpora mostly focus on declarative text, across a range of domains (Poesio et al., 2008; Pradhan et al., 2012; Ghaddar and Langlais, 2016; Cohen et al., 2017). There have been attempts to annotate procedural text corpora for anaphora, but most focus exclusively on coreference (Mysore et al., 2019; Friedrich et al., 2020).
+
+Pradhan et al. (2012) developed the CoNLL 2012 corpus for generic coreference resolution. It consists of declarative texts including news and magazine articles, across three languages — English, Chinese, and Arabic. This corpus adopted the OntoNotes 5.0 (Weischedel et al., 2013) annotation scheme, modeling coreference in terms of two subtypes: (1) identity, where the anaphoric references and referents are identical; and (2) appositive, where a noun phrase is modified by an immediately-adjacent noun phrase. It models coreference as a clustering task, ignoring the direction of relations. Following largely the same annotation framework, the WikiCoref corpus (Ghaddar and Langlais, 2016) targeted Wikipedia texts. The InScript corpus (Modi et al., 2016) consists of 1,000 stories from 10 different scenarios corresponding to a "script", i.e. a standardised sequence of events. The corpus includes coreference annotations for noun phrases.
+
+BioNLP-ST 2011 (Nguyen et al., 2011) is a gene-related coreference corpus based on abstracts from biomedical publications. It consists of four types of coreference: RELAT (relative pronouns or relative adjectives, e.g. that), PRON (pronouns, e.g. it), DNP (definite NPs or demonstrative NPs, e.g. NPs that begin with the) and APPOS (coreferences in apposition). As it only focuses on gene-related annotation, coreference is limited. CRAFT-ST 2019 (Cohen et al., 2017) annotates 97 full biomedical articles for coreference resolution, based on a slightly-modified version of the OntoNotes 5.0 annotation scheme. Compared to the BioNLP 2011 corpus, it contains a wider range of relation types, and is not limited to only abstracts. SCIERC (Luan et al., 2018) contains 500 abstracts from scientific articles, and coreference annotation.
+
+Due to the complexities of defining bridging (Zeldes, 2017; Hou et al., 2018), different corpora have adopted different definitions of bridging. According to Rösiger et al. (2018), bridging can be divided into: (1) referential, where the anaphoric references rely on the referent to be interpretable (e.g. a new town hall – the door, the old oak tree – leaves, etc.); and (2) lexical, encompassing lexical-semantic relations, such as meronymy or hyponymy (e.g. Europe and Spain are in a whole-part relation). The ARRAU corpus (Poesio et al., 2008) consists of three types of declarative text: news, dialogue and narrative text. The bridging annotations are mostly lexical, with a much smaller number of referential references. The ISNotes corpus (Hou et al., 2018) is based on 50 Wall Street Journal (WSJ) texts from the OntoNotes corpus, and contains both coreference and referential bridging. Similar to ISNotes, BASHI (Rösiger, 2018a) is based on another 50 WSJ texts from OntoNotes with referential bridging. With the same annotation scheme as BASHI, SciCorp (Rösiger, 2016) focuses on scientific text and referential bridging.
+
+A small number of domain-specific anaphora corpora have been developed for procedural text. The ChEMU-ref corpus (Fang et al., 2021a) contains 1,500 chemical patent excerpts describing chemical reactions. Based on generic and chemical knowledge, the corpus contains five types of anaphora relationships, i.e. Coreference, Transfers, Reaction-associated, Work-up, and Contained. Friedrich et al. (2020) developed the SOFC-Exp corpus based on 45 material sciences articles, for the purposes of information extraction. The corpus is primarily targeted at named entity recognition and relation extraction, with coreference as a secondary annotation task, based on coindexation between a common noun or pronoun and a more specific mention earlier in the text. Also in the context of material sciences, Mysore et al. (2019) annotated 230 synthesis procedures for coreference, largely based on text in parentheses and coreferent abbreviations.
+
+Recent work in recipe comprehension includes visual instructions (Huang et al., 2017; Nishimura et al., 2020) and linguistic texts (Agarwal and Miller, 2011; Kiddon et al., 2015; Jiang et al., 2020) across Japanese (Harashima and Hiramatsu, 2020; Harashima et al., 2016) and English (Batra et al., 2020; Marin et al., 2019). Most research analyzes the text of recipes as a workflow graph based on actions (Kiddon et al., 2015; Mori et al., 2014; Yamakata et al., 2020), where the vertices represent named entities (e.g. action, food, etc.) and edges represent relational structure (e.g. action complement, food complement, etc.). Although interactions among ingredients can be derived via action nodes, this approach does not sufficiently capture anaphora phenomena, i.e. coreference and bridging. The RISEC corpus (Jiang et al., 2020) identifies candidate expressions for zero anaphora verbs in English recipes. However, it does not capture generic anaphoric phenomena.
+
+In terms of modeling, most research has handled coreference and bridging separately due to limited data availability (and a lack of annotated datasets containing both coreference and bridging).
+
+For coreference resolution, span ranking models (Lee et al., 2017, 2018) have become the benchmark method, supplanting mention ranking models (Clark and Manning, 2015, 2016a,b; Wiseman et al., 2015, 2016). Various span ranking variants have been proposed (Zhang et al., 2018; Grobol, 2019; Kantor and Globerson, 2019), and achieved strong results. With the increasing number of coreference corpora, transfer learning (Brack et al., 2021; Xia and Van Durme, 2021) involving pretraining on a source domain and fine-tuning on a target domain has shown great potential at improving coreference resolution. Bridging methods can be categorised into: (1) rule-based methods (Hou et al., 2014; Rösiger et al., 2018; Rösiger, 2018b); and (2) machine learning methods (Hou, 2018a,b, 2020; Yu and Poesio, 2020). Hou (2020) modeled bridging resolution as a question answering task, and fine-tuned the question answering model from generic question answering corpora. By utilizing transfer learning, they achieved a stronger performance on the bridging task. Yu and Poesio (2020) proposed a joint training framework for bridging and coreference resolution based on an end-to-end coreference model (Lee et al., 2017). Similar to coreference, they modeled bridging as a clustering task. Through joint training, they achieved substantial improvements for bridging, but the impact on coreference was less clear. Fang et al. (2021a) adopted the same end-to-end framework for joint training, modeling bridging as a mention pair classification task, and achieved improvements on both subtasks.
+
+# 3 Annotation Scheme
+
+In this section, we describe our adapted annotation scheme for anaphora in recipes. The complete annotation guidelines are available in Fang et al. (2022).
+
+# 3.1 Corpus Selection
+
+We create our RecipeRef dataset by randomly sampling texts from RecipeDB (Batra et al., 2020), a large, diverse recipe database containing 118,171 English recipes, 268 processes, and 20,262 ingredients. Each recipe consists of an ingredient list and an instruction section. We select the instruction section of each recipe, which details the steps for preparing the dish.
+
+# 3.2 Mention Types
+
+As our goal is to capture anaphora in recipes, we focus on ingredient-related expressions. In line with previous work (Pradhan et al., 2012; Cohen et al., 2017; Fang et al., 2021a; Ghaddar and Langlais, 2016), we leave out singleton mentions, i.e. we do not annotate mentions that are not involved in any anaphora relation (as defined in Section 3.3). Mention types that are considered for anaphora relations are listed below.
+
+Ingredient Terms: In recipes, ingredient terms are essential as they indicate what ingredients are used, in the form of individual words or phrases, such as butter, endive heads, red peppers, or garlic powder.
+
+Referring Expressions: We consider referring expressions to be pronouns (e.g. it or they) and generic phrases (e.g. soup, or the pastry mixture) used to represent ingredients that were previously introduced in the recipe text.
+
+We adopt several criteria in annotating mentions:
+
+- Premodifiers: One of the key challenges in procedural text is to track state changes in entities. It is critical to include premodifiers, as they play an important role in identifying an entity's state. We consider ingredients with premodifiers to be atomic mentions, e.g. chopped chicken, roasted red peppers, and four sandwiches.2
+- Numbers: In some cases, standalone numeric expressions can be used to refer to ingredients, and in such cases are considered to be mentions. Examples of this are 1 in Beat eggs, 1 at a time, and three in Combine together to make a sandwich. Repeat to make three.
+
+# 3.3 Relation Types
+
+A core challenge in procedural text comprehension is tracking the state of each entity (Dalvi et al., 2018; Tandon et al., 2018). Recipes contain rich information about changes in the state of ingredients. As shown in Fig. 1, to obtain the biscuits in line 6, the biscuits in line 1 have gone through several processes, involving physical (e.g. flattening) and chemical (e.g. baking) changes. Capturing labeled interactions between ingredients provides a richer understanding of ingredients and their interactions (i.e. where each ingredient comes from).
+
+Figure 2: Overall schema of anaphora relations for recipes.
+
+There are two basic types of anaphora: coreference and bridging. In recipes, we define three subtypes of bridging relation based on the state of the entities involved; together with coreference, this gives four relation subtypes in total. The overall schema of anaphora relations for recipes is shown in Fig. 2.
+
+In anaphora resolution, an antecedent is a linguistic expression that anchors the interpretation of a second expression, the anaphor, which cannot be interpreted in isolation or has little meaning on its own. Anaphors are linked to antecedents via anaphora relations. Consistent with previous work, we limit anaphors to link to antecedents appearing earlier in the text (i.e. we do not annotate instances of cataphora, which we found to occur very rarely in recipe texts), and the direction of links is preserved.
+
+# 3.3.1 Coreference
+
+In general applications, coreference focuses on expressions that refer to the same entity in the real world (Clark and Manning, 2015; Ng, 2017). In procedural text, the state of an entity can be changed by an action applied to that entity. To capture state changes, we add an extra constraint on coreference, requiring that the two mentions refer to the same entity in the same state.
+
+To eliminate ambiguity in linking coreferent mentions, the closest antecedent is linked for a given anaphor.
+
+# 3.3.2 Bridging
+
+As discussed in Section 3.3.1, we consider the state of entities to interface with anaphora in procedural text. As such, we define three subtypes of bridging relations, based on the state of the entities involved.
+
+TRANSFORMED A one-to-one anaphoric link for an ingredient that is meaning-wise the same but has undergone a physical/chemical change (e.g. peeling, baking, or boiling). For example, in Fig. 1, the biscuits in lines 4 and 5 are annotated as TRANSFORMED because the bake action changes the state of the biscuits in line 4.
+
+| Process | Domain | Example |
+| Combination | Chemical patents | ...5-Isopropylisoxazol-3-carboxylic acid (1.00 g, 6.45 mmol) was dissolved in methanol (20 mL), and thionyl chloride (1.51 g, 12.9 mmol) was slowly added at 0°C. The reaction solution was slowly warmed to 25°C and stirred for 12 hour... |
+| Combination | Recipes | ... mix 2 tablespoons of the olive oil, chili powder, allspice, salt, and pepper in a small bowl and brush the turkey all over with the spice mixture... |
+| Removal | Chemical patents | ...the mixture was extracted three times with ethyl acetate (50 mL). The combined ethyl acetate layer was washed with saturated brine (50 mL) and dried over anhydrous sodium sulfate... |
+| Removal | Recipes | ...add chicken thighs to the broth and simmer until cooked through, about 10 minutes. remove chicken with slotted spoon and set aside; when cool enough to handle, slice thinly. continue to simmer broth, return to pot... |
+
+Table 1: Examples of processes in chemical patents and recipes.
+
+INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED A one-to-many relationship between a processed food mention and its source ingredients, where the source ingredients have not undergone a state change (i.e. physical/chemical change). As shown in Fig. 1, the cheese in line 5 refers to its source ingredients the mozzarella and Parmesan cheese in line 4, and there is no state change. Thus, they are annotated as INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED.
+
+INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED A one-to-many relationship between a processed food mention and its source ingredients, involving a state change. As an example, the biscuits in Fig. 1 line 6 are a combination of previously-mentioned source ingredients (i.e. the sauce, a pinch of the oregano, pepperoni, the cheese, and the biscuits) involving a state change through baking. They are thus annotated as INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED.
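
To make the schema above concrete, the following sketch shows one possible in-memory representation of annotated mentions and relations; the class names, fields, and abbreviated labels are our own illustration and are not part of the released corpus format.

```python
from dataclasses import dataclass
from enum import Enum


class RelationType(Enum):
    # Coreference plus the three bridging subtypes from Fig. 2.
    COREFERENCE = "COREFERENCE"
    TRANSFORMED = "TRANSFORMED"
    INGREDIENT_WITHOUT_STATE_CHANGE_ASSOCIATED = "IWOA"
    INGREDIENT_WITH_STATE_CHANGE_ASSOCIATED = "IWA"


@dataclass(frozen=True)
class Mention:
    start: int  # index of the first token of the mention
    end: int    # index of the last token of the mention (inclusive)
    text: str


@dataclass(frozen=True)
class Relation:
    anaphor: Mention
    antecedent: Mention
    rel_type: RelationType

    def __post_init__(self):
        # Antecedents must appear earlier in the text than their anaphors;
        # cataphora is not annotated.
        assert self.antecedent.start < self.anaphor.start
```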
+
+# 3.4 Comparison with Chemical Patents
+
+As shown in Table 1, chemical patents and recipes have many commonalities. They use similar language to describe the application of processes (e.g. combination or removal) to source entities to obtain new entities, making it feasible to adapt the anaphora annotation scheme from chemical patents (Fang et al., 2021a,b) to recipes.
+
+However, there are some key differences in the annotation schemes.
+
+- Domain Differences: Some relation types defined for chemical patents are domain-specific, e.g. the WORK-UP relation is specific to chemistry and cannot be directly applied to recipes.
+
+- Determining State Change: In both chemical patents and recipes, anaphora resolution aims to capture anaphoric relations between mentions involving possible state changes. In the chemical domain, we are most concerned with chemical changes (e.g. oxidation or acidification). However, in the recipe domain, we are also interested in physical changes (e.g. chop or slice).
+- Rich Semantic Meaning in Recipes: Ingredient terms in recipes may represent a combination of ingredients. As shown in Fig. 1, the biscuits in line 6 represent a combination of previously-mentioned ingredients and not just the biscuit ingredient itself. However, in chemical patents, chemical names have specific meanings and cannot be semantically extended. This is a key challenge in resolving anaphora in recipes.
+- Variability in Instruction Descriptions: Although chemical patents and recipes have similar structure, instruction descriptions in recipes are structurally more variable. In chemical patents, processed entities are mostly used directly in the immediately following process. However, processed entities in recipes can be mentioned far later in the text (esp. in "modular" recipes, e.g. where a cake, cake filling, and cake icing are separately prepared, and only combined in a final step).
+- Hierarchical Structure in Recipe Relation Types: Anaphora relation types in recipes are defined hierarchically (as shown in Fig. 2), such that a simplified version of the recipe anaphora resolution task, without considering state change, can be easily derived. In chemical patents, there is no clear way of simplifying the scheme while preserving the anaphoric relations.
+
+| | RecipeRef | ChEMU-ref |
| Documents | 80 | 1,125 |
| Sentences | 999 | 5,768 |
| Tokens per sentence | 12.6 | 27.6 |
| Mentions | 1,408 | 17,023 |
| Mentions per doc | 17.6 | 15.1 |
| COREF | 229 / 415 | 3,243 |
| COREF per doc | 2.9 / 5.2 | 2.9 |
| Bridging* | 1,104 / 918 | 12,796 |
| Bridging* per doc | 13.8 / 11.5 | 11.4 |
| TR | 186 / — | — |
| IWOA | 91 / 918 | — |
| IWA | 827 / — | — |
+
+Table 2: Corpus statistics. For ChEMU-ref, we include the training and development set. "COREF", "TR", "IWOA" and "IWA" denote the COREFERENCE, TRANSFORMED, INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED and INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED relations, respectively. "/" shows the number of relations with and without consideration of state change. "Bridging*" is the total number of bridging relations across all subtypes.
+
+
+# 4 Task Definition
+
+Following the approach of Fang et al. (2021a), anaphora resolution is modeled as a two-step task: (1) mention detection; and (2) anaphora relation detection.
+
+As anaphora relation types in recipes are defined hierarchically, we can derive a simplified version of the recipe anaphora resolution task by removing state changes. That is, COREFERENCE and TRANSFORMED can be merged when we remove consideration of state changes, and INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED and INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED can similarly be merged. As such, we evaluate recipe anaphora resolution both with state change (4-way) and without state change (2-way).
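
As a small illustration of this reduction, the mapping below collapses the 4-way labels into the 2-way scheme; the merged label names ("COREFERENCE" and "BRIDGING") are our own shorthand, not an official corpus format.

```python
# Merging labels when state change is ignored: COREFERENCE and TRANSFORMED collapse
# into a single coreference label, and the two ASSOCIATED subtypes collapse into one
# bridging label.
FOUR_WAY_TO_TWO_WAY = {
    "COREFERENCE": "COREFERENCE",
    "TRANSFORMED": "COREFERENCE",
    "INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED": "BRIDGING",
    "INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED": "BRIDGING",
}


def to_two_way(label: str) -> str:
    """Map a 4-way relation label to its 2-way (state-change-agnostic) counterpart."""
    return FOUR_WAY_TO_TWO_WAY[label]
```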
+
+As our corpus includes one-to-many anaphoric relations for bridging, standard coreference evaluation metrics (Luo, 2005; Recasens and Hovy, 2011; Moosavi and Strube, 2016), which assume that a given mention only occurs in a unique cluster, are not suitable for this task. Although coreference, which involves one-to-one relations in this task, could be evaluated with these metrics, to maintain a unified evaluation for bridging and coreference we utilize precision, recall and F1 as our core metrics. Specifically, we follow the evaluation of the ChEMU-ref corpus, scoring coreference from two perspectives: (1) surface coreference, where a coreferent anaphor links to its closest antecedent; and (2) atom coreference, where a coreferent anaphor links to a correct antecedent (Kim et al., 2012).
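
The sketch below illustrates the generic set-based precision/recall/F1 scoring we assume over predicted and gold links; the exact matching criteria that distinguish surface from atom coreference follow Kim et al. (2012) and are not reproduced here, and the example links are hypothetical.

```python
def precision_recall_f1(predicted: set, gold: set):
    """Set-based P/R/F1 over (anaphor, antecedent, relation type) triples."""
    true_positives = len(predicted & gold)
    p = true_positives / len(predicted) if predicted else 0.0
    r = true_positives / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1


# Hypothetical example: one correct link and one spurious link.
gold_links = {("it@12", "the sauce@5", "COREFERENCE")}
pred_links = {("it@12", "the sauce@5", "COREFERENCE"), ("they@20", "eggs@3", "COREFERENCE")}
print(precision_recall_f1(pred_links, gold_links))  # (0.5, 1.0, 0.666...)
```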
+
+For manual annotation, we use the Brat rapid annotation tool. To achieve high annotation quality, we went through 8 rounds of annotator training and refinement of the anaphora annotation guidelines with two annotators experienced in the recipe domain. In each round of training, the annotators independently annotated 10 recipes (different for each round of annotation) and met afterwards to compare annotation results. Further refinements of the annotation guidelines were made based on the discussion.
+
+After training, we reached a high inter-annotator agreement (IAA) of Krippendorff's $\alpha = 0.85$, mention-level $F1 = 0.88$, and relation-level $F1 = 0.67$. As a point of comparison, the corresponding values after the first round of annotator training were 0.45, 0.51 and 0.29, respectively.
+
+We use 80 double-annotated recipes with harmonized annotations as our corpus. The statistics of this corpus in comparison with the ChEMU-ref corpus (Fang et al., 2021a) are shown in Table 2.
+
+# 5 Methodology
+
+To investigate the benefit of transfer learning from the chemical domain, we follow the configuration of Fang et al. (2021a), modeling bridging as a classification task and adopting the benchmark end-to-end neural coreference model of Lee et al. (2017, 2018) for joint training of the two anaphora resolution types.
+
+For each span $x_i$, the model learns: (1) a mention score $s_m(i)$ for mention detection:
+
+$$
+s_m(i) = w_s \cdot \mathrm{FFNN}_s(s_i)
+$$
+
+and (2) a distribution $P(\cdot)$ over possible antecedent spans $Y(i)$ for coreference resolution:
+
+$$
+P(y) = \frac{\exp\big(s_c(i, y)\big)}{\sum_{y' \in Y(i)} \exp\big(s_c(i, y')\big)}
+$$
+
+where $s_c(i, y)$ is the output of a feed-forward neural network with span pair embedding $s_{i,y}$ , and (3) a pair-wise score $s_b(i, y)$ of each possible antecedent span $y$ for bridging resolution:
+
+$$
+s_b(i, y) = \mathrm{softmax}\big(w_b \cdot \mathrm{FFNN}_b(s_{i, y})\big)
+$$
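
Read as code, the three scores are small feed-forward heads over span and span-pair representations. The PyTorch-style sketch below is our own minimal rendering; the module names, hidden sizes, and the dimension over which the bridging softmax is taken are assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn


class AnaphoraScorers(nn.Module):
    def __init__(self, span_dim: int, pair_dim: int, hidden: int = 150, n_bridging_types: int = 3):
        super().__init__()
        # s_m(i): mention score computed from the span representation s_i
        self.ffnn_s = nn.Sequential(nn.Linear(span_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # s_c(i, y): coreference score computed from the span-pair representation s_{i,y}
        self.ffnn_c = nn.Sequential(nn.Linear(pair_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # s_b(i, y): per-category bridging scores from the span-pair representation
        self.ffnn_b = nn.Sequential(nn.Linear(pair_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_bridging_types))

    def forward(self, span_repr, pair_repr):
        s_m = self.ffnn_s(span_repr).squeeze(-1)              # [num_spans]
        s_c = self.ffnn_c(pair_repr).squeeze(-1)               # [num_spans, num_candidates]
        p_coref = torch.softmax(s_c, dim=-1)                   # P(y): distribution over candidate antecedents
        s_b = torch.softmax(self.ffnn_b(pair_repr), dim=-1)    # bridging category scores per span pair
        return s_m, p_coref, s_b
```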
+
+A span representation $s_i$ is the concatenation of output token representations $(x_i^*)$ from a bidirectional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997), the syntactic head representation $(h_i)$ obtained from an attention mechanism (Bahdanau et al., 2015), and a feature vector of the mention $(\phi(i))$ :
+
+$$
+s_i = [x^{*}_{\mathrm{START}(i)}, x^{*}_{\mathrm{END}(i)}, h_i, \phi(i)]
+$$
+
+where START(i) and END(i) represent the starting and ending token index for span $i$ , respectively.
+
+A span pair embedding $s_{i, y}$ is obtained by concatenating the two span embeddings $(s(i), s(y))$, their element-wise product $(s(i) \circ s(y))$, and a feature vector $(\phi(i, y))$ for the span pair $i$ and $y$:
+
+$$
+s_{i, y} = [s(i), s(y), s(i) \circ s(y), \phi(i, y)]
+$$
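
The two representations can be assembled as in the simplified sketch below (function names and tensor shapes are ours), with the head representation $h_i$ computed as an attention-weighted sum over the span's BiLSTM states.

```python
import torch


def span_representation(x_star, att_scores, start, end, phi_i):
    """s_i = [x*_START(i), x*_END(i), h_i, phi(i)] for a single span.

    x_star:     [seq_len, dim] BiLSTM output states
    att_scores: [seq_len]      unnormalised attention scores over tokens
    phi_i:      [feat_dim]     span feature vector
    """
    alpha = torch.softmax(att_scores[start:end + 1], dim=0)
    h_i = (alpha.unsqueeze(-1) * x_star[start:end + 1]).sum(dim=0)  # attention-weighted head
    return torch.cat([x_star[start], x_star[end], h_i, phi_i], dim=-1)


def span_pair_embedding(s_i, s_y, phi_iy):
    """s_{i,y} = [s(i), s(y), s(i) * s(y), phi(i, y)]."""
    return torch.cat([s_i, s_y, s_i * s_y, phi_iy], dim=-1)
```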
+
+For mention loss, we use cross-entropy loss:
+
+$$
+L_m = -\sum_{i = 1}^{\lambda T} \Big[ m_i \log\big(\mathrm{sigmoid}(s_m(i))\big) + (1 - m_i) \log\big(1 - \mathrm{sigmoid}(s_m(i))\big) \Big]
+$$
+
+where:
+
+$$
+m_i = \begin{cases} 0 & \text{span } i \notin \mathrm{GOLD}_m \\ 1 & \text{span } i \in \mathrm{GOLD}_m \end{cases}
+$$
+
+and $\mathrm{GOLD}_m$ is the set of gold mentions that are involved in anaphora relations.
+
+For coreference resolution, we compute the loss as follows, where $\mathrm{GOLD}_c(i)$ is the set of gold coreferent antecedents that span $i$ refers to:
+
+$$
+L_c = \log \prod_{i = 1}^{\lambda T} \sum_{\hat{y} \in Y(i) \cap \mathrm{GOLD}_c(i)} P(\hat{y})
+$$
+
+For bridging resolution, the loss is obtained by multiclass cross-entropy:
+
+$$
+L_b = -\sum_{c = 1}^{K_c} \sum_{i = 1}^{\lambda T} \sum_{j} b_{i, j, c} \log\big(s_b(i, j, c)\big)
+$$
+
+where $K_{c}$ represents the number of bridging categories, $s_b(i,j,c)$ denotes the prediction of $s_b(i,j)$ under category $c$ , and:
+
+$$
+b_{i, j, c} = \begin{cases} 0 & \text{span pair } (i, j) \notin \mathrm{GOLD}_b(c) \\ 1 & \text{span pair } (i, j) \in \mathrm{GOLD}_b(c) \end{cases}
+$$
+
+where $\mathrm{GOLD}_b(c)$ is the gold bridging relation under category $c$ .
+
+We compute the total loss as $L = L_{m} + L_{ref}$ where:
+
+$$
+L_{ref} = \begin{cases} L_c & \text{for coreference} \\ L_b & \text{for bridging} \\ L_c + L_b & \text{for joint training} \end{cases}
+$$
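
Putting the losses together, a compact sketch follows; the variable names are ours, and the sign conventions simply treat each term as a quantity to minimise.

```python
import torch
import torch.nn.functional as F


def mention_loss(s_m, m_gold):
    # Binary cross-entropy over candidate spans; m_gold is 1 for gold mentions, 0 otherwise.
    return F.binary_cross_entropy_with_logits(s_m, m_gold.float())


def coreference_loss(p_coref, gold_antecedent_mask):
    # Marginal likelihood of the gold antecedents of each anaphor, accumulated in log space.
    marginal = (p_coref * gold_antecedent_mask.float()).sum(dim=-1).clamp_min(1e-12)
    return -marginal.log().sum()


def bridging_loss(s_b, b_gold):
    # Multiclass cross-entropy: s_b is [num_pairs, K_c] category scores (already softmaxed),
    # b_gold is the matching one-hot gold indicator.
    return -(b_gold.float() * s_b.clamp_min(1e-12).log()).sum()


def total_loss(l_m, l_c=None, l_b=None):
    # L = L_m + L_ref, where L_ref is L_c, L_b, or L_c + L_b depending on the training setup.
    return l_m + sum(l for l in (l_c, l_b) if l is not None)
```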
+
+# 6 Experiments
+
+In this section, we present experimental results both with and without state change for recipe anaphora resolution. We use a similar configuration to Lee et al. (2018). Specifically, as pretrained token embeddings we use the concatenation of 300-dimensional GloVe embeddings (Pennington et al., 2014), 1024-dimensional ELMo word representations (Peters et al., 2018), and 8-dimensional character embeddings that are learned from a character CNN with windows of 3, 4, and 5 characters. Each feed-forward neural network consists of two hidden layers with 150 dimensions and rectified linear units (Nair and Hinton, 2010). Gold mentions are kept separate for the coreference and bridging tasks; for joint training, the gold mentions are combined.
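
As an illustration of the token representation just described (GloVe, ELMo, and a character CNN concatenated), here is a hedged sketch; the number of filters and the module interface are our own placeholder choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class CharCNN(nn.Module):
    """Character CNN over 8-dimensional character embeddings with windows of 3, 4 and 5."""

    def __init__(self, n_chars: int, char_dim: int = 8, n_filters: int = 50):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(char_dim, n_filters, kernel_size=k) for k in (3, 4, 5)]
        )

    def forward(self, char_ids):  # char_ids: [num_tokens, max_chars]
        x = self.char_emb(char_ids).transpose(1, 2)              # [num_tokens, char_dim, max_chars]
        pooled = [conv(x).max(dim=-1).values for conv in self.convs]
        return torch.cat(pooled, dim=-1)                          # [num_tokens, 3 * n_filters]


# Conceptually, the final pretrained token embedding is the concatenation:
# token_repr = torch.cat([glove_300d, elmo_1024d, char_cnn(char_ids)], dim=-1)
```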
+
+We use 10-fold cross-validation to evaluate our model on recipe anaphora resolution. Since end-to-end model performance varies due to random initialization (Lee et al., 2017), we randomly shuffle the dataset 5 times and run cross-validation 3 times for each shuffle. Averaged results are reported.
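
The evaluation protocol can be summarised by the driver below; `train_and_evaluate` is deliberately left as a caller-supplied placeholder, since the model and scoring details are described elsewhere in this section.

```python
import random
import statistics


def cross_validate(recipes, train_and_evaluate, n_shuffles=5, n_runs=3, n_folds=10):
    """5 dataset shuffles x 3 runs per shuffle x 10 folds, averaging the returned scores."""
    scores = []
    for shuffle_id in range(n_shuffles):
        data = list(recipes)
        random.Random(shuffle_id).shuffle(data)
        folds = [data[k::n_folds] for k in range(n_folds)]
        for run in range(n_runs):
            for k in range(n_folds):
                test_fold = folds[k]
                train_folds = [r for j, fold in enumerate(folds) if j != k for r in fold]
                scores.append(train_and_evaluate(train_folds, test_fold, seed=run))
    return statistics.mean(scores), statistics.stdev(scores)
```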
+
+Table 3 shows our primary results, without state change. For coreference resolution, we provide experimental results on both surface and atom coreference metrics. For bridging resolution, we focus on overall bridging results. Since surface and atom coreference metrics show the same trends in performance, we use surface coreference and overall bridging to compute overall results.
+
+Overall, joint training achieves $26.2\%$ $F_{1}$ score for surface coreference and $26.9\%$ $F_{1}$ score for bridging, with $+1.4\%$ and $+0.9\%$ $F_{1}$ score absolute improvement over the component-wise models. As such, joint training improves the performance of both tasks. Compared to precision, recall in anaphor and relation detection is lower, indicating the complexity of anaphoric forms in recipes.
+
+| Relation | Method | PA | RA | FA | PR | RR | FR |
| COREF (Surface) | coreference | 62.0 ± 1.0 | 37.8 ± 0.8 | 46.1 ± 0.8 | 33.6 ± 0.9 | 20.4 ± 0.6 | 24.8 ± 0.7 |
| joint_train | 65.2 ± 0.9 | 37.5 ± 0.9 | 46.7 ± 0.8 | 36.8 ± 0.9 | 21.0 ± 0.6 | 26.2 ± 0.7 |
| COREF (Atom) | coreference | 62.0 ± 1.0 | 37.8 ± 0.8 | 46.1 ± 0.8 | 46.8 ± 1.1 | 26.1 ± 0.7 | 32.9 ± 0.7 |
| joint_train | 65.2 ± 0.9 | 37.5 ± 0.9 | 46.7 ± 0.8 | 50.4 ± 1.1 | 26.7 ± 0.7 | 34.4 ± 0.8 |
| Bridging | bridging | 56.1 ± 1.2 | 35.1 ± 0.9 | 41.7 ± 0.8 | 36.3 ± 0.9 | 21.5 ± 0.8 | 26.0 ± 0.7 |
| joint_train | 57.7 ± 1.3 | 35.5 ± 0.9 | 42.7 ± 0.8 | 38.0 ± 0.8 | 21.9 ± 0.7 | 26.9 ± 0.7 |
| Overall | joint_train | 62.1 ± 0.7 | 37.0 ± 0.5 | 46.0 ± 0.5 | 37.4 ± 0.7 | 21.8 ± 0.5 | 27.1 ± 0.5 |
+
+
+We also experimented with joint coreference resolution and change-of-state classification, and observed similar trends in the results, at reduced performance levels due to the difficulty in additionally predicting state changes (as shown in Appendix A).
+
+Table 3: Anaphora resolution results based on 10-fold cross validation without considering state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (a total of $5 \times 3 \times 10$ runs). Models are trained for "coreference", "bridging" or "joint_train" (both tasks jointly). "F_A" denotes the F1 score for anaphor prediction, and "F_R" for relation prediction.
+
+| Relation | Method | FA | FR |
| COREF (Surface) | coreference | 46.1 ± 0.8 | 24.8 ± 0.7 |
| - w/ transfer | 46.7 ± 0.8 | 25.3 ± 0.7 |
| joint_train | 46.7 ± 0.8 | 26.2 ± 0.7 |
| - w/ transfer | 45.3 ± 0.9 | 26.9 ± 0.7 |
| COREF (Atom) | coreference | 46.1 ± 0.8 | 32.9 ± 0.7 |
| - w/ transfer | 46.7 ± 0.8 | 33.5 ± 0.8 |
| joint_train | 46.7 ± 0.8 | 34.4 ± 0.8 |
| - w/ transfer | 45.3 ± 0.9 | 33.9 ± 0.8 |
| Bridging | bridging | 41.7 ± 0.8 | 26.0 ± 0.7 |
| - w/ transfer | 40.6 ± 0.9 | 26.7 ± 0.7 |
| joint_train | 42.7 ± 0.8 | 26.9 ± 0.7 |
| - w/ transfer | 43.4 ± 0.8 | 27.9 ± 0.7 |
| Overall | joint_train | 46.0 ± 0.5 | 27.1 ± 0.5 |
| - w/ transfer | 45.2 ± 0.6 | 27.9 ± 0.5 |
+
+Table 4: Experiments with transfer learning, without considering state change. “ $F_A$ ” denotes the F1 score for anaphor prediction, and “ $F_R$ ” for relation prediction.
+
+As discussed in Section 3.4, chemical patents and recipes have similar text structure. Based on the hypothesis that this structural similarity can lead to successful domain transfer, we experiment with transfer learning from the chemical domain to recipes. Specifically, we pretrain the anaphora resolution model on the ChEMU-ref corpus (Fang et al., 2021a,b) for 10,000 epochs, and fine-tune it on the recipe corpus.
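
Schematically, the transfer setup is a pretrain-then-fine-tune loop; the trainer interface below is hypothetical and is only meant to make the two stages explicit.

```python
def pretrain_then_finetune(model, trainer, chemuref_data, reciperef_data, pretrain_epochs=10_000):
    """Pretrain the anaphora resolution model on chemical patents, then fine-tune on recipes."""
    trainer.fit(model, chemuref_data, epochs=pretrain_epochs)  # source domain: ChEMU-ref
    trainer.fit(model, reciperef_data)                         # target domain: RecipeRef
    return model
```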
+
+Table 4 shows the results with transfer learning, demonstrating consistent improvements over coreference and bridging resolution. Overall, we achieve $27.9\%$ $F_{1}$ score for relation prediction under joint training and transfer learning, obtaining $+0.8\%$ $F_{1}$ score absolute improvement. Incorporating procedural knowledge also improves component-wise models by $+0.5\%$ and $+0.7\%$ $F_{1}$ score (absolute) for surface coreference and bridging, respectively.
+
+We performed error analysis on 5 randomly-selected batches from 10-fold cross-validation based on joint training. There are two primary causes of error. First, the model struggles to capture the semantics of ingredient terms as they are combined with other ingredients. As discussed in Section 3.4, ingredient terms can semantically represent a mixture. For example, the biscuits in Fig. 1 line 6 and the yellowtail in Table 5 Ex 1 both represent a mixture of previous ingredients which includes the key ingredient of biscuits and yellowtail, respectively. The model fails to capture the fact that these mentions incorporate multiple antecedents, and incorrectly analyzes them as COREFERENCE. The second cause of error is in failing to detect state change, mostly in falsely analyzing TRANSFORMED as COREFERENCE, and INGREDIENT(WITHOUT-STATE-CHANGE)-ASSOCIATED as INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED.
+
+Errors in coreference resolution occur due to two primary factors: (1) imbalance of coreference and bridging; and (2) entities with different surface expressions. As shown in Table 2, coreference relations are not common in recipes, making it hard for models to capture coreference links. Models also fail to capture the coreference relationship of entities in the face of lexical variation.
+
+In bridging resolution, models also tend to predict anaphoric links as INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED due to its predominance in the annotated data. Furthermore, given that it is a many-to-one relation, models tend to over-predict INGREDIENT(WITH-STATE-CHANGE)-ASSOCIATED relations to mentions which are not associated with the given anaphor. A natural explanation for this is that span-pair predictions are made independently of one another, and there is no way for the model to capture interactions between anaphors. Simultaneously evaluating candidate antecedents might address this issue.
+
+| 1 | Season the yellowtail fillets with salt and pepper, then dust 1 side only with flour, shaking off any excess. in a medium sized saute pan, heat the olive oil until just nearly smoking and add the yellowtail, flour side down... |
| 2 | In a bowl, mash the corned beef as much as you can. Add the tinned tomatoes, onions and curry powder. Mix well until the mixture becomes free of any lump of corned beef. Transfer to a frying pan on a medium heat, cook the mixture for about 10 – 15 minutes until the mixture is heated through... |
| 3 | In a ceramic or glass bowl, combine chiles, orange juice, lemon juice, and orange peel. Add the fish and refrigerate for 4 to 6 hours, stirring occasionally until the fish loses all translucency. You may leave in the refrigerator overnight to marinate, if desired. Remove the fish, reserving the juice. |
| 4 | ...Add the white wine and passion fruit. Over medium heat, reduce by 3/ the liquid in the pan will begin to look thick and bubbly. Remove the pan from the heat and slowly whisk in the butter a little bit at a time, making sure all butter is whisked in before adding more... |
+
+Table 5: Examples of anaphora phenomena from the RecipeRef dataset.
+
+
+By incorporating procedural knowledge via transfer learning, models achieve better performance. The improvement occurs in two main forms. First, mention detection improves. For example, in Table 5 Ex 3, the juice and its related anaphoric relations are predicted by models with transfer learning, yet not captured by standard joint training models. Second, detection of coreferent mentions with lexical variation improves. In Ex 4, standard joint training models fail to capture the COREFERENCE relation between the butter and all butter due to variation in expression, but this relation is correctly captured by models with transfer learning.
+
+Directions for future work include: (1) joint learning with COREFERENCE and TRANSFORMED relations, which differ only in whether there is a state change or not, such that considering them together may be effective; (2) incorporation of external knowledge, including knowledge about ingredient entities, which may further improve transfer learning; and (3) utilization of transformer-based models (Joshi et al., 2020; Xia and Van Durme, 2021), which have been shown to perform well in general-domain coreference settings.
+
+# 7 Conclusion
+
+In this paper, we have extended earlier work on anaphora resolution over chemical patents to the domain of recipes. We adapted the annotation schema and guidelines for chemical patents, and created a labeled anaphora resolution corpus for recipes. We further defined two tasks for modeling anaphora phenomena in recipes, with and without consideration of state change. Our experiments show the benefit of joint training, and also transfer learning from the chemical domain.
+
+# Acknowledgements
+
+This work was done in the framework of the ChEMU project, supported by Australian Research Council Linkage Project LP160101469 and Elsevier. A graduate research scholarship was provided by the University of Melbourne Faculty of Engineering and IT to Biaoyan Fang. We would also like to thank Dr. Christian Druckenbrodt, Dr. Saber A. Akhondi, and Dr. Camilo Thorne from Elsevier, as well as our two expert recipe annotators Kate Baldwin and Ayah Tayeh, for their contributions in refining the annotation guidelines.
+
+# References
+
+Rahul Agarwal and Kevin Miller. 2011. Information extraction from recipes. Department of Computer Science, Stanford University-2008.
+Nicholas Asher and Alex Lascarides. 1998. Bridging. Journal of Semantics, 15(1):83-113.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations (ICLR 2015), San Diego, USA.
+Devansh Batra, Nirav Diwan, Utkarsh Upadhyay, Jushaan Singh Kalra, Tript Sharma, Aman Kumar Sharma, Dheeraj Khanna, Jaspreet Singh Marwah, Srilakshmi Kalathil, Navjot Singh, Rudraksh Tuwani, and Ganesh Bagler. 2020. RecipeDB: A resource for exploring recipes. Database, 2020.
+Arthur Brack, Daniel Uwe Müller, Anett Hoppe, and Ralph Ewerth. 2021. Coreference resolution in research papers from multiple domains. In Proc. of the 43rd European Conference on Information Retrieval, online.
+Kevin Clark and Christopher D Manning. 2015. Entity-centric coreference resolution with model stacking.
+
+In Proc. of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1405-1415, Beijing, China.
+Kevin Clark and Christopher D. Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. In Proc. of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2256-2262, Austin, USA.
+Kevin Clark and Christopher D. Manning. 2016b. Improving coreference resolution by learning entity-level distributed representations. In Proc. of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 643-653, Berlin, Germany.
+K Bretonnel Cohen, Arrick Lanfranchi, Miji Joo-young Choi, Michael Bada, William A Baumgartner, Natalya Panteleyeva, Karin Verspoor, Martha Palmer, and Lawrence E Hunter. 2017. Coreference annotation and resolution in the Colorado Richly Annotated Full Text (CRAFT) corpus of biomedical journal articles. BMC Bioinformatics, 18(1):372.
+Zeyu Dai, Hongliang Fei, and Ping Li. 2019. Coreference aware representation learning for neural named entity recognition. In *IJCAI*, pages 4946-4953.
+Bhavana Dalvi, Lifu Huang, Niket Tandon, Wen tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: A challenge dataset and models for process paragraph comprehension. In *NAACL*.
+Biaoyan Fang, Christian Druckenbrodt, Saber A Akhondi, Jiayuan He, Timothy Baldwin, and Karin Verspoor. 2021a. ChEMU-ref: A corpus for modeling anaphora resolution in the chemical domain. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1362-1375, Online. Association for Computational Linguistics.
+Biaoyan Fang, Christian Druckenbrodt, Saber A. Akhondi, Camilo Thorne, Timothy Baldwin, and Karin Verspoor. 2022. RecipeRef corpus for modeling anaphora resolution from the procedural text of recipes. Mendeley Data.
+Biaoyan Fang, Christian Druckenbrodt, Colleen Yeow Hui Shiuan, Sacha Novakovic, Ralph Hössel, Saber A. Akhondi, Jiayuan He, Meladel Mistica, Timothy Baldwin, and Karin Verspoor. 2021b. ChEMU-Ref dataset for modeling anaphora resolution in the chemical domain. Mendeley Data.
+Annemarie Friedrich, Heike Adel, Federico Tomazic, Johannes Hingerl, Renou Benteau, Anika Marusczyk, and Lukas Lange. 2020. The SOFC-exp corpus and neural approaches to information extraction in the materials science domain. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1255-1268, Online. Association for Computational Linguistics.
+
+Abbas Ghaddar and Philippe Langlais. 2016. WikiCoref: An English coreference-annotated corpus of Wikipedia articles. In Proc. of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 136-142, Portorož, Slovenia.
+Loic Grobol. 2019. Neural coreference resolution with limited lexical context and explicit mention detection for oral French. In Proc. of the Second Workshop on Computational Models of Reference, Anaphora and Coreference, pages 8-14, Minneapolis, USA.
+Jun Harashima, Michiaki Ariga, Kenta Murata, and Masayuki Ioki. 2016. A large-scale recipe and meal data collection as infrastructure for food research. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2455-2459, Portorož, Slovenia. European Language Resources Association (ELRA).
+Jun Harashima and Makoto Hiramatsu. 2020. Cookpad parsed corpus: Linguistic annotations of Japanese recipes. In Proceedings of the 14th Linguistic Annotation Workshop, pages 87-92, Barcelona, Spain. Association for Computational Linguistics.
+Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. *Neural Computation*, 9(8):1735-1780.
+Yufang Hou. 2018a. A deterministic algorithm for bridging anaphora resolution. In Proc. of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1938-1948, Brussels, Belgium.
+Yufang Hou. 2018b. Enhanced word representations for bridging anaphora resolution. In Proc. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 1-7, New Orleans, USA.
+Yufang Hou. 2020. Bridging anaphora resolution as question answering. In Proc. of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1428-1438, Online.
+Yufang Hou, Katja Markert, and Michael Strube. 2014. A rule-based system for unrestricted bridging resolution: Recognizing bridging anaphora and finding links to antecedents. In Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 2082-2093, Doha, Qatar.
+Yufang Hou, Katja Markert, and Michael Strube. 2018. Unrestricted bridging resolution. Computational Linguistics, 44(2):237-284.
+De-An Huang, Joseph J Lim, Li Fei-Fei, and Juan Carlos Niebles. 2017. Unsupervised visual-linguistic reference resolution in instructional videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2183-2192.
+
+Yiwei Jiang, Klim Zaporojets, Johannes Deleu, Thomas Demeester, and Chris Develder. 2020. Recipe instruction semantics corpus (RISc): Resolving semantic structure and zero anaphora in recipes. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 821-826, Suzhou, China. Association for Computational Linguistics.
+Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.
+Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In Proc. of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), pages 673-677, Florence, Italy.
+Chloe Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi. 2015. Mise en place: Unsupervised interpretation of instructional recipes. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 982-992, Lisbon, Portugal. Association for Computational Linguistics.
+Jin-Dong Kim, Ngan Nguyen, Yue Wang, Jun'ichi Tsujii, Toshihisa Takagi, and Akinori Yonezawa. 2012. The Genia event and protein coreference tasks of the BioNLP shared task 2011. BMC Bioinformatics, 13(11):S1.
+Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proc. of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark.
+Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proc. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, USA.
+Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219-3232, Brussels, Belgium. Association for Computational Linguistics.
+Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proc. of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (EMNLP 2005), pages 25-32, Vancouver, Canada.
+
+Javier Marin, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf Aytar, Ingmar Weber, and Antonio Torralba. 2019. *Recipe1m+: A dataset for learning cross-modal embeddings for cooking recipes and food images*. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(1):187-203.
+Ashutosh Modi, Tatjana Anikina, Simon Ostermann, and Manfred Pinkal. 2016. InScript: Narrative texts annotated with script information. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3485-3493, Portorož, Slovenia. European Language Resources Association (ELRA).
+Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? a proposal for a link-based entity aware metric. In Proc. of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 632-642, Berlin, Germany.
+Shinsuke Mori, Hirokuni Maeta, Yoko Yamakata, and Tetsuro Sasada. 2014. Flow graph corpus from recipe texts. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2370-2377, Reykjavik, Iceland. European Language Resources Association (ELRA).
+Sheshera Mysore, Zachary Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey Flanigan, Andrew McCallum, and Elsa Olivetti. 2019. The materials science procedural text corpus: Annotating materials synthesis procedures with shallow semantic structures. In Proceedings of the 13th Linguistic Annotation Workshop, pages 56-64, Florence, Italy. Association for Computational Linguistics.
+Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proc. of the 27th International Conference on Machine Learning (ICML 2010), Haifa, Israel.
+Vincent Ng. 2017. Machine learning for entity coreference resolution: A retrospective look at two decades of research. In Proc. of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI'17), pages 4877-4884, San Francisco, USA.
+Ngan Nguyen, Jin-Dong Kim, and Jun'ichi Tsujii. 2011. Overview of BioNLP 2011 protein coreference shared task. In Proc. of BioNLP Shared Task 2011 Workshop, pages 74-82, Portland, USA.
+Taichi Nishimura, Suzuki Tomori, Hayato Hashimoto, Atsushi Hashimoto, Yoko Yamakata, Jun Harashima, Yoshitaka Ushiku, and Shinsuke Mori. 2020. Visual grounding annotation of recipe flow graph. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4275-4284, Marseille, France. European Language Resources Association.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proc. of the 2014 Conference on
+
+Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar.
+Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, USA.
+Massimo Poesio, Ron Artstein, et al. 2008. Anaphoric annotation in the ARRAU corpus. In Proc. of the Sixth International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco.
+Massimo Poesio, Roland Stuckardt, and Yannick Versley. 2016. Anaphora Resolution. Springer.
+Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Proc. of EMNLP-CoNLL 2012: Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1-40, Jeju, Korea.
+Marta Recasens and Eduard Hovy. 2011. BLANC: Implementing the Rand index for coreference evaluation. Natural Language Engineering, 17(4):485-510.
+Ina Rösiger. 2016. SciCorp: A corpus of English scientific articles annotated for information status analysis. In Proc. of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1743-1749, Portorož, Slovenia.
+Ina Rösiger. 2018a. BASHI: A corpus of Wall Street Journal articles annotated with bridging links. In Proc. of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018).
+Ina Rösiger. 2018b. Rule- and learning-based methods for bridging resolution in the ARRAU corpus. In Proc. of the First Workshop on Computational Models of Reference, Anaphora and Coreference, pages 23-33, New Orleans, USA.
+Ina Rösiger. 2019. Computational modelling of coreference and bridging resolution. Ph.D. thesis, Stuttgart University.
+Ina Rösiger, Arndt Riester, and Jonas Kuhn. 2018. Bridging resolution: Task definition, corpus resources and rule-based experiments. In Proc. of the 27th International Conference on Computational Linguistics, pages 3516-3528, Santa Fe, USA.
+Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.
+
+Niket Tandon, Bhavana Dalvi Mishra, Joel Grus, Wen tau Yih, Antoine Bosselut, and Peter Clark. 2018. Reasoning about actions and state changes by injecting commonsense knowledge. In EMNLP.
+Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Ni-anwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes release 5.0. Linguistic Data Consortium Catalog No. LDC2013T19.
+Sam Wiseman, Alexander M. Rush, Stuart Shieber, and Jason Weston. 2015. Learning anaphoricity and antecedent ranking features for coreference resolution. In Proc. of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1416-1426, Beijing, China.
+Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. In Proc. of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 994-1004, San Diego, USA.
+Patrick Xia and Benjamin Van Durme. 2021. Moving on from OntoNotes: Coreference resolution model transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5241-5256, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Yoko Yamakata, Shinsuke Mori, and John Carroll. 2020. English recipe flow graph corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5187-5194, Marseille, France. European Language Resources Association.
+Juntao Yu and Massimo Poesio. 2020. Multitask learning-based neural bridging reference resolution. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3534-3546, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612.
+Rui Zhang, Cicero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang, and Dragomir Radev. 2018. Neural coreference resolution with deep biaffine attention by joint mention detection and mention clustering. In Proc. of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 102-107, Melbourne, Australia.
+
+# A Additional Experimental Results
+
+In the following tables, we provide detailed experimental results.
+
+Table 6 provides anaphora resolution results with state changes based on 10-fold cross validation.
+
+Table 7 provides a full comparison of transfer learning per anaphora relation with state change based on 10-fold cross validation.
+
+Table 8 provides a full comparison of transfer learning per anaphora relation without state change based on 10-fold cross validation.
+
+Table 9 provides a full comparison of transfer learning for coreference resolution based on 10-fold cross validation, under standard coreference evaluation metrics, i.e. MUC, BCUBED, and CEAFE. Specifically, models are trained with the same parameters (e.g. data partitions, training epochs, etc.) discussed in Section 6, but evaluated with the standard coreference evaluation metrics instead. We consider "Ave. F" as the main evaluation metric, computed by averaging the F1 scores of MUC, BCUBED, and CEAFE.
+
+| Relation | Method | PA | RA | FA | PR | RR | FR |
| COREF (Surface) | coreference | 46.5 ± 2.2 | 13.3 ± 0.7 | 19.7 ± 0.9 | 22.7 ± 2.0 | 6.2 ± 0.5 | 9.2 ± 0.7 |
| joint_train | 48.6 ± 1.9 | 15.3 ± 0.7 | 22.0 ± 0.9 | 28.7 ± 1.7 | 8.6 ± 0.5 | 12.5 ± 0.7 |
| COREF (Atom) | coreference | 46.5 ± 2.2 | 13.3 ± 0.7 | 19.7 ± 0.9 | 27.9 ± 2.1 | 7.5 ± 0.5 | 11.2 ± 0.8 |
| joint_train | 48.6 ± 1.9 | 15.3 ± 0.7 | 22.0 ± 0.9 | 33.5 ± 1.8 | 9.8 ± 0.5 | 14.4 ± 0.7 |
| Bridging | bridging | 51.7 ± 1.0 | 25.3 ± 0.6 | 33.2 ± 0.6 | 36.3 ± 0.8 | 19.4 ± 0.6 | 24.5 ± 0.6 |
| joint_train | 52.6 ± 1.0 | 24.6 ± 0.6 | 32.7 ± 0.7 | 37.7 ± 0.8 | 19.1 ± 0.6 | 24.7 ± 0.6 |
| TR | bridging | 47.0 ± 2.3 | 16.6 ± 0.9 | 23.0 ± 1.2 | 32.9 ± 1.9 | 13.2 ± 0.8 | 17.3 ± 0.9 |
| joint_train | 52.0 ± 2.3 | 16.0 ± 0.9 | 22.9 ± 1.1 | 37.5 ± 2.2 | 13.2 ± 0.8 | 17.9 ± 1.0 |
| IWOA | bridging | 5.9 ± 1.6 | 3.3 ± 1.1 | 3.7 ± 1.1 | 3.1 ± 1.1 | 2.3 ± 1.1 | 2.3 ± 1.0 |
| joint_train | 4.3 ± 1.3 | 2.4 ± 0.7 | 2.7 ± 0.7 | 2.5 ± 1.0 | 0.9 ± 0.4 | 1.1 ± 0.4 |
| IWA | bridging | 55.2 ± 1.2 | 36.8 ± 1.0 | 42.9 ± 0.9 | 37.9 ± 0.9 | 22.7 ± 0.8 | 27.3 ± 0.7 |
| joint_train | 55.6 ± 1.2 | 35.8 ± 1.0 | 42.3 ± 0.9 | 39.4 ± 1.0 | 22.4 ± 0.8 | 27.5 ± 0.7 |
| Overall | joint_train | 51.6 ± 0.8 | 21.5 ± 0.4 | 29.9 ± 0.5 | 36.3 ± 0.7 | 17.3 ± 0.5 | 23.0 ± 0.5 |
+
+Table 6: Anaphora resolution results based on 10-fold cross validation with state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (a total of $5 \times 3 \times 10$ runs). Models are trained for "coreference", "bridging" or "joint_train" (both tasks jointly). "F_A" denotes the F1 score for anaphor prediction, and "F_R" for relation prediction.
+
+| Relation | Method | PA | RA | FA | PR | RR | FR |
| COREF (Surface) | coreference | 45.6 ± 2.3 | 13.9 ± 0.8 | 20.0 ± 1.0 | 27.9 ± 2.1 | 8.3 ± 0.6 | 11.9 ± 0.8 |
| joint_train | 43.4 ± 2.3 | 12.3 ± 0.7 | 18.1 ± 1.0 | 24.5 ± 1.9 | 6.5 ± 0.5 | 9.7 ± 0.6 |
| COREF (Atom) | coreference | 45.6 ± 2.3 | 13.9 ± 0.8 | 20.0 ± 1.0 | 32.9 ± 2.2 | 9.4 ± 0.6 | 13.7 ± 0.8 |
| joint_train | 43.4 ± 2.3 | 12.3 ± 0.7 | 18.1 ± 1.0 | 29.1 ± 2.1 | 7.6 ± 0.5 | 11.3 ± 0.7 |
| Bridging | bridging | 53.4 ± 1.0 | 24.9 ± 0.5 | 33.3 ± 0.6 | 38.9 ± 0.8 | 19.8 ± 0.6 | 25.7 ± 0.6 |
| joint_train | 55.2 ± 1.0 | 25.6 ± 0.6 | 34.3 ± 0.6 | 39.6 ± 0.8 | 19.7 ± 0.5 | 25.8 ± 0.6 |
| TR | bridging | 50.6 ± 2.2 | 17.8 ± 0.9 | 24.3 ± 1.0 | 37.8 ± 2.1 | 14.3 ± 0.8 | 18.9 ± 0.9 |
| joint_train | 53.8 ± 2.4 | 16.5 ± 0.9 | 23.5 ± 1.2 | 36.3 ± 2.2 | 12.9 ± 0.8 | 17.3 ± 0.9 |
| IWOA | bridging | 4.4 ± 1.4 | 1.9 ± 0.6 | 2.3 ± 0.7 | 1.2 ± 0.5 | 0.5 ± 0.2 | 0.6 ± 0.2 |
| joint_train | 5.0 ± 1.5 | 2.9 ± 1.1 | 3.3 ± 1.1 | 2.6 ± 1.1 | 1.9 ± 1.0 | 2.0 ± 1.0 |
| IWA | bridging | 56.9 ± 1.2 | 35.4 ± 1.0 | 42.4 ± 0.9 | 40.5 ± 0.9 | 23.1 ± 0.7 | 28.5 ± 0.7 |
| joint_train | 58.2 ± 1.2 | 37.8 ± 1.0 | 44.4 ± 0.9 | 41.5 ± 0.9 | 23.4 ± 0.7 | 29.0 ± 0.7 |
| Overall | joint_train | 53.2 ± 0.8 | 21.3 ± 0.4 | 30.0 ± 0.5 | 37.9 ± 0.7 | 17.5 ± 0.4 | 23.6 ± 0.5 |
+
+Table 7: Experiments with transfer learning based on 10-fold cross validation with state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (a total of $5 \times 3 \times 10$ runs). Models are trained for "coreference", "bridging" or "joint_train" (both tasks jointly). "F_A" denotes the F1 score for anaphor prediction, and "F_R" for relation prediction.
+
+| Relation | Method | PA | RA | FA | PR | RR | FR |
| COREF (Surface) | coreference | 63.3 ± 0.9 | 37.8 ± 0.8 | 46.7 ± 0.8 | 34.4 ± 0.9 | 20.5 ± 0.6 | 25.3 ± 0.7 |
| joint_train | 66.4 ± 1.0 | 35.4 ± 0.9 | 45.3 ± 0.9 | 39.7 ± 1.0 | 21.0 ± 0.6 | 26.9 ± 0.7 |
| COREF (Atom) | coreference | 63.3 ± 0.9 | 37.8 ± 0.8 | 46.7 ± 0.8 | 47.8 ± 1.1 | 26.3 ± 0.7 | 33.5 ± 0.8 |
| joint_train | 66.4 ± 1.0 | 35.4 ± 0.9 | 45.3 ± 0.9 | 52.2 ± 1.2 | 25.8 ± 0.7 | 33.9 ± 0.8 |
| Bridging | bridging | 55.5 ± 1.3 | 33.1 ± 0.9 | 40.6 ± 0.9 | 38.0 ± 1.0 | 21.5 ± 0.7 | 26.7 ± 0.7 |
| joint_train | 58.4 ± 1.2 | 35.8 ± 0.9 | 43.4 ± 0.8 | 40.3 ± 1.0 | 22.3 ± 0.6 | 27.9 ± 0.7 |
| Overall | joint_train | 63.0 ± 0.7 | 35.8 ± 0.6 | 45.2 ± 0.6 | 39.8 ± 0.6 | 22.0 ± 0.5 | 27.9 ± 0.5 |
+
+Table 8: Experiments with transfer learning based on 10-fold cross validation without state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (total $5 \times 3 \times 10$ runs). Models are trained for "coreference", "bridging" or "joint_train" (both tasks jointly). "F_A" denotes the F1 score for anaphor prediction, and "F_R" for relation prediction.
+
+| State | Method | MUC P | MUC R | MUC F | BCUBED P | BCUBED R | BCUBED F | CEAFE P | CEAFE R | CEAFE F | Ave. F |
| With state | coreference | 30.1 ± 2.0 | 8.8 ± 0.6 | 12.7 ± 0.8 | 37.9 ± 1.8 | 10.8 ± 0.5 | 15.7 ± 0.7 | 46.2 ± 1.7 | 12.1 ± 0.5 | 18.5 ± 0.7 | 15.6 ± 0.7 |
| - w/ transfer | 35.1 ± 2.0 | 11.2 ± 0.6 | 16.0 ± 0.8 | 40.8 ± 1.8 | 12.4 ± 0.6 | 17.8 ± 0.7 | 48.3 ± 1.7 | 12.9 ± 0.5 | 19.6 ± 0.7 | 17.8 ± 0.7 |
| joint_train | 30.4 ± 1.7 | 10.9 ± 0.7 | 15.3 ± 0.9 | 37.1 ± 1.6 | 12.3 ± 0.6 | 17.4 ± 0.8 | 43.0 ± 1.6 | 13.5 ± 0.6 | 19.9 ± 0.8 | 17.5 ± 0.8 |
| - w/ transfer | 36.4 ± 2.2 | 9.5 ± 0.6 | 14.2 ± 0.8 | 41.8 ± 2.0 | 10.5 ± 0.5 | 15.7 ± 0.7 | 46.1 ± 1.8 | 11.4 ± 0.5 | 17.6 ± 0.7 | 15.8 ± 0.7 |
| Without state | coreference | 50.5 ± 1.1 | 32.2 ± 0.8 | 38.7 ± 0.8 | 49.3 ± 0.9 | 30.2 ± 0.7 | 36.5 ± 0.6 | 54.6 ± 0.8 | 28.1 ± 0.7 | 36.5 ± 0.7 | 37.2 ± 0.7 |
| - w/ transfer | 51.9 ± 1.1 | 30.3 ± 0.8 | 37.7 ± 0.8 | 51.9 ± 1.0 | 28.4 ± 0.6 | 35.7 ± 0.6 | 55.4 ± 0.8 | 27.6 ± 0.5 | 36.5 ± 0.5 | 36.6 ± 0.6 |
| joint_train | 53.4 ± 1.1 | 32.2 ± 0.9 | 39.5 ± 0.9 | 53.6 ± 1.0 | 30.1 ± 0.8 | 37.5 ± 0.7 | 56.2 ± 0.8 | 29.6 ± 0.7 | 38.2 ± 0.7 | 38.4 ± 0.7 |
| - w/ transfer | 54.5 ± 1.1 | 30.2 ± 0.8 | 38.2 ± 0.8 | 55.4 ± 1.1 | 28.4 ± 0.6 | 36.6 ± 0.6 | 57.0 ± 0.8 | 29.2 ± 0.6 | 38.1 ± 0.6 | 37.6 ± 0.7 |
+
+Table 9: Results based on standard coreference evaluation metrics, i.e. MUC, BCUBED, and CEAFE, based on 10-fold cross validation, both with and without state change. Models were trained over 10,000 epochs, and averaged over 3 runs with 5 different random seeds (a total of $5 \times 3 \times 10$ runs). Models are trained for "coreference", or "joint_train" (both tasks jointly). "Ave. F" denotes the average F1 score of MUC, BCUBED, and CEAFE.
\ No newline at end of file
diff --git a/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/images.zip b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..745c250306cf4427f35700c26ece0e295264fb6b
--- /dev/null
+++ b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:149d98e16ab25f412b2c19dc0b00ddcb7050c283d930853169c03a1fe273d96f
+size 821339
diff --git a/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/layout.json b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5d38bfc48ae166b7971d59963c24c4aadb366cbd
--- /dev/null
+++ b/whatdoesittaketobakeacakethereciperefcorpusandanaphoraresolutioninproceduraltext/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22741fdc7eaf76da4f9c751c7e07731e3546ab5c2ebb23f828a61e8ead9630fd
+size 421922
diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_content_list.json b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6c8aa4514b096d71ff0c48ad92e765a3c0e24f34
--- /dev/null
+++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03c189b408ea05e3caaee25aa98d8ca58f8ecbc9cf486e898d3a587789ac5104
+size 76436
diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_model.json b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..014d6a36ac7f3ba1438c3df9b732e83e6582e4ca
--- /dev/null
+++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f922418710645f70845575ce27570ae477f0a5bbc96d5fd780f595a3dce7a8c0
+size 93601
diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_origin.pdf b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c2e1c50aca4a4af414927c32e1f2b09e3fd20118
--- /dev/null
+++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/3f6b5ce8-aea6-402b-a84c-fccce228cf0f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe35aa83e423e74b3c123520a29c343de090bf85499d8cbc92544dc93a4a8ad6
+size 604606
diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/full.md b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e850b91d5010d683686c00cc5e9561d66196f702
--- /dev/null
+++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/full.md
@@ -0,0 +1,390 @@
+# What is wrong with you?: Leveraging User Sentiment for Automatic Dialog Evaluation
+
+Sarik Ghazarian $^{1*}$ Behnam Hedayatnia $^{2}$ Alexandros Papangelis $^{2}$ Yang Liu $^{2}$ Dilek Hakkani-Tur $^{2}$
+
+1 University of Southern California / Information Sciences Institute
+
+2 Amazon Alexa AI
+
+sarik@isi.edu
+
+{behnam,papangea,yangliud,hakkanit}@amazon.com
+
+# Abstract
+
+Accurate automatic evaluation metrics for open-domain dialogs are in high demand. Existing model-based metrics for system response evaluation are trained on human annotated data, which is cumbersome to collect. In this work, we propose to use information that can be automatically extracted from the next user utterance, such as its sentiment or whether the user explicitly ends the conversation, as a proxy to measure the quality of the previous system response. This allows us to train on a massive set of dialogs with weak supervision, without requiring manual system turn quality annotations. Experiments show that our model is comparable to models trained on human annotated data. Furthermore, our model generalizes across both spoken and written open-domain dialog corpora collected from real and paid users.
+
+# 1 Introduction
+
+Relying on human evaluation to determine the quality of open-domain dialog systems is not an efficient approach in terms of time and cost. Automatic evaluation can be a good replacement for human annotations and can increase the pace of open-domain dialog system development. However, standard word-overlap metrics (BLEU, ROUGE, Perplexity) do not correlate well with human judgements of open-domain dialog systems (Deriu et al., 2020; Liu et al., 2016) because of the diverse set of outputs that can be relevant given a dialog context.
+
+A solution for better automatic evaluation methods is to train reference-free evaluators that learn how to assess the generated responses given dialog contexts from different aspects such as relevancy (Tao et al., 2018; Ghazarian et al., 2019; Lan et al., 2020), engagement (Ghazarian et al., 2020), fluency (Zhang et al., 2021b; Pang et al., 2020),
+
+contradiction (Pang et al., 2020; Nie et al., 2021) amongst others. It is also important to get some holistic evaluation at the dialog level in order to assess the dialogs as a whole (Zhang et al., 2021a; Li et al., 2021; Mehri and Eskenazi, 2020; Finch et al., 2021).
+
+Recently, Mehri and Eskenazi (2020); Eskenazi et al. (2019) have shown the effectiveness of looking into the next user utterance as a proxy to measure the quality of the chatbot's generated responses. See and Manning (2021) have shown that predicting next user satisfaction helps to select more relevant system utterances. Inspired by works in this area, we propose to automatically extract features from the next user utterance, such as sentiment, to use as a proxy to evaluate system responses. The advantage of our method is that we do not need to train on data with human annotations for turn level quality, and instead can rely on available large datasets with automatically extracted annotations.
+
+Most existing automatic evaluators focus purely on open-domain text-based dialog systems. In addition to textual interactions, we perform experiments on voice-based interactions that were collected via paid and real users. Furthermore, we compute correlations with a real user's own (referred to as first party, 1P) rating when available, in addition to annotations by third party (3P) annotators. Our contributions include:
+
+1. training an automatic evaluator on the sentiment of the next user utterance in a weakly supervised fashion to evaluate system responses,
+2. outperforming existing automatic evaluation metrics on both text and voice-based open-domain dialog datasets,
+3. a turn-level annotated open-domain text-based dialog dataset that we will release. $^1$
+
+
+Figure 1: Training/Inference for turn quality estimation. The dotted arrows show how $q_{i}$ , which represents the system turn quality for system response $r_{i}$ , is constructed for training. For our regression model indicated by the red arrow, $s_{i + 1}$ (user sentiment) and $e_{i + 1}$ (user stop) are summed together to create $q_{i}$ . For our classification model indicated by the blue arrow, $q_{i}$ is equal to $t_{i}$ . In the example dialog, the user expresses negative sentiment in $u_{i + 1}$ . The sentiment score -1.97 is used as the reference label $q_{i}$ , indicating the quality of response $r_{i}$ .
+
+# 2 Methods for Automatic Evaluation
+
+For turn quality estimation, the task is defined as follows: given a dialog context and a system response in the last turn, $D = [u_{1}, r_{1} \dots u_{i}, r_{i}]$ (where $u_{i}$ and $r_{i}$ are the user utterance and system response respectively for the $i^{th}$ turn in a dialog), determine if $r_{i}$ is an appropriate response. $q_{i}$ indicates the quality of response $r_{i}$ and will be used as our reference label when training the model. Figure 1 shows our model architecture. We train a BERT-base (Devlin et al., 2019) model that encodes the dialog context and the latest system response. We use the pooled representation output by the BERT model and pass it through a linear layer to determine the quality of the response. Depending on the reference label used to train this model, we adopt a classification or regression setup, described below.
+
+- Classification model trained using turn level annotations. When annotations for system responses are available in our training data (a binary label $t_i$ as shown in Figure 1 for response $r_i$ , indicating if the system response is appropriate), we train a classification model
+
+using such reference labels.
+
+- Regression model trained using next user sentiment. Obtaining turn level annotations for dialogs is costly. In this work, we explore using weak supervision to approximate response quality. Eskenazi et al. (2019) stated that given a system response, the follow up user's utterance should be used to evaluate the quality of the system response as it increased agreement amongst human annotators. Motivated by this, we propose to use the sentiment of the next user utterance as a proxy to estimate the quality of the previous system response. In Figure 1, $s_{i+1}$ is the sentiment score for the next user utterance $u_{i+1}$ . Note that this information is automatically generated from the user utterance, and thus allows us to leverage data without a turn level annotation. Since such sentiment scores are often continuous, we use a regression model for these target labels.
+
+- Next user stop signal. We also examine if the next user utterance stops a dialog ( $e_{i+1}$ in Figure 1). $e_{i+1}$ is 0 if the user stops the dialog and 1 if they continue the dialog. We use this as an additional signal by summing it with the sentiment information above as target labels for model training.
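+
+As a concrete illustration of how these weak labels can be paired with the BERT-based quality model from Figure 1, the sketch below builds a regression target $q_i = s_{i+1} + e_{i+1}$ and scores a (context, response) pair. This is a minimal sketch, not the authors' code: the sentiment value, the helper names, and the use of a mean-squared-error loss are our own assumptions.
+
+```python
+# Sketch: weakly supervised turn-quality regression from the next user utterance.
+import torch
+from transformers import BertModel, BertTokenizerFast
+
+tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
+encoder = BertModel.from_pretrained("bert-base-uncased")
+regression_head = torch.nn.Linear(encoder.config.hidden_size, 1)
+
+def build_target(next_user_sentiment: float, user_continues: bool) -> float:
+    """q_i = s_{i+1} + e_{i+1}: sentiment of the next user utterance plus the
+    stop signal (1 if the user continues the dialog, 0 if they stop)."""
+    return next_user_sentiment + (1.0 if user_continues else 0.0)
+
+def predict_quality(context: list, system_response: str) -> torch.Tensor:
+    """Encode the dialog context plus the latest system response and map the
+    pooled representation to a scalar quality score."""
+    text = " ".join(context + [system_response])
+    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
+    pooled = encoder(**inputs).pooler_output          # [1, hidden_size]
+    return regression_head(pooled).squeeze(-1)        # [1]
+
+# Example: the next user utterance had valence -1.97 and the user kept talking.
+target = torch.tensor([build_target(-1.97, user_continues=True)])
+score = predict_quality(["user: news"],
+                        "Sure. I love sports! what is the sport that you watched the most?")
+loss = torch.nn.functional.mse_loss(score, target)
+```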
+
+For dialog level evaluation, we follow previous work and use mean aggregation techniques to estimate dialog level ratings from turn level scores (Lowe et al., 2017; Ghazarian et al., 2019, 2020; Lan et al., 2020; Yeh et al., 2021). In our experiments, we show how aggregated turn level quality and user sentiment scores correlate with dialog level ratings.
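+
+A minimal sketch of this dialog-level aggregation, under the assumption that per-turn scores are already available; `scipy` is used here only as one convenient way to compute the Pearson and Spearman correlations reported later.
+
+```python
+# Sketch: mean-aggregate turn-level scores into a dialog-level estimate and
+# correlate the estimates with human dialog ratings.
+from statistics import mean
+from scipy.stats import pearsonr, spearmanr
+
+# Hypothetical data: per-dialog turn scores and the corresponding dialog ratings.
+dialog_turn_scores = [[0.2, -0.1, 0.5], [1.1, 0.7], [-0.3, -0.2, 0.0, 0.4]]
+dialog_ratings = [3.0, 5.0, 2.0]
+
+estimates = [mean(scores) for scores in dialog_turn_scores]
+print(pearsonr(estimates, dialog_ratings)[0], spearmanr(estimates, dialog_ratings)[0])
+```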
+
+# 3 Dialog Datasets
+
+As described earlier, most previous work in automatic evaluation focuses on text-based open-domain dialog systems (Yeh et al., 2021; Lan et al., 2020; Sinha et al., 2020; Huang et al., 2020; Ghazarian et al., 2020). Additionally most dialog datasets are collected via crowdworkers. While we also evaluate on written (text-based) dialogs, the primary dataset in our work consists of spoken (voice-based) interactions between a dialog system and a real user.
+
+# 3.1 Open Domain Dialog System
+
+We first describe the open-domain dialog system used for our spoken dialog data collection. The
+
+| Dialog Split | Number of Interactions (Train/Dev/Test) | Avg. Number of Turns (Train/Dev/Test) | 3P turn quality | 3P rating | 1P rating |
+| --- | --- | --- | --- | --- | --- |
+| PUI | - / - / 87 | - / - / 14.5 | ✓ | ✓ | |
+| RUI-1P | 6215 / 690 / - | 10.3 / 10.8 / - | | | ✓ |
+| RUI-3P | 500 / 55 / 132 | 11.1 / 10.7 / 14.3 | ✓ | ✓ | ✓ |
+| ConTurE | - / - / 119 | - / - / 8.95 | ✓ | ✓ | |
+
+Table 1: Dataset Statistics for Spoken and Written dialog datasets. RUI (Real User Interactions)
+
+
+Figure 2: Architecture of our open-domain dialog system. NER = Named Entity Recognition, DA = Dialog Act
+
+architecture of our dialog system is shown in Figure 2. Every user utterance in the dialog is sent into an ASR system whose output goes through a series of NLU modules that classify topics, dialog acts, and sentiment, extract entities, and detect whether the user utterance is offensive. Our system then calls multiple response generators (called responders) for the given dialog context and logs all the generated response candidates within the State Manager. The final response is selected based on a rule-based ranking strategy, and then sent to the TTS module whose output is presented to the user.
+
+For popular topics in open domain dialogs, such as movies, music, and recent news, we develop template-based responders (highlighted in green in Figure 2) for the given dialog state. An example state and response for the movie domain is: when the user's turn mentions a movie name (based on the NER result), we respond with information about the actor, the rating, or the plot of that movie. In addition to topic-specific template-based responders, our system includes other template-based responders for some special dialog contexts, such as greetings, topic switches, etc.
+
+For every user turn, we also apply a neural network-based response generation (NRG) model to produce a response, highlighted in purple in Figure 2. Our NRG Responder is a GPT2-XL (Radford et al., 2019) based model trained on real user-system interactions described in Section 3.2.
+
+The rule-based response ranker uses predefined logic and selects a template-based responder when
+
+it is available and the user topic matches that responder; otherwise it uses the NRG response as a fallback. Since our system has only a few template-based responders, it uses NRG responses most of the time.
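+
+The selection logic can be pictured roughly as follows; this is a minimal sketch under our own assumptions about the dialog-state and candidate interfaces (the names `dialog_state`, `candidates`, and the `"nrg"` key are illustrative, not the system's actual API).
+
+```python
+# Rough sketch of the rule-based response ranking described above:
+# prefer a template-based responder that matches the detected topic,
+# otherwise fall back to the NRG response.
+def select_response(dialog_state: dict, candidates: dict) -> str:
+    """candidates maps responder names (e.g. 'movies_template', 'nrg') to
+    generated responses; dialog_state carries the NLU topic prediction."""
+    template_key = f"{dialog_state.get('topic')}_template"
+    if candidates.get(template_key):
+        return candidates[template_key]
+    return candidates["nrg"]  # NRG response as the fallback
+```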
+
+# 3.2 Spoken Dialogs
+
+We deploy the dialog system described above within the Alexa Prize Socialbot framework (Ram et al., 2018) to interact with real users. A user initiates an interaction with our dialog system and consents to have their data collected. A turn within an interaction is specified as a user utterance-system response pair. These interactions end when the user requests to stop the conversation. At the end of each interaction, users are given the opportunity to leave a rating in the range of 1 to 5. We define these ratings as $1P$ rating as they come from the same users who interacted with the conversational agent. We denote this dataset as Real User Interactions $(RUI)^{2}$ . Our data consists of approximately 100k interactions and 5 million turns. This dataset is used to train our NRG Responder mentioned in the previous section. We discuss its training details in the Appendix.
+
+Not every user leaves a rating; therefore, we take a sample of interactions from $RUI$ that contain user ratings and denote this dataset as $RUI - 1P$ .
+
+In addition to real user interactions, we form a dataset of interactions from paid users who were
+
+instructed to speak to the same dialog system. We denote these interactions as paid user interactions $PUI^2$ . The difference between paid and real users is that the former are internal workers who are recruited to rigorously test and probe the dialog system and as a result are much more proactive in the dialogs as opposed to real users who are known to be less proactive in these social conversations (Juraska et al., 2021; Finch et al., 2020). These internal workers are considered paid as their primary job consists of assisting with data collection. Real users, however, are consenting to a dialog with our dialog system but are not being paid.
+
+To obtain turn quality labels, we annotate a subset of $RUI-1P$ at the turn level. Given a complete interaction, an experienced annotator was asked to annotate each system response as either 1 or 0, where 1 indicates the response is appropriate and 0 that it is not. Additionally, we ask annotators to leave a dialog level rating in the range of 1 to 5. We define these turn and dialog level annotations as $3P$ turn quality and $3P$ ratings respectively, since they came from annotators who rated other users' interactions. We denote this annotated data as $RUI-3P$ . An example of a turn level annotation is shown in the Appendix. We also perform the same annotation on the $PUI$ data. Table 1 shows the statistics for each of these collections and available annotations for each dataset.
+
+To obtain sentiment labels, we leverage the BiLSTM sentiment model from Kim et al. (2020), which was trained on spoken dialog data, to automatically tag user utterances with sentiment. The model takes in both audio and textual features and outputs a real-valued valence score on a scale from -3 to 3, which measures the degree of the utterance's positivity/negativity.
+
+# 3.3 Written Dialogs
+
+We sample a set of dialogs released from the Interactive Evaluation of Dialog track (Gunasekara et al., 2020) to be annotated for turn quality. These dialogs were collected from invited participants conversing with knowledge-grounded response generation models through textual exchanges, and have been publicly released.$^4$ The original authors of this dataset asked Amazon Mechanical Turk (AMT) workers to rate 2200 interactions on multiple dialog level dimensions, such as coherent, informative, and overall. The full list of dialog level annotation dimensions is included in the Appendix. However, these dialogs do not have turn level annotations. In order to evaluate our models at the turn level, we sample 119 dialogs with an average length of 8 turns. For each turn, we ask three AMT workers to rate whether they dislike, somewhat like, or like the Chatbot's response with a score of 0, 1, or 2 respectively. To help workers judge response quality, we ask them to look at how relevant and interesting a response is. We use majority voting to determine the final score. In the case of ties we use a score from an internal author. The Krippendorff's alpha score is 0.31, representing fair agreement between annotators. We denote these assessments as $3P$ turn quality since the AMT workers are rating other workers' dialogs. We denote this dataset as Conversational Turns Evaluation (ConTurE) and publicly release it.$^5$
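+
+A small sketch of this aggregation step, assuming three integer scores per turn and a tie-break score from an internal author; the function name is illustrative.
+
+```python
+# Sketch: aggregate three AMT scores in {0, 1, 2} by majority vote,
+# falling back to the internal author's score when there is no majority.
+from collections import Counter
+
+def aggregate_turn_score(worker_scores, tiebreak_score):
+    counts = Counter(worker_scores).most_common()
+    if len(counts) > 1 and counts[0][1] == counts[1][1]:  # no strict majority
+        return tiebreak_score
+    return counts[0][0]
+
+assert aggregate_turn_score([2, 2, 0], tiebreak_score=1) == 2
+assert aggregate_turn_score([0, 1, 2], tiebreak_score=1) == 1
+```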
+
+# 4 Results and Discussions
+
+We compare our method with a suite of open source models from (Yeh et al., 2021) $^4$ including RUBER, BERT-RUBER, PONE, PredictiveEngagement and FED (Tao et al., 2018; Ghazarian et al., 2019; Lan et al., 2020; Ghazarian et al., 2020; Mehri and Eskenazi, 2020).
+
+Table 2 shows automatic turn level quality estimation results, measured using both Pearson and Spearman correlation against turn level annotations on three datasets for different methods. On the spoken dialog test sets (RUI-3P and PUI), the baseline models perform poorly. In contrast, our Classification (3P) model trained using $3P$ turn quality achieves the highest correlation (0.29/0.28) on RUI-3P. This can be partly explained by the matched training and testing setup. We observe promising results for the Reg (Sentiment + User Stop) model, which was trained with next user sentiment information combined with the stop signal: it achieves the highest correlation on the PUI test set and a correlation of (0.22/0.23) on RUI-3P. This demonstrates the effectiveness of weak supervision. We compare different training set sizes, RUI-1P (40%) versus the full RUI-1P, and observe the expected benefit of more data for model training. We also see that our models outperform the baseline models on the ConTurE test set. It is important to note that all the baseline models have been designed and evaluated
+
+| Training Set | Model (Ref label) | RUI-3P (test set) Pearson | RUI-3P (test set) Spearman | PUI Pearson | PUI Spearman | ConTurE Pearson | ConTurE Spearman |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| - | RUBER | -0.08 | -0.07 | -0.1 | -0.1 | -0.01 | -0.03 |
+| - | BERT-RUBER | 0.01 | 0.02 | -0.02 | -0.02 | -0.007 | 0.004 |
+| - | PONE | 0.01 | 0.004 | -0.02 | -0.03 | 0 | 0.01 |
+| - | PredictiveEng | -0.11 | -0.11 | -0.06 | -0.05 | -0.11 | -0.09 |
+| - | FED | -0.006 | -0.02 | -0.03 | -0.04 | 0.11 | 0.10 |
+| *Our method* | | | | | | | |
+| RUI-3P | Classification (3P) | 0.29 | 0.28 | 0.23 | 0.24 | -0.01 | 0.11 |
+| RUI-1P | Reg (Sentiment) | 0.15 | 0.12 | 0.19 | 0.16 | 0.34 | 0.34 |
+| RUI-1P | Reg (Sentiment + User Stop) | 0.22 | 0.23 | 0.35 | 0.3 | 0.3 | 0.33 |
+| RUI-1P (40%) | Reg (Sentiment + User Stop) | 0.2 | 0.22 | 0.29 | 0.24 | 0.31 | 0.32 |
+
+using written dialogs, and though our models were fine-tuned only on spoken dialog, they are able to generalize to a different modality. FED has been shown to be a good dialog-level evaluator (Yeh et al., 2021). However, we see in Table 2 that FED achieves low performance for turn-level evaluation. This matches the conclusion of Mehri and Eskenazi (2020), who point out that FED captures dialog-level qualities from its Reddit training data better than turn-level qualities.
+
+Table 3 shows the correlation results of the aggregated turn level scores with $3P$ ratings and $1P$ ratings on the spoken dataset. From the first row, we can see that there is a moderate positive correlation between the aggregated mean of $3P$ turn quality and $3P$ ratings (0.50/0.46), but only a very low positive correlation with $1P$ ratings (0.16/0.12). This may be due to the fact that Likert scale ratings can have lower inter-annotator agreement (Belz and Kow, 2010). Additionally, the 3P annotators had access to the whole interaction and could re-read the context, in contrast to 1P users, who may forget what happened earlier in the interaction since it is spoken. Another reason is that 3P annotators only read the transcript of the dialog for turn or dialog evaluation, and may miss tones in utterances that can affect 1P user ratings. When using the user sentiment scores, we see that their mean aggregation has a positive correlation with both $3P$ ratings (0.48/0.46) and $1P$ ratings (0.38/0.37). The higher correlation of user sentiment (as opposed to $3P$ turn quality) with $1P$ ratings is partly because of the different signals used in 3P annotation, as discussed above. These results suggest sentiment can be used to estimate dialog level ratings, as done in previous work such as Kim et al. (2020).
+
+Overall, we see that the next user utterance sentiment serves as a reasonable proxy to the quality of the previous system response, hence when there
+
+is not much data with turn level quality annotation, we can train models using weak supervision coming from the next user utterance. In this study, we use the sentiment scores obtained from user utterances in speech based dialogs; therefore, acoustic features were used to obtain such sentiment information. Since speech based sentiment or emotion recognition has been widely studied, it does not require much additional annotation to train the sentiment model for user utterances, and thus we can rely on existing models. We also explored using sentiment just based on text, but observed some issues in our preliminary study. For example, when users reply with a 'no' to a question, it is classified as negative; however, this may not indicate a problem with the previous system response. We plan to further investigate this in our future work, which will allow us to better utilize more available text based dialog data. Example outputs from both FED and our model are shown in the Appendix.
+
+Table 2: Correlation of both baseline and our model outputs with $3P$ turn quality for spoken and written datasets. For our method, the reference labels used for the Classification or Reg (Regression) models are indicated.
+
+| | 3P Ratings (P) | 3P Ratings (S) | 1P Ratings (P) | 1P Ratings (S) |
+| --- | --- | --- | --- | --- |
+| 3P turn quality | 0.50 | 0.46 | 0.16 | 0.12 |
+| User sentiment | 0.48 | 0.46 | 0.38 | 0.37 |
+
+Table 3: Correlation between turn level information (3P turn quality and user turn sentiment) and dialog level rating on RUI-3P. P=Pearson, S=Spearman.
+
+# 5 Conclusion
+
+In this work, we show that instead of training on manually annotated data we can train on sentiment from the next user utterance in a weakly supervised manner to evaluate system responses. We show that our model has better cross domain generalization and performs well on a written dialog dataset. In our future work we will investigate other methods beyond simple aggregation for dialog level estimation and using more text based dialog data.
+
+# 6 Ethics and Broader Impact
+
+Our work involves leveraging user sentiment to evaluate the quality of system responses. We acknowledge that we are using data from real users who have not been paid for these interactions. We also acknowledge there may be biases in the demographics of the user population. We conducted our ConTurE annotation through Amazon Mechanical Turk. We pay turkers $12 per hour, which is well above the federal minimum wage.
+
+# References
+
+Anja Belz and Eric Kow. 2010. Comparing rating scales and preference judgements in language evaluation. In Proceedings of the 6th International Natural Language Generation Conference.
+Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, and Mark Cieliebak. 2020. Survey on evaluation methods for dialogue systems. Artificial Intelligence Review, pages 1-56.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Maxine Eskenazi, Shikib Mehri, Evgeniia Razumovskaia, and Tiancheng Zhao. 2019. Beyond turing: Intelligent agents centered on the user. arXiv preprint arXiv:1901.06613.
+James D Finch, Sarah E Finch, and Jinho D Choi. 2021. What went wrong? explaining overall dialogue quality through utterance-level impacts. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, pages 93-101.
+Sarah E Finch, James D Finch, Ali Ahmadvand, Xiangjue Dong, Ruixiang Qi, Harshita Sahijwani, Sergey Volokhin, Zihan Wang, Zihao Wang, Jinho D Choi, et al. 2020. Emora: An inquisitive social chatbot who cares for you. Alexa Prize Proceedings.
+Sarik Ghazarian, Johnny Wei, Aram Galstyan, and Nanyun Peng. 2019. Better automatic evaluation of open-domain dialogue systems with contextualized embeddings. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 82-89.
+Sarik Ghazarian, Ralph Weischedel, Aram Galstyan, and Nanyun Peng. 2020. Predictive engagement: An efficient metric for automatic evaluation of open-domain dialogue systems. In Proceedings of the
+
+AAAI Conference on Artificial Intelligence, volume 34, pages 7789-7796.
+Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, et al. 2020. Overview of the ninth dialog system technology challenge: Dstc9. arXiv preprint arXiv:2011.06486.
+Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, and Xiaodan Liang. 2020. Grade: Automatic graph-enhanced coherence metric for evaluating open-domain dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9230-9240.
+Juraj Juraska, Kevin K Bowden, Lena Reed, Vrindavan Harrison, Wen Cui, Omkar Patil, Rishi Rajasekaran, Angela Ramirez, Cecilia Li, Eduardo Zamora, et al. 2021. Athena 2.0: Contextualized dialogue management for an alexa prize socialbot. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP).
+Yelin Kim, Joshua Levy, and Yang Liu. 2020. Speech sentiment and customer satisfaction estimation in socialbot conversations. Proc. Interspeech 2020, pages 1833-1837.
+Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+Tian Lan, Xian-Ling Mao, Wei Wei, Xiaoyan Gao, and Heyan Huang. 2020. Pone: A novel automatic evaluation metric for open-domain generative dialogue systems. ACM Transactions on Information Systems (TOIS), 39(1):1-37.
+Zekang Li, Jinchao Zhang, Zhengcong Fei, Yang Feng, and Jie Zhou. 2021. Conversations are not flat: Modeling the dynamic information flow across dialogue utterances. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 128-138.
+Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132.
+Ryan Lowe, Michael Noseworthy, Iulian V Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. arXiv preprint arXiv:1708.07149.
+Shikib Mehri and Maxine Eskenazi. 2020. Unsupervised evaluation of interactive dialog with dialogpt. In Proceedings of the 21th Annual Meeting of the
+
+Special Interest Group on Discourse and Dialogue, pages 225-235.
+Yixin Nie, Mary Williamson, Mohit Bansal, Douwe Kiela, and Jason Weston. 2021. I like fish, especially dolphins: Addressing contradictions in dialogue modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1699-1713.
+Bo Pang, Erik Nijkamp, Wenjuan Han, Linqi Zhou, Yixian Liu, and Kewei Tu. 2020. Towards holistic and automatic evaluation of open-domain dialogue generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3619-3629.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
+Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, et al. 2018. Conversational ai: The science behind the alexa prize. arXiv preprint arXiv:1801.03604.
+Abigail See and Christopher Manning. 2021. Understanding and predicting user dissatisfaction in a neural generative chatbot. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 1-12, Singapore and Online. Association for Computational Linguistics.
+Koustuv Sinha, Prasanna Parthasarathi, Jasmine Wang, Ryan Lowe, William L Hamilton, and Joelle Pineau. 2020. Learning an unreferenced metric for online dialogue evaluation. arXiv preprint arXiv:2005.00583.
+Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
+Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.
+Yi-Ting Yeh, Maxine Eskenazi, and Shikib Mehri. 2021. A comprehensive assessment of dialog evaluation metrics. In The First Workshop on Evaluations and Assessments of Neural Conversation Systems, pages 15-33.
+Chen Zhang, Yiming Chen, Luis Fernando D'Haro, Yan Zhang, Thomas Friedrichs, Grandee Lee, and Haizhou Li. 2021a. Dynaeval: Unifying turn and dialogue level evaluation. In Proceedings of the
+
+59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5676-5689.
+Chen Zhang, Luis Fernando D'Haro, Rafael E Banchs, Thomas Friedrichs, and Haizhou Li. 2021b. Deep am-fm: Toolkit for automatic dialogue evaluation. In Conversational Dialogue Systems for the Next Decade, pages 53-69. Springer.
+
+# A Appendices
+
+# A.1 Hyperparameters for the turn level quality estimation model
+
+All our BERT models were finetuned with a batch size of 8 and a learning rate of 1e-5 with the Adam optimizer (Kingma and Ba, 2014). We train each model for 10 epochs and select the best model by computing correlation on the RUI-3P (dev set).
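+
+A schematic of the fine-tuning loop these hyperparameters imply; the data loaders and the model's forward interface are placeholders, and the MSE loss applies to the regression variant only.
+
+```python
+# Sketch: fine-tune for 10 epochs with Adam (lr 1e-5, batch size 8) and keep
+# the checkpoint with the best Pearson correlation on the RUI-3P dev set.
+import torch
+from scipy.stats import pearsonr
+
+def fine_tune(model, train_loader, dev_inputs, dev_labels, epochs=10, lr=1e-5):
+    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
+    best_corr, best_state = float("-inf"), None
+    for _ in range(epochs):
+        model.train()
+        for batch, targets in train_loader:              # batches of size 8
+            optimizer.zero_grad()
+            loss = torch.nn.functional.mse_loss(model(**batch), targets)
+            loss.backward()
+            optimizer.step()
+        model.eval()
+        with torch.no_grad():
+            preds = [model(**inputs).item() for inputs in dev_inputs]
+        corr = pearsonr(preds, dev_labels)[0]
+        if corr > best_corr:
+            best_corr = corr
+            best_state = {k: v.clone() for k, v in model.state_dict().items()}
+    model.load_state_dict(best_state)
+    return model
+```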
+
+# A.2 NRG responder hyperparameters
+
+We train our NRG Responder models on the RUI dataset described in Section 3.2. This dataset is split into 80/10/10 train, validation, and test sets. Our model is initialized with GPT2-XL (Radford et al., 2019) and is finetuned with a batch size of 2 and a learning rate of 6.25e-5 with the Adam optimizer. We train the model for 3 epochs, finetuning both the language modeling head and the multiple choice head of GPT2 in a TransferTransfo fashion (Wolf et al., 2019). For the multiple choice head, one randomly selected negative candidate was used. We leverage HuggingFace's transformers library for all our models. $^6$
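+
+A sketch of this two-headed fine-tuning, assuming HuggingFace's `GPT2DoubleHeadsModel` (which exposes both a language-modeling head and a multiple-choice head); the equal weighting of the two losses and the small `gpt2` checkpoint are our own simplifications.
+
+```python
+# Sketch: TransferTransfo-style training step with a language-modeling loss on
+# the gold response and a multiple-choice loss against one sampled distractor.
+import torch
+from transformers import GPT2DoubleHeadsModel, GPT2TokenizerFast
+
+tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")   # GPT2-XL in the paper
+model = GPT2DoubleHeadsModel.from_pretrained("gpt2")
+
+def training_step(input_ids, mc_token_ids, lm_labels, mc_labels, optimizer):
+    """input_ids: [batch, 2, seq_len] holding (gold, distractor) candidates;
+    mc_token_ids: index of each candidate's last token; lm_labels: LM targets
+    (-100 outside the gold response); mc_labels: index of the gold candidate."""
+    outputs = model(input_ids=input_ids, mc_token_ids=mc_token_ids,
+                    labels=lm_labels, mc_labels=mc_labels)
+    loss = outputs.loss + outputs.mc_loss   # equal weighting is an assumption
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+    return loss.item()
+```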
+
+# A.3 Turn level annotation example
+
+# Example
+
+Dialog History:
+
+System: I can talk about news, movies, music. What topic would you like to talk about today?
+
+User: news
+
+System: Sure. I love sports! what is the sport that you watched the most?
+
+Turn quality: 0
+
+Reason for annotation: The system response was off-topic
+
+Figure 3: Example of 3P turn annotations. Due to privacy concerns, this example dialog is from an internal author.
+
+# A.4 Turn level statistics
+
+| ConTurE | |
+| --- | --- |
+| Percentage of turns with score 0 | 30.7% |
+| Percentage of turns with score 1 | 22.2% |
+| Percentage of turns with score 2 | 47% |
+| RUI-3P | |
+| Percentage of turns with score 0 | 35.3% |
+| Percentage of turns with score 1 | 64.7% |
+
+Table 4: Statistics of Turn level annotations for both ConTurE and RUI-3P datasets.
+
+# A.5 Dialog level scores for ConTurE
+
+We take the mean aggregation of the turn level annotations and compute the correlation against each dialog level dimension in the original DSTC9 dataset. We see that the annotations have moderate correlation with all the parameters, with the highest being with 'human (overall)' (0.45/0.48). This shows that even though the turn and dialog level annotations were left by two different Turkers, the turn annotations seem reliable.
+
+| Dialog level parameter | Pearson | Spearman |
+| --- | --- | --- |
+| consistent | 0.38 | 0.40 |
+| likeable | 0.42 | 0.45 |
+| diverse | 0.23 | 0.25 |
+| informative | 0.30 | 0.34 |
+| coherent | 0.32 | 0.37 |
+| human (overall) | 0.45 | 0.48 |
+| understanding | 0.36 | 0.42 |
+| flexible | 0.33 | 0.40 |
+| topic depth | 0.34 | 0.35 |
+| error recovery | 0.37 | 0.40 |
+| inquisitive | 0.20 | 0.27 |
+
+Table 5: Correlation between ConTurE turn-level and dialog-level scores.
+
+# A.6 Example outputs of turn quality estimation
+
+We pick a mixture of examples to show where our model works and fails. For each example we show the ground truth turn quality left by our human annotators and the score from each model.
+
+# Example 1
+
+Dialog History:
+
+User: Who is Donald Trump?
+
+System: that's a funny question, well it isn't really surprising that he isn't one of the three wealthiest presidents in american history, i know they measure by inflation but still.
+
+Human Annotation (0-2) = 2
+
+Classification(3P) = 0
+
+Reg(Sentiment) = 0.14
+
+Reg(Sentiment + User Stop) = 1.24
+
+# Example 2
+
+Dialog History:
+
+User: Covid19 is a virus that's spreading all over the world
+
+System: i guess it is his book then, do you like to read?.
+
+Human Annotation (0-2) = 0
+
+Classification(3P) = 0
+
+Reg(Sentiment) = -0.12
+
+Reg(Sentiment + User Stop) = 1.04
+
+Figure 4: Examples from both our Classification and Reg (Regression) models on the ConTurE dataset. In example 1 the classification model incorrectly predicts the system response as a bad response. For the Reg models there is no threshold set on the output and scores must be looked at in a relative manner. We see for the Reg models that the output score from the model in example 2 is lower than that in example 1, showing the model is capturing that the system response in example 2 was of worse quality than the system response in example 1.
+
+# Example 1
+
+Dialog History:
+
+...
+
+System: i see. i find sports so exciting.
+
+User: i don't
+
+System: well, lets switch the subject. i am fascinated with the world. are you interested in travel?
+
+Human Annotation $(0,1) = 1$
+
+Classification(3P) = 1
+
+Reg(Sentiment) = 0.004
+
+Reg(Sentiment + User Stop) = 0.96
+
+# Example 2
+
+Dialog History:
+
+System: i like dogs as well. i think they are much better than dogs. so, what's one of your favorite places you've been?
+
+User: i don't like dogs
+
+System: i am not a big fan of dogs either. i would rather stay in the united states. so, what did you eat for dinner last night?
+
+Human Annotation $(0,1) = 0$
+
+Classification(3P) = 0
+
+Reg(Sentiment) = -0.15
+
+Reg(Sentiment + User Stop) = 0.67
+
+# Example 3
+
+Dialog History:
+
+...
+
+System: i recently read a really nice uplifting news article. want to hear about it?
+
+User: yes
+
+System: sure. here's some positive, uplifting news i found. every day, we find our balance. want to hear more?
+
+User: yes
+
+System: that's great. what do you think about it?
+
+Human Annotation $(0,1) = 0$
+
+Classification(3P) = 0
+
+Reg(Sentiment) = 0.20
+
+Reg(Sentiment + User Stop) = 1.20
+
+Figure 5: Examples from both our Classification and Reg (Regression) models on a spoken interaction. Due to privacy concerns, this example interaction is from an internal author. The classification model is able to predict all examples correctly, which makes sense as it was trained on similar annotations. Both Reg models output a lower score for the system response in example 2 than in example 1, showing they are both able to capture that the system response in example 2 is of worse quality. However in example 3 the Reg models incorrectly give a high score to the system response that ignores the user's request.
+
+# A.7 FED versus our turn quality estimation
+
+We show examples from the ConTurE dataset to compare the FED model against our Reg (Sentiment) model, which had the best performance on ConTurE. The maximum and minimum scores from FED are 4.39 and 1.74, respectively. A higher output score from the FED model means the response is more appropriate. Below are examples where
+
+the FED model incorrectly predicts the system response is appropriate even though it was marked as inappropriate by the human annotator.
+
+# Example 1
+
+Dialog History:
+
+System: probably!
+
+User: you're not a good chat bot
+
+System: i haven't been there myself
+
+Human Annotation (0-2) = 0
+
+FED = 4.13
+
+Reg(Sentiment) = -0.01
+
+# Example 2
+
+Dialog History:
+
+...
+
+System: do you like horses?
+
+User: No
+
+System: no, not really. you?
+
+Human Annotation (0-2) = 0
+
+FED = 4.39
+
+Reg(Sentiment) = 0.09
+
+# Example 3
+
+Dialog History:
+
+System: hello! how are you? i just found out that judge judy makes $ 123,000 per episode!
+
+User: oh wow! that's a lot! how much did oprah make?
+
+System: she gets paid $ 123
+
+Human Annotation (0-2) = 0
+
+FED = 4.18
+
+Reg(Sentiment) = 0.17
+
+# Example 4
+
+Dialog History:
+
+System: I love gold! Especially when I buy trucks with it!
+User: hell yeah brother
+
+System: Yeah. I also like gold, money, power metal, and trucks.
+
+Human Annotation (0-2) = 0
+
+FED = 4.03
+
+Reg(Sentiment) = 0.29
+
+Figure 6: In both examples 1 and 2, the last system response ignores the previous user utterance and is therefore marked as inappropriate. The FED model assigns a high score to these system responses. In example 3, both the FED and Reg(Sentiment) models incorrectly give a high score to the system response, which is factually incorrect. In example 4, both models incorrectly give a high score to the system response, which shows repetition.
+
+
+(a) Regression (Sentiment)
+
+
+(b) Regression (Sentiment + User Stop)
+Figure 7: We plot the model output scores for the Regression (Sentiment) and Regression (Sentiment + User Stop) models for each reference label i.e. Class 0 and Class 1. We see that for Regression (Sentiment + User Stop) in Figure 7b the separation between model outputs for Class 0 and Class 1 become more pronounced as compared to Regression (Sentiment) in Figure 7a.
+
+
+Figure 8: We plot the model probability outputs from the Classification(3P) model for each reference label i.e. Class 0 and Class 1. We use a threshold of 0.5 such that any score above or equal to that is considered a good response (1) and vice versa. We see that for the reference label Class 1 most probability scores are below the threshold.
\ No newline at end of file
diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/images.zip b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1c6bd44599c491e214cd64da18ee62ee9c1fc8e2
--- /dev/null
+++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5d780fff426e1c79bb0f73bace415c2d1b1b9e76a184c5361f28af9acac7978
+size 320129
diff --git a/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/layout.json b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..30fafb6901622eb064628e4db1e93fe0650f8d04
--- /dev/null
+++ b/whatiswrongwithyouleveragingusersentimentforautomaticdialogevaluation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:826f5efa711480dddc7b2cf3ae2c99307ed6e5a7a103a9ea2b5a474fa30d086e
+size 373966
diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_content_list.json b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7527bda93ee3852decc0d53d4bf284551fff05fc
--- /dev/null
+++ b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91f848dad30cd4cbf418e3de040488048e312b1c862425499c626848b735268b
+size 107535
diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_model.json b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8c55fe5b93f909019703ad8b9f72107a959c9d21
--- /dev/null
+++ b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37267828779f30029bc572cd3bf76eb4c28b677cd4ff0177b94cdc15e20d429d
+size 128155
diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_origin.pdf b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..08be2eb8f524626ad474876c5399aafa9599482f
--- /dev/null
+++ b/whattolearnandhowtowardeffectivelearningfromrationales/ade13865-242c-4e8b-88fd-b3ce5fc00cc7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a92ee1b62d4620ca14f40a959c5de13c6851536b90c3b14bbb0c789c791a307a
+size 1104866
diff --git a/whattolearnandhowtowardeffectivelearningfromrationales/full.md b/whattolearnandhowtowardeffectivelearningfromrationales/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..45d05ee5c91861b823b825d5401760b47177d7a4
--- /dev/null
+++ b/whattolearnandhowtowardeffectivelearningfromrationales/full.md
@@ -0,0 +1,468 @@
+# What to Learn, and How: Toward Effective Learning from Rationales
+
+Samuel Carton
+
+University of Chicago
+
+carton@uchicago.edu
+
+Surya Kanoria
+
+University of Colorado Boulder
+
+surya.kanoria@colorado.edu
+
+# Chenhao Tan
+
+University of Chicago
+
+chenhao@uchicago.edu
+
+# Abstract
+
+Learning from rationales seeks to augment model prediction accuracy using human-annotated rationales (i.e. subsets of input tokens) that justify their chosen labels, often in the form of intermediate or multitask supervision. While intuitive, this idea has proven elusive in practice. We make two observations about human rationales via empirical analyses: 1) maximizing rationale supervision accuracy is not necessarily the optimal objective for improving model accuracy; 2) human rationales vary in whether they provide sufficient information for the model to exploit for prediction. Building on these insights, we propose several novel loss functions and learning strategies, and evaluate their effectiveness on three datasets with human rationales. Our results demonstrate consistent improvements over baselines in both label and rationale accuracy, including a $3\%$ accuracy improvement on MultiRC. Our work highlights the importance of understanding properties of human explanations and exploiting them accordingly in model training.
+
+# 1 Introduction
+
+In the past several years, explainability has become a prominent issue in machine learning, addressing concerns about the safety and ethics of using large, opaque models for decision-making. As interest has grown in explanations for understanding model behavior, so has interest grown in soliciting gold-standard explanations from human annotators and using them to inject useful inductive biases into models (Hase and Bansal, 2021). Many such explanation datasets have become available recently (Wiegreffe and Marasović, 2021).
+
+A common format for explanations in NLP is the rationale, a subset of input tokens that are relevant to the decision. A popular architecture for generating such explanations is the rationale model,
+
+# (A) Unsupervised rationale
+
+[CLS] susan wanted to have a birthday party. she called all of her friends . she has five friends . her mom said that susan can invite them all to the party . her first friend could not go to the party because she was sick . her second friend was going out of town . her third friend was not so sure if her parents would let her . the fourth friend said maybe . the fifth friend could go to the party for sure . susan was a little sad , on the day of the party , all five friends showed up . each friend had a present for susan . susan was happy and sent each friend a thank you card the next week . [SEP] how many people did susan call ? | 5 [SEP]
+
+# Prediction: False
+
+# (B) Human rationale
+
+[CLS] susan wanted to have a birthday party. she called all of her friends . she has five friends , her mom said that susan can invite them all to the party . her first friend could not go to the party because she was sick . her second friend was going out of town . her third friend was not so sure if her parents would let her . the fourth friend said maybe . the fifth friend could go to the party for sure . susan was a little sad . on the day of the party , all five friends showed up . each friend had a present for susan . susan was happy and sent each friend a thank you card the next week . [SEP] how many people did susan call ? | 5 [SEP]
+
+# Prediction: True
+
+Table 1: An example of unsupervised versus human-provided rationale in MultiRC. The unsupervised model struggles to localize its attention and makes an incorrect prediction. The same model makes a correct prediction by only looking at the human rationale.
+
+an explain-then-predict architecture which first extracts a rationale from the input and then makes a prediction from the rationale-masked text (that is, only the tokens included in the rationale) (Lei et al., 2016; DeYoung et al., 2019). Without external supervision on this rationale, we typically pursue parsimony via a sparsity objective. Table 1A shows an example unsupervised rationale.
+
+With the benefit of a human-annotated rationale for the true label, we can begin to understand model mistakes in terms of reliance on inappropriate features (and correct them). In the example above, the unsupervised rationale suggests that the model's
+
+mistake is due to missing key information about how many friends Susan has (i.e., "five"). Forcing the model to see these key tokens by only using the human rationale as the input fixes this mistake (Table 1B). Prior work has shown that this is not a fluke. For some datasets, human rationales consistently improve model accuracy over baseline when used as an input mask, by orienting model attention toward informative tokens and away from confounding ones (Carton et al., 2020).
+
+Knowing that human rationales contain useful predictive signal, the key question becomes: can we improve model prediction accuracy by incorporating human rationales into training?
+
+Numerous approaches to using human rationales in training have been tried, including: regularizing the parameters of a (linear) model (Zaidan et al., 2007); regularizing model output gradients (Ross et al., 2017); regularizing internal transformer attention weights (Jayaram and Allaway, 2021); and direct supervision on a rationale model (DeYoung et al., 2019), which serves as our baseline approach in this paper. These approaches have generally failed to significantly improve model prediction accuracy (Hase and Bansal, 2021).
+
+A quality these prior approaches have in common is treating human rationales as internally and collectively uniform in predictive utility. That is, any token included in the human rationale is treated as equally important to include in the input representation; vice versa for tokens excluded. Furthermore, all human rationales are weighted equally.
+
+The reality, we demonstrate empirically via ablation studies in §4, is that the predictive utility of human rationales is distributed unevenly between tokens in a rationale, and unevenly between rationales in a dataset. Based on this analysis, we suggest that learning objectives which weight every token equally (accuracy in the case of direct supervision), and every rationale equally, are not optimal for improving downstream model accuracy.
+
+We operationalize these hypotheses in four distinct modifications to the baseline rationale model architecture. Three of these modify the naive token-wise accuracy supervision objective, and the fourth implements "selective supervision", ignoring unhelpful human rationales in training.
+
+Evaluating on three datasets, our proposed methods produce varying levels of improvement over both a baseline BERT model and a baseline BERT-to-BERT supervised rationale model, ranging from
+
+substantial for MultiRC $(3\%)$ to marginal for E-SNLI $(0.4\%)$. Additionally, our methods also improve rationale prediction performance.
+
+Taken together, our results demonstrate the importance of considering the variance of predictive utility both between and within human rationales as a source of additional training signal. Our proposed modifications help pave the way toward truly effective and general learning from rationales.
+
+# 2 Related Work
+
+# 2.1 Rationalization
+
+The extractor-predictor rationale model, proposed by Lei et al. (2016) and described in more detail in §5, is an approach to feature attribution, which is one among many families of explanation methods (see Vilone and Longo (2020) for a recent survey).
+
+Recent work has extended the original architecture in various ways, including replacing the use of reinforcement learning with differentiable binary variables (Bastings et al., 2020; DeYoung et al., 2019), alternatives to the original sparsity objective (Paranjape et al., 2020; Antognini and Faltings, 2021), and additional modules which change the interaction dynamics between the extractor and predictor (Carton et al., 2018; Yu et al., 2019; Chang et al., 2020). Pipeline models (Lehman et al., 2019) are similar, but train the two modules separately rather than end-to-end.
+
+Rationale models are a powerful approach to NLP explanations because specific objectives can be placed on the properties of the rationale, but they have some downsides. First, they are unstable, with the extractor often collapsing to all-0 or all-1 output (DeYoung et al., 2019; Yu et al., 2019). We introduce an engineering trick in §5 that appears to lessen this risk. Also, with end-to-end training comes the risk of information leakage between the extractor and predictor (Jethani et al., 2021; Hase et al., 2020; Yu et al., 2021). This idea of leakage plays a part in how we estimate explanation predictive utility in §4.
+
+# 2.2 Learning from Explanations
+
+Wiegreffe and Marasović (2021) present a review of explainable NLP datasets, a number of which have been incorporated into the ERASER collection and benchmark (DeYoung et al., 2019).
+
+Early work in learning from human explanations includes Zaidan et al. (2007) and Druck et al. (2009), and a line of work termed "explanatory debugging"
+
+(Kulesza et al., 2015; Lertvittayakumjorn and Toni, 2021). More recent work spans a variety of approaches, categorized by Hase and Bansal (2021) into regularization (e.g., Ross et al. (2017)), data augmentation (e.g., Hancock et al. (2018)), and supervision over intermediate outputs (e.g., DeYoung et al. (2019); Jayaram and Allaway (2021)).
+
+Significant improvements to model accuracy as a result of explanation learning have proven elusive. Studies occasionally claim such improvement, such as Rieger et al. (2020), which observes general improvements on a medical vision task. More commonly, their claims pertain to secondary objectives such as explanation quality (e.g., Plumb et al. (2020)), robustness (e.g., Ross et al. (2017), Srivastava et al. (2020)), or few-shot learning (e.g., Yao et al. (2021)). Hase and Bansal (2021) gives an overview of the problem and discusses circumstances under which learning from explanations is liable to work. Our paper contributes to this discussion by considering the variance of training signal quality both within and between human rationales, and how to exploit these variances.
+
+# 3 Data
+
+We consider three datasets in this work. All three are document-query text comprehension tasks, where the task is to determine whether the query is true or false given the document. We use the train, development, test splits offered by DeYoung et al. (2019). Table 2 shows the basic statistics of each dataset based on the training set.
+
+- MultiRC (Khashabi et al., 2018). A reading comprehension dataset of 32,091 document-question-answer triplets that are true or false. Rationales consist of 2-4 sentences from a document that are required to answer the given question.
+- FEVER (Thorne et al., 2018). A fact verification dataset of 76,051 snippets of Wikipedia articles paired with claims that they support or refute. Rationales consist of a single contiguous snippet, so the basic unit of rationale is sentence.
+- E-SNLI (Camburu et al., 2018). A textual entailment dataset of 568,939 short snippets paired with claims that each snippet refutes, supports, or is neutral toward. Input texts are much shorter than MultiRC and FEVER, and rationales are at the token level.
+
+| Dataset | Text length | Rationale length | Rationale granularity |
+| --- | --- | --- | --- |
+| MultiRC | 336.0 | 52.0 | sentence |
+| FEVER | 355.9 | 47.0 | sentence |
+| E-SNLI | 23.5 | 6.1 | token |
+
+Table 2: Basic statistics of the datasets.
+
+# 4 Analysis
+
+To understand properties of human rationales for the purpose of learning from rationales, we analyze the effect of human rationales when they are used as inputs to a trained model.
+
+# 4.1 Human Rationales have Predictive Utility
+
+A basic question about the viability of learning from rationales is whether human rationales bear the potential for improving model performance. That is, do human explanations successfully reveal useful tokens while occluding confounding tokens, such that a model evaluated only on the revealed tokens is able to get improved performance relative to the full input? We refer to such rationale-redacted inputs as rationalized inputs.
+
+We define sufficiency-accuracy (SA) as how accurate the model is across a corpus of rationalized input. This is an aggregate measure, similar to sufficiency as defined in DeYoung et al. (2019) but focused on absolute performance rather than similarity to baseline model output. We refer to the sufficiency-accuracy of the human rationales as human sufficiency-accuracy (HSA).
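+
+A minimal sketch of how (human) sufficiency-accuracy could be computed with the [MASK]-substitution variant discussed below, assuming a HuggingFace sequence classifier and a dataset of (input_ids, rationale_mask, label) triples; the removal variant would instead drop the non-rationale positions.
+
+```python
+# Sketch: mask every non-rationale token and measure accuracy on the
+# rationalized inputs (sufficiency-accuracy).
+import torch
+
+def rationalize(input_ids, rationale_mask, mask_token_id):
+    """Replace tokens outside the rationale with the [MASK] token id."""
+    return torch.where(rationale_mask.bool(), input_ids,
+                       torch.full_like(input_ids, mask_token_id))
+
+def sufficiency_accuracy(model, dataset, mask_token_id):
+    correct = 0
+    for input_ids, rationale_mask, label in dataset:
+        masked = rationalize(input_ids, rationale_mask, mask_token_id)
+        with torch.no_grad():
+            pred = model(input_ids=masked.unsqueeze(0)).logits.argmax(-1).item()
+        correct += int(pred == label)
+    return correct / len(dataset)
+```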
+
+Estimating sufficiency-accuracy is problematic. The natural way to probe whether the tokens in a rationale are sufficient for an accurate prediction is to remove the non-included tokens from the input, run the model on just the included tokens, and assess its accuracy. But a version of the input where a majority of tokens are removed or masked (by a [MASK] special token in the case of BERT) is out-of-distribution relative to the training data, which has no removal or masking. This difference may lead to unpredictable output from the model when tested on masked input. This masking-is-OOD problem has not received much discussion in the literature, though Jacovi and Goldberg (2021) propose to mitigate it with random masking during model training. The effect of this problem will be to underestimate the sufficiency-accuracy of rationales tested against an un-adapted model.
+
+The opposite problem stems from overfitting rather than OOD issues: label leakage. A human rationale may contain signal about the true label
+
+
+Figure 1: Baseline performance vs. human sufficiency-accuracy for rationalized inputs with token removal and [MASK] token substitution: (a) fine-tuned on full input (unadapted); (b) fine-tuned on both full and human-rationalized input (adapted). As rationalized inputs are different from the full text inputs that the original training set includes, we build a calibrated model where the model is trained on both full text inputs and rationalized inputs.
+
+Figure 2: Sufficiency-accuracy of human rationales on the baseline BERT model with increasing levels of corruption via swaps, drops and additions: (a) all samples; (b) human sufficiency-accuracy $= 1$; (c) human sufficiency-accuracy $= 0$. Model performance decreases quickly when we drop rationale tokens, but stays high as we add non-rationale tokens. These effects are moderated by HSA.
+
+that goes beyond the semantics of the tokens included in the rationale, and a model trained on human-rationalized input may learn to pick up on these spurious signals. A known example is in E-SNLI, where annotators had different explanation instructions based on their chosen label. This issue is discussed in several recent papers (Yu et al., 2021; Jethani et al., 2021; Hase et al., 2020), albeit mostly concerning model-generated rather than human explanations. The effect of this problem will be to overestimate the sufficiency-accuracy of rationales tested against an adapted model.
+
+Fig. 1 shows sufficiency-accuracy results for human rationales on both unadapted and adapted models. We expand on the analysis presented by Carton et al. (2020) by showing results for both masking-via-removal and masking-via-[MASK]-token-substitution.
+
+Fig. 1a shows that token removal suffers less from the masking-is-OOD problem on an unadapted model than [MASK] token substitution. [MASK] token substitution results in lower accuracy across the board, while removal improves baseline accuracy for MultiRC, matches it for FEVER, and lowers it for E-SNLI.
+
+With adaptation (Fig. 1b), token removal and [MASK] token substitution have near-identical effects, improving accuracy by a large margin for MultiRC and E-SNLI, and a small margin for FEVER. The near- $100\%$ sufficiency-accuracy for E-SNLI is probably due to label leakage.
+
+If an unadapted model is liable to underestimate sufficiency-accuracy, and an adapted model to overestimate it, then we suggest that the potential benefit of learning from rationales lies somewhere between the two. Under this hypothesis, Fig. 1 suggests that MultiRC has a large potential benefit, FEVER a small one, and E-SNLI an unclear benefit depending on how much of the predictive utility of E-SNLI rationales is due to label leakage. The results in §6 ultimately bear out these expectations.
+
+# 4.2 Importance of Rationale Accuracy
+
+We focus on MultiRC, where evaluating a non-rationale-adapted fine-tuned BERT model on human-rationalized data results in a sufficiency-accuracy of $74\%$ , a significant improvement over the normal test accuracy of $68\%$ . But how robust is this improvement to rationale prediction error? We examine how the sufficiency-accuracy of human rationales changes as they are corrupted by random addition, dropping, and swapping of tokens.
+
+In this analysis, an $N\%$ drop removes $N\%$ of tokens from each rationale in the dataset, reducing recall to $100 - N$ . An $N\%$ addition adds tokens numbering $N\%$ the size of each rationale, from the set of non-rationale tokens, reducing precision to $\frac{100}{100 + N}$ . An $N\%$ swap performs both operations, swapping $N\%$ of rationale tokens for the same number of non-rationale tokens.
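+
+The corruption procedure can be sketched as an operation on the binary rationale mask; the percentages and example mask below are illustrative.
+
+```python
+# Sketch: drop, add, or swap N% of rationale tokens in a binary rationale mask.
+import numpy as np
+
+def corrupt(mask, pct, mode, rng=np.random.default_rng(0)):
+    mask = mask.copy()
+    inside, outside = np.flatnonzero(mask == 1), np.flatnonzero(mask == 0)
+    n = int(round(pct / 100 * inside.size))
+    if mode in ("drop", "swap"):   # remove n rationale tokens (lowers recall)
+        mask[rng.choice(inside, size=min(n, inside.size), replace=False)] = 0
+    if mode in ("add", "swap"):    # add n non-rationale tokens (lowers precision)
+        mask[rng.choice(outside, size=min(n, outside.size), replace=False)] = 1
+    return mask
+
+human = np.array([0, 1, 1, 0, 1, 0, 0, 1])
+print(corrupt(human, 50, "swap"))
+```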
+
+The "dropped" curve in Fig. 2a shows that human rationales afford improved accuracy over the
+
+baseline until roughly $40\%$ of tokens have been dropped from them, suggesting that a minimum of $60\%$ recall is needed to derive an advantage from human rationales over the full input. Per the "added" curve, adding the same number of irrelevant tokens to the rationale has a much less severe impact on accuracy, suggesting that errors of omission are significantly worse than errors of inclusion for learning from rationales.
+
+Fig. 2b and 2c respectively show the effect of this perturbation on high- and low-sufficiency-accuracy human rationales, which constitute $74\%$ and $26\%$ of rationales respectively for this model. High-SA rationales follow a similar trend to the whole population, but the recall requirement for exceeding model accuracy with the full input is lower than in Fig. 2a (the "dropped" curve meets the blue line at $50\%$ ). In comparison, low-SA rationales demonstrate interesting properties. These rationales actually have a sabotaging effect in a quarter of cases: the model would have an accuracy of $27\%$ with the full input, which is lowered to $0\%$ by the presence of these rationales. Also, addition and dropping have a similar effect in mitigating this sabotage. Similar results hold on FEVER and E-SNLI, except that the apparent required recall is much higher $(>90\%)$ for both (see the appendix), indicating challenges for learning from rationales on these datasets.
+
+In summary, our analyses suggest two general directions for learning from rationales: 1) moving away from naive accuracy (toward recall, for example) as a rationale supervision objective, and 2) focusing on useful rationales over harmful ones.
+
+# 5 Methods
+
+We propose architecture and training changes based on these insights. Our code is available at https://github.com/ChicagoHAI/learning-from-rationales.
+
+# 5.1 Background and Baseline Models
+
+Our training data include input tokens, their corresponding rationales, and labels. Formally, an instance is denoted as $(x,\alpha ,y)$ , where $x = (x_{1},\ldots ,x_{L})$ is a text sequence of length $L$ and $\alpha$ is a human rationale of the same length. $\alpha_{i} = 1$ indicates that token $x_{i}$ is part of the rationale (and relevant for the prediction), and $\alpha_{i} = 0$ otherwise.
+
+We use HuggingFace's BERT-base-uncased (Devlin et al., 2018; Wolf et al., 2020) as the basis for our experiments and analysis. Used in the standard way, BERT ignores $\alpha$ and is fine-tuned on tuples of $(x,y)$ . This is our simplest baseline.
+
+Figure 3: Illustration of our multi-task framework. Our main innovation lies in how we define rationale loss for the supervised case and the masking function $m$ .
+
+Rationale model. We use the rationale model of Lei et al. (2016) for both supervised and unsupervised rationale generation, in its updated BERT-to-BERT form (DeYoung et al., 2019). This model consists of two BERT modules: a rationale extractor $g$ that generates a binary attention mask $\hat{\alpha}$ as the rationale, and a predictor $f$ which makes a prediction using the rationalized input via a masking function $m$ on $x$ and $\hat{\alpha}$ (Fig. 3):
+
+$$
+\begin{aligned} g(\boldsymbol{x}) &\rightarrow \hat{\boldsymbol{\alpha}}, \\ f(m(\boldsymbol{x}, \hat{\boldsymbol{\alpha}})) &\rightarrow \hat{y}. \end{aligned}
+$$
+
+The two components are trained in tandem. In the unsupervised scenario, the joint objective function consists of a prediction loss term and a rationale sparsity term, encouraging the model to retain only those tokens in $x$ that are necessary for accurate prediction:
+
+$$
+\mathcal{L}_{u} = \mathcal{L}_{p}(y, \hat{y}) + \lambda_{sp} \lVert \hat{\boldsymbol{\alpha}} \rVert,
+$$
+
+where $\mathcal{L}_p$ is typically cross entropy.
+
+In the supervised scenario, given a human rationale $\alpha$ , we replace the sparsity objective with a rationale supervision objective:
+
+$$
+\mathcal{L}_{su} = \mathcal{L}_{p}(y, \hat{y}) + \frac{\lambda_{su}}{L} \sum_{i=1}^{L} \mathcal{L}_{p}(\boldsymbol{\alpha}_{i}, \hat{\boldsymbol{\alpha}}_{i}),
+$$
+
+where $\lambda_{su}$ is a hyperparameter that controls the weight of rationale loss compared to label loss.
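+
+As a sketch, the two objectives can be written as follows; the tensor names and default weights are illustrative assumptions rather than the exact variables in our implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def unsupervised_loss(label_logits, labels, rationale_probs, lambda_sp=0.25):
+    # prediction loss plus a sparsity penalty on the predicted rationale values
+    pred_loss = F.cross_entropy(label_logits, labels)
+    sparsity = rationale_probs.abs().mean()
+    return pred_loss + lambda_sp * sparsity
+
+def supervised_loss(label_logits, labels, rationale_probs, human_rationale, lambda_su=1.0):
+    # prediction loss plus tokenwise binary cross-entropy against the human rationale
+    pred_loss = F.cross_entropy(label_logits, labels)
+    rat_loss = F.binary_cross_entropy(rationale_probs, human_rationale.float())
+    return pred_loss + lambda_su * rat_loss
+```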
+
+Each of these scenarios represents a baseline for our experiment. We refer to the unsupervised version as unsupervised rationale model, and the supervised version as supervised rationale model.
+
+Implementation details. The original Lei et al. (2016) model generates binary rationales by Bernoulli sampling from continuous probability values produced by the generator, and uses the REINFORCE algorithm (Williams, 1992) to propagate approximate gradients through this non-differentiable operation.
+
+We instead use Gumbel Softmax (Jang et al., 2017) to generate differentiable approximate binary rationale masks. In this framework, the generator produces logits $z_{i}$ , to which random noise $G \sim \mathrm{Gumbel}(0,1)$ is added before applying a softmax to produce class probabilities $c_{i}$ ; this approximates a discrete distribution parameterized by $e^{z_i}$ . We then use the positive class probability $c_{i}^{1}$ as the rationale value $\hat{\alpha}_{i}$ :
+
+$$
+\boldsymbol{c}_{i} = \mathrm{softmax}(\boldsymbol{z}_{i} + \boldsymbol{G}), \quad \boldsymbol{G} \sim \mathrm{Gumbel}(0, 1); \qquad \hat{\alpha}_{i} = \boldsymbol{c}_{i}^{1}.
+$$
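+
+A small sketch of this step using PyTorch's built-in `gumbel_softmax`; the tensor shapes and temperature value are assumptions for illustration.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def predict_rationale(token_logits, tau=1.0):
+    """token_logits: (batch, seq_len, 2) per-token logits from the extractor g.
+
+    Returns soft, differentiable rationale values in [0, 1], one per token.
+    """
+    # Adds Gumbel(0, 1) noise to the logits and applies a softmax,
+    # approximating a discrete keep/drop decision per token.
+    probs = F.gumbel_softmax(token_logits, tau=tau, hard=False, dim=-1)
+    return probs[..., 1]  # positive-class probability used as alpha-hat
+
+# usage: alpha_hat = predict_rationale(extractor_logits)
+```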
+
+Generating stable rationales. We find it helpful, as an engineering trick, to pre-train the predictor of this model on the full input before training the predictor and extractor together on the joint objective. This step appears to mitigate some of the issues this model has with rationale collapse, noted for example by DeYoung et al. (2019).
+
+Given $\hat{\alpha}_i$ , we mask non-rationale tokens by softly substituting the [MASK] token embedding for their vector representations, weighted by $\hat{\alpha}_i$ , analogously to what is done during the masked-LM pretraining of BERT:
+
+$$
+m_{s}(\boldsymbol{x}_{i}, \hat{\boldsymbol{\alpha}}_{i}) = \hat{\boldsymbol{\alpha}}_{i} \cdot \boldsymbol{e}_{i} + (1 - \hat{\boldsymbol{\alpha}}_{i}) \cdot \boldsymbol{e}_{[\mathrm{MASK}]},
+$$
+
+where $e_i$ represents the embedding associated with $x_i$ and $e_{[\mathrm{MASK}]}$ is the embedding for the [MASK] token. We never mask special tokens [CLS] or [SEP], and we set $\hat{\alpha}_i = 1$ for the query in MultiRC and FEVER as well because the query is always part of human rationales in these two datasets.
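+
+A sketch of this soft masking at the embedding layer; `embeddings`, `alpha_hat`, and `mask_embedding` are assumed names for the quantities defined above.
+
+```python
+import torch
+
+def mask_with_mask_token(embeddings, alpha_hat, mask_embedding):
+    """Interpolate each token embedding toward the [MASK] embedding.
+
+    embeddings:     (batch, seq_len, hidden) input token embeddings e_i
+    alpha_hat:      (batch, seq_len) soft rationale values in [0, 1]
+    mask_embedding: (hidden,) embedding of the [MASK] token
+    """
+    a = alpha_hat.unsqueeze(-1)                      # broadcast over hidden dim
+    return a * embeddings + (1.0 - a) * mask_embedding
+
+# Special tokens and (for MultiRC/FEVER) the query can be kept visible by
+# forcing alpha_hat to 1 at those positions before calling this function.
+```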
+
+# 5.2 Learning from Human Rationales
+
+Inspired by the analysis in §4, we propose four strategies for improving the efficacy of learning from rationales: 1) tuning class weights for rationale supervision; 2) enforcing sentence-level rationalization; 3) using non-occluding "importance embeddings"; and 4) selectively supervising only rationales with high sufficiency-accuracy. The first three are designed to loosen the supervision's dependence on flat tokenwise accuracy, while the last tries to operationalize our observations about helpful versus non-helpful rationales.
+
+Class weights. Rationales may become more effective enablers of model prediction accuracy at different balances of precision and recall. We can adjust this balance simply by applying different weights to the positive and negative classes in rationale supervision:
+
+$$
+\mathcal{L}_{w} = \mathcal{L}_{p}(y, \hat{y}) + \frac{1}{L} \sum_{i=1}^{L} (1 + \lambda_{su}^{1} \alpha_{i}) \mathcal{L}_{p}(\alpha_{i}, \hat{\alpha}_{i}),
+$$
+
+where $\lambda_{su}^{1}$ controls the relative weight of rationale vs. non-rationale tokens. In particular, as discussed in §4, we find that increased recall is associated with increased model accuracy. Thus, we explore several values for $\lambda_{su}^{1}$ in our experiment to encourage higher recall.
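+
+A sketch of the class-weighted token loss; the variable names and the default value of $\lambda_{su}^{1}$ are illustrative.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def weighted_rationale_loss(rationale_probs, human_rationale, lambda_su1=4.0):
+    """Weight rationale (positive) tokens more heavily than non-rationale tokens."""
+    tokenwise = F.binary_cross_entropy(
+        rationale_probs, human_rationale.float(), reduction="none")
+    weights = 1.0 + lambda_su1 * human_rationale.float()   # (1 + lambda * alpha_i)
+    return (weights * tokenwise).mean()                    # encourages higher recall
+```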
+
+Sentence-level rationalization. Another divergence from strict token-wise accuracy is to rationalize at the sentence rather than the token level. Given a function $\mathrm{sent}$ mapping a token $x_{i}$ to its corresponding sentence $s$ consisting of tokens $\{\ldots, x_i, \ldots\}$ , we average the token-level logits $z_{i}$ across each sentence to produce a binary mask at the sentence level, and then propagate that mask value to all tokens of the sentence:
+
+$$
+\hat{\boldsymbol{\alpha}}_{i} = \hat{\boldsymbol{\alpha}}_{\mathrm{sent}(i)}^{s},
+$$
+
+where $z^s = \frac{1}{|\{i|sent(i) = s\}|}\sum_{\{i|sent(i) = s\}}z_i$ is used to generate $\hat{\alpha}_{sent(i)}^{s}$ .
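+
+A sketch of the sentence-level pooling step; `sent_ids` is an assumed per-token sentence-index tensor, and the pooled logits would then pass through the same Gumbel-Softmax step as before.
+
+```python
+import torch
+
+def sentence_level_logits(token_logits, sent_ids):
+    """Average per-token rationale logits within each sentence.
+
+    token_logits: (seq_len,) scalar rationale logits z_i
+    sent_ids:     (seq_len,) long tensor mapping each token to its sentence index
+    Returns per-token logits where every token carries its sentence's average.
+    """
+    num_sents = int(sent_ids.max().item()) + 1
+    sums = torch.zeros(num_sents).index_add_(0, sent_ids, token_logits)
+    counts = torch.zeros(num_sents).index_add_(0, sent_ids, torch.ones_like(token_logits))
+    sent_avg = sums / counts.clamp(min=1)
+    return sent_avg[sent_ids]   # broadcast each sentence value back to its tokens
+
+# usage: z = sentence_level_logits(torch.tensor([0.2, 1.3, -0.5, 0.8]),
+#                                  torch.tensor([0, 0, 1, 1]))
+```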
+
+Importance embeddings. Another way to mitigate the impact of false negatives in predicted rationales is for these negatives to remain visible to the predictor. This variant uses additive embeddings for rationalization rather than occluding masks: a two-element embedding layer $e$ , comprising one embedding for rationale tokens and one for non-rationale tokens, is added to the input vectors according to the predicted rationale. This way, input tokens are tagged as important or unimportant, but the predictor $f$ has the freedom to learn how to engage with these tags for maximum label accuracy, rather than being fully blinded to "unimportant" tokens.
+
+$$
+m_{e}(\boldsymbol{x}_{i}, \hat{\boldsymbol{\alpha}}_{i}) = \boldsymbol{e}_{i} + (1 - \hat{\boldsymbol{\alpha}}_{i}) \cdot \boldsymbol{e}_{\mathrm{non\text{-}rationale}} + \hat{\boldsymbol{\alpha}}_{i} \cdot \boldsymbol{e}_{\mathrm{rationale}}.
+$$
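+
+A sketch of this additive tagging variant; `ImportanceEmbedding` is an illustrative module name, not the class name in our code.
+
+```python
+import torch
+import torch.nn as nn
+
+class ImportanceEmbedding(nn.Module):
+    """Tag tokens as rationale / non-rationale instead of occluding them."""
+
+    def __init__(self, hidden_size):
+        super().__init__()
+        self.tags = nn.Embedding(2, hidden_size)  # 0: non-rationale, 1: rationale
+
+    def forward(self, embeddings, alpha_hat):
+        # embeddings: (batch, seq_len, hidden); alpha_hat: (batch, seq_len) in [0, 1]
+        a = alpha_hat.unsqueeze(-1)
+        tag = (1.0 - a) * self.tags.weight[0] + a * self.tags.weight[1]
+        return embeddings + tag   # the full input stays visible to the predictor
+```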
+
+An important drawback of this approach is that the predictor now has access to the full input instead of only the rationalized input, so these rationales provide only a weak guarantee that important tokens are actually used to make predictions. This method also represents a large distribution shift from full text, so we find it necessary to calibrate the predictor using human rationales, as described in Fig. 1b.
+
+Selective supervision. Our fourth modification attempts to improve rationale prediction performance on high-sufficiency-accuracy rationales by selectively supervising only on human rationales with this property, ignoring those where human rationales do not allow a correct prediction.
+
+| Dataset | Model | Acc. | Rationale F1 | Rationale Prec. | Rationale Rec. | Human Suff. Acc. | Masking | Granularity | Pos. class weight | Selective supervision |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MultiRC | BERT baseline | 68.1 | - | - | - | 73.9 | - | Tokens | - | - |
+| MultiRC | Unsupervised rationale model | 67.2 | 22.2 | 18.5 | 27.9 | 71.2 | [MASK] | Tokens | - | - |
+| MultiRC | Supervised rationale model | 67.0 | 46.5 | 41.5 | 52.9 | 70.8 | [MASK] | Tokens | 1.0 | No |
+| MultiRC | Best overall model | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 | Embeddings | Sentences | 5.0 | No |
+| FEVER | BERT baseline | 90.2 | - | - | - | 89.4 | - | Tokens | - | - |
+| FEVER | Unsupervised rationale model | 88.3 | 22.6 | 20.5 | 25.1 | 88.7 | [MASK] | Tokens | - | - |
+| FEVER | Supervised rationale model | 90.7 | 68.4 | 61.7 | 76.7 | 91.1 | [MASK] | Tokens | 1.0 | No |
+| FEVER | Best overall model | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 | Embeddings | Sentences | 1.0 | No |
+| E-SNLI | BERT baseline | 89.7 | - | - | - | 73.9 | - | Tokens | - | - |
+| E-SNLI | Unsupervised rationale model | 88.9 | 40.6 | 28.2 | 72.6 | 85.0 | [MASK] | Tokens | - | - |
+| E-SNLI | Supervised rationale model | 87.8 | 58.7 | 47.7 | 76.0 | 89.4 | [MASK] | Tokens | 1.0 | No |
+| E-SNLI | Best overall model | 90.1 | 59.6 | 45.5 | 86.2 | 92.3 | Embeddings | Tokens | 3.0 | No |
+
+Table 3: Best-performing model variant compared to baseline models.
+
+
+Specifically, for every training batch, we use the true human rationales $\alpha$ as an input mask for the BERT predictor to get the HSA for each document. HSA then serves as a weight on the human rationale supervision during the main training batch:
+
+$$
+\mathcal{L}_{ss} = \mathcal{L}_{p}(y, \hat{y}) + I\big(y = f(m(\boldsymbol{x}, \boldsymbol{\alpha}))\big) \frac{\lambda_{su}}{L} \sum_{i=1}^{L} \mathcal{L}_{p}(\boldsymbol{\alpha}_{i}, \hat{\boldsymbol{\alpha}}_{i}).
+$$
+
+By weighting supervision this way, we hope to ignore low-quality human rationales during training and focus instead on those that enable good accuracy.
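+
+A sketch of computing the per-example HSA weight inside a training step; `predictor` and `mask_fn` are assumed handles to the predictor module and masking function.
+
+```python
+import torch
+
+def hsa_weight(predictor, mask_fn, embeddings, human_rationale, labels):
+    """Per-example indicator: does the human rationale alone yield a correct label?"""
+    with torch.no_grad():
+        logits = predictor(mask_fn(embeddings, human_rationale.float()))
+        correct = (logits.argmax(dim=-1) == labels).float()   # (batch,)
+    return correct   # multiplies the rationale-supervision term for each example
+```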
+
+# 6 Results
+
+# 6.1 Experiment Setup
+
+Our goal in this experiment is to understand the impact of our four proposed model/training modifications. We do this with a comprehensive scan: we try three positive rationale supervision class weights $\lambda_{su}^{1}$ ( $\{0, 2, 4\}$ ), and toggle sentence-level rationalization, importance embeddings, and selective supervision on and off. In addition, we vary the rationale supervision loss weight $\lambda_{su}$ in $\{0.5, 1, 2\}$ . This resulted in 72 models for MultiRC and FEVER, and 36 models for E-SNLI (for which sentence-level rationalization is not applicable).
+
+The best resultant model is our best overall model. The best model with a positive class weight of 1.0 (i.e., identical class weights for rationale and non-rationale tokens) and no other learning strategy enabled is our baseline supervised rationale model. We additionally train three unsupervised rationale models with sparsity weights 0.15, 0.25, and 0.35, selecting as representative the one which produced the sparsest rationales while maintaining a reasonable level of accuracy (because in this architecture, there is invariably a trade-off between accuracy and sparsity).
+
+To evaluate the performance of our models, we consider both the accuracy of the predicted labels $(\hat{y})$ and the performance of rationale prediction in terms of F1, precision, and recall. We use PyTorch Lightning (Falcon et al., 2019) for training, with a learning rate of 2e-5 and gradient accumulation over 10 batches for all models. Early stopping was based on validation set loss with a patience of 3, evaluated every fifth of an epoch. Training was performed on two 24G NVidia Titan RTX GPUs.
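+
+For reference, a rough reconstruction of this training configuration with current PyTorch Lightning APIs (not the project's original script):
+
+```python
+import pytorch_lightning as pl
+from pytorch_lightning.callbacks import EarlyStopping
+
+early_stop = EarlyStopping(monitor="val_loss", mode="min", patience=3)
+trainer = pl.Trainer(
+    accelerator="gpu",
+    devices=2,
+    accumulate_grad_batches=10,   # gradient accumulation over 10 batches
+    val_check_interval=0.2,       # validate every fifth of an epoch
+    callbacks=[early_stop],
+)
+# trainer.fit(model, datamodule=dm)  # model uses lr=2e-5 in configure_optimizers
+```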
+
+# 6.2 Model Performance
+
+Table 3 compares our best overall model against the baselines, and presents the learning strategies used in the models.
+
+Prediction accuracy. For MultiRC, the best model includes every proposed modification (sentence-level rationalization, importance embeddings, class weights) except for selective supervision, and yields a 3-point improvement from the baseline accuracy of $68.1\%$ to $71.2\%$ . We observe a more modest improvement on FEVER, with the best model using sentence-level rationalization and importance embeddings, and scoring a 1-point improvement from $90.2\%$ to $91.5\%$ . We note, however, that this approaches the accuracy of the model with access to a human rationale oracle $(91.6\%)$ . Finally, we observe a tiny improvement of $0.4\%$ on E-SNLI, though our proposed methods do improve upon the unsupervised and supervised rationale model baselines, both of which fall below the plain BERT baseline.
+
+A McNemar's significance test with Bonferroni correction between the best and baseline models finds that the accuracy improvement is significant for MultiRC and FEVER ( $p = 2\mathrm{e}{-7}$ and $3\mathrm{e}{-6}$ , respectively) and not significant for E-SNLI ( $p = 0.1$ ). The limited improvement in E-SNLI echoes the performance drop in Fig. 1a without adaptation, suggesting that human rationales in this dataset are too idiosyncratic to improve model performance.
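+
+A sketch of this significance test using statsmodels; the contingency-table construction and a Bonferroni factor of 3 (one comparison per dataset) reflect our reading of the setup rather than the exact evaluation script.
+
+```python
+import numpy as np
+from statsmodels.stats.contingency_tables import mcnemar
+
+def compare_models(correct_best, correct_base, num_comparisons=3):
+    """correct_best / correct_base: per-example correctness flags for the two models."""
+    best = np.asarray(correct_best, dtype=bool)
+    base = np.asarray(correct_base, dtype=bool)
+    # 2x2 table of agreement/disagreement between the two models
+    table = [[np.sum(best & base),  np.sum(best & ~base)],
+             [np.sum(~best & base), np.sum(~best & ~base)]]
+    result = mcnemar(table, exact=False, correction=True)
+    return min(result.pvalue * num_comparisons, 1.0)   # Bonferroni-adjusted p-value
+```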
+
+Factor analysis. We use regression analysis to understand the impact of the different modifications on model accuracy.
+
+| Method | MultiRC | FEVER | E-SNLI |
+| --- | --- | --- | --- |
+| Sentences | .015*** | .001 | - |
+| Class weights | .017*** | .007*** | .005 |
+| Importance embeddings | .012*** | .006*** | -.010** |
+| Selective supervision | .004 | -.006*** | -.032*** |
+
+Table 4: Regression coefficients for the effect of each proposed method on overall prediction accuracy.
+
+| Dataset | Sel. Sup. | Acc. | F1 (High-HSA) | F1 (Low-HSA) |
+| --- | --- | --- | --- | --- |
+| MultiRC | No | 71.2 | 59.3 | 57.2 |
+| MultiRC | Yes | 71.0 | 56.2 | 54.1 |
+| FEVER | No | 91.5 | 79.0 | 72.5 |
+| FEVER | Yes | 90.6 | 61.2 | 57.0 |
+| E-SNLI | No | 90.1 | 61.2 | 48.0 |
+| E-SNLI | Yes | 88.8 | 49.0 | 44.9 |
+
+Table 5: Label accuracy and predicted rationale F1 for high- versus low-HSA examples.
+
+Table 4 suggests that rationale class weighting has the highest positive effect on accuracy across datasets. Importance embeddings have a positive effect for MultiRC and FEVER and a negative effect for E-SNLI, while sentence-level rationalization improves only MultiRC.
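+
+A sketch of this regression; we assume a file `runs.csv` with one row per trained configuration for a single dataset, holding its accuracy and 0/1 indicators for each proposed method.
+
+```python
+import pandas as pd
+import statsmodels.formula.api as smf
+
+# one row per trained configuration, one regression per dataset
+runs = pd.read_csv("runs.csv")
+model = smf.ols(
+    "accuracy ~ sentences + class_weights + importance_embeddings + selective_supervision",
+    data=runs,
+).fit()
+print(model.params)    # per-method coefficients analogous to Table 4
+print(model.pvalues)   # significance levels behind the stars in Table 4
+```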
+
+Selective supervision is found to have a nonexistent or negative effect across all three datasets. Table 5 details this result, showing model accuracy and rationale performance for the best model with (yes) vs. without (no) selective supervision. If our method succeeded, F1 for high-HSA examples would increase from the "No" to the "Yes" models and remain flat or decrease for low-HSA examples. Indeed, we observe lower rationale F1 for low-HSA examples, but the rationale F1 also drops substantially for high-HSA examples, possibly because of the reduced available training data.
+
+Rationale performance. Although our modifications are designed to improve label prediction performance, they also improve rationale prediction performance in most cases. The only exception is the reduced precision in E-SNLI compared to the supervised rationale model.
+
+# 6.3 Qualitative Analysis
+
+Table 6 shows three examples, each drawn from a different dataset, to illustrate different outcomes. For each example, we show the human rationale and predicted rationales for both the baseline supervised rationale model and our best overall model. Incorrect predictions are colored red.
+
+Example 6a shows an instance sampled from MultiRC where our best model, with higher recall and sentence-level rationalization, more successfully captures the (sufficient) information present in the human rationale, allowing for a correct prediction where the supervised rationale model fails.
+
+Example 6b presents a contrasting example from the FEVER dataset. The human rationale omits important context, that Legendary Entertainment is a subsidiary of Wanda Group, making it harder to infer that it is not a subsidiary of Warner Bros. Our best model succeeds at capturing this snippet in its rationale, but still predicts the incorrect label, illustrating that a sufficient (for humans) rationale does not always produce a correct label.
+
+Finally, example 6c shows a case where the baseline supervised rationale model succeeds while our best model fails. This is a hard-to-interpret example, mainly a demonstration of the limitations of rationales as an explanatory device for certain kinds of tasks. This raises a question: how relevant are rationales as an explanation or learning mechanism when models like GPT-3 (Brown et al., 2020) are increasingly capable of human-level natural language explanations (Table 7)?
+
+Our position is that, however an explanation is presented, meaning is still localized within text, so rationales can still serve as a useful interface for scrutinizing or controlling model logic, even if they require additional translation to be comprehensible to humans. Works that hybridize the two ideas, such as Zhao and Vydiswaran (2020), may represent a good way of resolving this issue.
+
+# 7 Discussion
+
+The analysis in §4 explores the limits of the potential improvement from learning from rationales. It suggests two insights toward improved learning from rationales: 1) insofar as they boost model accuracy, not all human rationale tokens are equally valuable, with false positives causing less degradation than false negatives; and 2) we could in principle boost label accuracy with good rationale accuracy on useful (high-SA) rationales and low accuracy on useless (low-SA) ones.
+
+We exploit these two insights with four modifications to the baseline architecture. Three of these diverge from flat rationale supervision accuracy: rationale supervision class weighting, sentence-level rationalization, and importance embeddings. The last, selective supervision, pursues utility-discriminative weighting during model training.
+
+| Human rationale | Baseline supervised rationale | Best model |
| --- | --- | --- |
| (A) MultiRC: Best model beats supervised baseline |
| [CLS] there have been many organisms that have lived in earths past . only a tiny number of them became fossils . still , scientists learn a lot from fossils . fossils are our best clues about the history of life on earth . fossils provide evidence about life on earth . they tell us that life on earth has changed over time . fossils in younger rocks look like animals and plants that are living today . fossils in older rocks are less like living organisms . fossils can tell us about where the organism lived . was it land or marine ? fossils can even tell us if the water was shallow or deep . fossils can even provide clues to ancient climates . [SEP] what can we tell about former living organisms from fossils ? | | how they adapted [SEP] | [CLS] there have been many organisms that have lived in earths past . only a tiny number of them became fossils . still , scientists learn a lot from fossils . fossils are our best clues about the history of life on earth . fossils provide evidence about life on earth . they tell us that life on earth has changed over time . fossils in younger rocks look like animals and plants that are living today . fossils in older rocks are less like living organisms . fossils can tell us about where the organization lived . was it land or marine ? fossils can even tell us if the water was shallow or deep . fossils can even provide clues to ancient climates . [SEP] what can we tell about former living organisms from fossils ? | | how they adapted [SEP] | [CLS] there have been many organisms that have lived in earths past . only a tiny number of them became fossils . still , scientists learn a lot from fossils . fossils are our best clues about the history of life on earth. fossils provide evidence about life on earth . they tell us that life on earth has changed over time . fossils in younger rocks look like animals and plants that are living today . fossils in older rocks are less like living organisms . fossils can tell us about where the organism lived . was it land or marine ? fossils can even tell us if the water was shallow or deep . fossils can even provide clues to ancient climates . [SEP] what can we tell about former living organisms from fossils ? | | howthey adapted [SEP] |
| Prediction: False | Prediction: True | Prediction: False |
| (B) FEVER: Human rationale is insufficient |
| [CLS] legendary entertainment - lrb - also known as legendary pictures or legendary - rrb - is an american media company based in burbank , california , the company was founded by thomas tull in 2000 and in 2005 , concluded an agreement to co - produce and co - finance films with warner bros ., and began a similar arrangement with universal studios in 2014 . since 2016 , legendary has been a subsidiary of the chinese conglomerate wanda group . [SEP] legendary entertainment is a subsidiary of warner bros pictures . [SEP] | [CLS] legendary entertainment - lrb - also known as legendary pictures or legendary - rrb - is an american media company based in burbank , california , the company was founded by thomas tull in 2000 and in 2005 , concluded an agreement to co - produce and co - finance films with warner bros ., and began a similar arrangement with universal studios in 2014 . since 2016 , legendary has be a subsidiary of the chinese conglomerate wanda group . [SEP] legendary entertainment is a subsidiary of warner bros pictures . [SEP] | [CLS] legendary entertainment - lrb - also known as legendary pictures or legendary - rrb - is an american media company based in burbank , california , the company was founded by thomas tull in 2000 and in 2005 , concluded an agreement to co - produce and co - finance films with warmer bros ., and began a similar arrangement with universal studios in 2014 . since 2016 , legendary has been a subsidiary of the chinese conglomerate wanda group . [SEP] legendary entertainment is a subsidiary of warner bros pictures . [SEP] |
| Prediction: Supports | Prediction: Supports | Prediction: Supports |
| (C) E-SNLI: Supervised baseline beats best model |
| [CLS] a big dog catches a ball on his nose [SEP] a big dog is sitting down while trying to catch a ball [SEP] | [CLS] a big dog catches a ball on his nose [SEP] a big dog is sitting down while trying to catch a ball [SEP] | [CLS] a big dog catches a ball on his nose [SEP] a big dog is sitting down while trying to catch a ball [SEP] |
| Prediction: Neutral | Prediction: Neutral | Prediction: Contradiction |
+
+Table 6: Examples of human, supervised baseline, and best model rationales and predictions.
+
+| Source | Natural language explanation |
| --- | --- |
| Human | There is no indication that the dog is sitting down while playing catch on his nose. |
| Human | A dog can catch a ball by not to sitting down. |
| GPT-3 | The entailment of this sentence is that the dog is sitting down, and the contradiction would be if the dog was standing up. This sentence is neutral, meaning it doesn’t entail or contradict anything. |
+
+Table 7: Examples of natural language explanations for the "neutral" prediction on E-SNLI example from Table 6c. See Appendix §D for GPT-3 prompt details.
+
+Taken together, our proposed methods yield a substantial 3-point improvement over baseline performance for MultiRC, a 1-point improvement on FEVER, and a tiny 0.4-point improvement on E-SNLI, mirroring the potential improvements observed in the analysis. We find that all three token supervision methods are useful in achieving this, while selective supervision has a marginal or negative effect.
+
+In summary, our results support the potential for learning from rationales in certain datasets, and demonstrate the importance of understanding the properties of human rationales to properly exploit them for this purpose. We believe that these two insights are useful steps towards effective learning from rationales, and could yield even greater improvements if operationalized optimally.
+
+Limitation. A limitation of our analysis is that all three datasets are document-query style reading comprehension tasks, as opposed to, e.g., sentiment analysis. Because of the popularity of this type of task in NLP benchmarks, this type of dataset represents a majority of what is available in the ERASER collection (DeYoung et al., 2019). By contrast, sentiment is often scattered throughout a text, so human rationales for sentiment are likely to contain redundant signal, which could impact their predictive utility. We leave a more comprehensive survey of NLP tasks for future work.
+
+Acknowledgments. We thank anonymous reviewers for their feedback, and members of the Chicago Human+AI Lab for their insightful suggestions. This work is supported in part by research awards from Amazon, IBM, Salesforce, and NSF IIS-2126602.
+
+# References
+
+Diego Antognini and Boi Faltings. 2021. Rationalization through Concepts. arXiv:2105.04837 [cs]. ArXiv: 2105.04837.
+Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2020. Interpretable Neural Predictions with Differentiable Binary Variables. arXiv:1905.08160 [cs]. ArXiv: 1905.08160.
+Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs]. ArXiv: 2005.14165.
+Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. In Proceedings of NeurIPS.
+Samuel Carton, Qiaozhu Mei, and Paul Resnick. 2018. Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3497-3507, Brussels, Belgium. Association for Computational Linguistics.
+Samuel Carton, Anirudh Rathore, and Chenhao Tan. 2020. Evaluating and Characterizing Human Rationales. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9294-9307, Online. Association for Computational Linguistics.
+Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2020. Invariant Rationalization. In Proceedings of the 37th International Conference on Machine Learning, pages 1448-1458. PMLR. ISSN: 2640-3498.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs]. ArXiv: 1810.04805.
+Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. ERASER: A Benchmark to Evaluate Rationalized NLP Models. arXiv:1911.03429 [cs]. ArXiv:1911.03429.
+Gregory Druck, Burr Settles, and Andrew McCallum. 2009. Active Learning by Labeling Features. In Proceedings of the 2009 Conference on Empirical
+
+Methods in Natural Language Processing, pages 81-90, Singapore. Association for Computational Linguistics.
+William Falcon et al. 2019. Pytorch lightning. GitHub. Note: https://github.com/PyTorchLightning/pytorchlightning, 3.
+Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher Ré. 2018. Training classifiers with natural language explanations. In Proceedings of ACL.
+Peter Hase and Mohit Bansal. 2021. When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data. arXiv:2102.02201 [cs]. ArXiv: 2102.02201.
+Peter Hase, Shiyue Zhang, Harry Xie, and Mohit Bansal. 2020. Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? arXiv:2010.04119 [cs]. ArXiv: 2010.04119.
+Alon Jacovi and Yoav Goldberg. 2021. Aligning Faithful Interpretations with their Social Attribution. arXiv:2006.01067 [cs]. ArXiv:2006.01067.
+Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical Reparameterization with Gumbel-Softmax. arXiv:1611.01144 [cs, stat]. ArXiv: 1611.01144.
+Sahil Jayaram and Emily Allaway. 2021. Human Rationales as Attribution Priors for Explainable Stance Detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5540-5554, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Neil Jethani, Mukund Sudarshan, Yindalon Aphinyanaphongs, and Rajesh Ranganath. 2021. Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations. arXiv:2103.01890 [cs, stat]. ArXiv:2103.01890.
+Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers).
+Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of Explanatory Debugging to Personalize Interactive Machine Learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces, pages 126-137, Atlanta Georgia USA. ACM.
+Eric Lehman, Jay DeYoung, Regina Barzilay, and Byron C. Wallace. 2019. Inferring Which Medical Treatments Work from Reports of Clinical Trials. In
+
+Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3705-3717, Minneapolis, Minnesota. Association for Computational Linguistics.
+Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing Neural Predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117.
+Piyawat Lertvittayakumjorn and Francesca Toni. 2021. Explanation-Based Human Debugging of NLP Models: A Survey. arXiv:2104.15135 [cs]. ArXiv: 2104.15135.
+Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction. arXiv:2005.00652 [cs]. ArXiv:2005.00652.
+Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, and Ameet Talwalkar. 2020. Regularizing Black-box Models for Improved Interpretability. arXiv:1902.06787 [cs, stat]. ArXiv: 1902.06787.
+Laura Rieger, Chandan Singh, William Murdoch, and Bin Yu. 2020. Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge. In International Conference on Machine Learning, pages 8116-8126. PMLR. ISSN: 2640-3498.
+Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations. arXiv preprint arXiv:1703.03717.
+Megha Srivastava, Tatsunori Hashimoto, and Percy Liang. 2020. Robustness to Spurious Correlations via Human Annotations. arXiv:2007.06661 [cs, stat]. ArXiv: 2007.06661.
+James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and verification. In Proceedings of NAACL.
+Giulia Vilone and Luca Longo. 2020. Explainable Artificial Intelligence: a Systematic Review. arXiv:2006.00093 [cs]. ArXiv:2006.00093.
+Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, and Yejin Choi. 2021. Reframing Human-AI Collaboration for Generating Free-Text Explanations. arXiv:2112.08674 [cs]. ArXiv: 2112.08674.
+Sarah Wiegreffe and Ana Marasovic. 2021. Teach Me to Explain: A Review of Datasets for Explainable NLP. arXiv:2102.12060 [cs]. ArXiv: 2102.12060.
+
+Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2020. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv:1910.03771 [cs]. ArXiv: 1910.03771.
+Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, and Xiang Ren. 2021. Refining Neural Networks with Compositional Explanations. arXiv:2103.10415 [cs]. ArXiv: 2103.10415.
+Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S. Jaakkola. 2019. Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control. arXiv preprint. ArXiv: 1910.13294.
+Mo Yu, Yang Zhang, Shiyu Chang, and Tommi S. Jaakkola. 2021. Understanding Interlocking Dynamics of Cooperative Rationalization. arXiv:2110.13880 [cs]. ArXiv:2110.13880.
+Omar Zaidan, Jason Eisner, and Christine Piatko. 2007. Using "Annotator Rationales" to Improve Machine Learning for Text Categorization. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260-267, Rochester, New York. Association for Computational Linguistics.
+Xinyan Zhao and V. G. Vinod Vydiswaran. 2020. LIREx: Augmenting Language Inference with Relevant Explanation. arXiv:2012.09157 [cs]. ArXiv: 2012.09157.
+
+| Dataset | Method | Role | Accuracy | Rationale F1 | Rationale Precision | Rationale Recall | Human Suff. Acc. |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| MultiRC | Sentences | Best with | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 |
+| MultiRC | Sentences | Best without | 70.6 | 41.6 | 27.7 | 84.1 | 75.8 |
+| MultiRC | Class-weights | Best with | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 |
+| MultiRC | Class-weights | Best without | 70.8 | 55.2 | 66.1 | 47.4 | 76.5 |
+| MultiRC | Importance embeddings | Best with | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 |
+| MultiRC | Importance embeddings | Best without | 71.0 | 53.6 | 39.7 | 82.5 | 75.8 |
+| MultiRC | Selective supervision | Best with | 71.0 | 53.6 | 39.7 | 82.5 | 75.8 |
+| MultiRC | Selective supervision | Best without | 71.2 | 57.1 | 44.9 | 78.4 | 74.5 |
+| FEVER | Sentences | Best with | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 |
+| FEVER | Sentences | Best without | 91.3 | 72.4 | 61.3 | 88.5 | 91.6 |
+| FEVER | Class-weights | Best with | 91.5 | 79.6 | 73.1 | 87.3 | 91.8 |
+| FEVER | Class-weights | Best without | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 |
+| FEVER | Importance embeddings | Best with | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 |
+| FEVER | Importance embeddings | Best without | 91.4 | 80.0 | 74.9 | 85.9 | 91.8 |
+| FEVER | Selective supervision | Best with | 90.6 | 56.4 | 41.4 | 88.6 | 90.4 |
+| FEVER | Selective supervision | Best without | 91.5 | 81.2 | 83.5 | 79.1 | 91.6 |
+| E-SNLI | Class-weights | Best with | 90.1 | 59.6 | 45.5 | 86.2 | 92.3 |
+| E-SNLI | Class-weights | Best without | 89.9 | 62.2 | 55.7 | 70.4 | 92.0 |
+| E-SNLI | Importance embeddings | Best with | 90.1 | 59.6 | 45.5 | 86.2 | 92.3 |
+| E-SNLI | Importance embeddings | Best without | 89.9 | 33.5 | 20.2 | 100.0 | 72.5 |
+| E-SNLI | Selective supervision | Best with | 88.8 | 49.0 | 33.2 | 93.4 | 84.0 |
+| E-SNLI | Selective supervision | Best without | 90.1 | 59.6 | 45.5 | 86.2 | 92.3 |
+
+Table 8: Comparison of best model with each proposed factor against best model without that factor.
+
+# A Detailed Factor Analysis
+
+Table 8 compares, for each proposed method, the performance of the best model using that method against the best model not using it. The picture here is similar to the regression analysis in Table 4, but one new insight is that the improvement in model prediction performance appears to be driven by the sentence-level rationalization method, as it cuts down on stray tokens dropped from or added to the predicted rationales.
+
+# B Rationale Perturbation on FEVER and E-SNLI
+
+Furthering the analysis in §4.2, we extend the human rationale perturbation experiment to FEVER and E-SNLI.
+
+Fig. 4 shows the results for FEVER. Fig. 4a shows that the baseline accuracy for this dataset is so high that near-perfect prediction of human rationales is required just to match it.
+
+Moreover, even for documents with HSA $= 1$ , the model performance drops below baseline after dropping just $\sim 10\%$ of tokens (i.e., rationale recall $\approx 0.9$ ) in Fig. 4b. Interestingly, the model performance remains consistently above the baseline when adding non-rationale tokens (i.e., decreasing rationale precision). In comparison, the model performance for MultiRC in Fig. 2b drops below baseline only after dropping $\sim 50\%$ of the tokens.
+
+For FEVER examples with HSA $= 0$ (Fig. 4c), the model performance remains consistently below the baseline accuracy, supporting the second hypothesis in §4.2. The need for near-perfect rationale prediction in FEVER may explain the difference in model performance improvements between MultiRC and FEVER.
+
+Fig. 5 covers E-SNLI. We see that the model performance decreases after dropping rationale tokens (i.e., decreasing recall) and consistently remains below the baseline. In contrast, the model performance shows a slight improvement after adding non-rationale tokens (i.e., decreasing rationale precision). Moreover, for documents with HSA $= 1$ , the model performance drops below baseline at $\sim 3\%$ corruption for dropping and swapping rationale tokens, whereas the model performance plateaus with the addition of non-rationale tokens. These insights highlight the substantial challenges in learning from explanations for E-SNLI.
+
+
+Figure 4: Performance of corrupted rationales for FEVER. Panels: (a) all samples, (b) human sufficiency-accuracy $= 1$ , (c) human sufficiency-accuracy $= 0$ . Model performance drops below baseline accuracy immediately on both dropping human rationale tokens (i.e., recall $\downarrow$ ) and adding non-rationale tokens (i.e., precision $\downarrow$ ). For HSA $= 1$ , model performance remains consistently above baseline when adding non-rationale tokens (i.e., precision $\downarrow$ ).
+
+Figure 5: Performance of corrupted rationales for E-SNLI. Panels: (a) all samples, (b) human sufficiency-accuracy $= 1$ , (c) human sufficiency-accuracy $= 0$ . Model performance for human rationales remains below baseline accuracy and slightly increases with the addition of non-rationale tokens (i.e., precision $\downarrow$ ). Even for HSA $= 1$ , model performance drops below baseline accuracy at just $\sim 4\%$ corruption.
+
+# C Rationale Perturbation for Adapted Models
+
+We perform the same perturbation analysis on a calibrated model trained on both full and rationalized input, for which the distribution shift from masking is less of a concern.
+
+In Fig. 6, for MultiRC, we find that model performance plateaus with the addition of non-rationale tokens and drops quickly when rationale tokens are removed, even for a calibrated model. This observation is consistent for FEVER (Fig. 7).
+
+For E-SNLI, we find different properties using a calibrated BERT model compared to the standard BERT model shown in Fig. 5a.
+
+In contrast to MultiRC and FEVER, we find that the model performance drops more rapidly with the addition of non-rationale tokens than with the removal of rationale tokens. This is consistent for documents with HSA $= 1$ , suggesting that for E-SNLI, rationale precision may be more important when using a calibrated model. Similar to FEVER, we see the model performance drop below the baseline with very little corruption of rationales, echoing the need to mimic human rationalization almost perfectly for effective learning from rationales on this dataset.
+
+# D GPT-3 Prompt
+
+We generate a zero-shot GPT-3 (Brown et al., 2020) explanation using the Davinci model variant on the OpenAI playground, with a modified version of the prompt proposed by Wiegreffe et al. (2021):
+
+Let's explain classification decisions.
+
+A big dog catches a ball on his nose.
+
+question: A big dog is sitting down while trying to catch a ball.
+
+entailment, contradiction, or neutral?
+
+A second step prompting for an explanation is not needed, as GPT-3 gives its prediction in the form of a natural language explanation.
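+
+For reference, a sketch of issuing the same prompt through the legacy OpenAI completions endpoint that served the Davinci model; the decoding parameters shown are assumptions, not the settings used for Table 7.
+
+```python
+import openai  # legacy (pre-1.0) completions-style client; assumes OPENAI_API_KEY is set
+
+prompt = (
+    "Let's explain classification decisions.\n\n"
+    "A big dog catches a ball on his nose.\n"
+    "question: A big dog is sitting down while trying to catch a ball.\n"
+    "entailment, contradiction, or neutral?"
+)
+
+response = openai.Completion.create(
+    engine="davinci",     # zero-shot, no fine-tuning
+    prompt=prompt,
+    max_tokens=64,        # assumed decoding parameters
+    temperature=0.7,
+)
+print(response["choices"][0]["text"].strip())
+```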
+
+
+Figure 6: Performance of corrupted rationales for MultiRC using a calibrated model. Panels: (a) all samples, (b) human sufficiency-accuracy $= 1$ , (c) human sufficiency-accuracy $= 0$ . Model performance decreases consistently when we drop human rationale tokens (i.e., recall $\downarrow$ ), whereas it stays high as we add non-rationale tokens (i.e., precision $\downarrow$ ). The impact of recall is moderated when HSA $= 1$ .
+
+Figure 7: Performance of corrupted rationales for FEVER using a calibrated model. Panels: (a) all samples, (b) human sufficiency-accuracy $= 1$ , (c) human sufficiency-accuracy $= 0$ . Model performance decreases quickly when we drop human rationale tokens (i.e., recall $\downarrow$ ), whereas it remains above baseline as we add non-rationale tokens (i.e., precision $\downarrow$ ).
+
+Figure 8: Performance of corrupted rationales for E-SNLI using a calibrated model. Panels: (a) all samples, (b) human sufficiency-accuracy $= 1$ , (c) human sufficiency-accuracy $= 0$ . Model performance decreases quickly when we add non-rationale tokens (i.e., precision $\downarrow$ ), whereas it drops less rapidly as we drop rationale tokens (i.e., recall $\downarrow$ ).
\ No newline at end of file
diff --git a/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/full.md b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..81eb4381753dac812741099ac8f4390c470552f7
--- /dev/null
+++ b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/full.md
@@ -0,0 +1,301 @@
+# What Works and Doesn't Work, A Deep Decoder for Neural Machine Translation
+
+Zuchao Li $^{1}$ , Yiran Wang $^{2}$ , Masao Utiyama $^{2,*}$ , Eiichiro Sumita $^{2}$ , Hai Zhao $^{1*}$ , and Taro Watanabe $^{3}$
+
+$^{1}$ Shanghai Jiao Tong University (SJTU), Shanghai, China
+
+$^{2}$ National Institute of Information and Communications Technology (NICT), Kyoto, Japan
+
+$^{3}$ Nara Institute of Science and Technology (NAIST), Nara, Japan
+
+charlee@sjtu.edu.cn, {yiran.wang,mutiyama}@nict.go.jp,
+
+eiichiro-sumita@nict.go.jp, zhaohai@cs.sjtu.edu.cn, taro@is.naist.jp
+
+# Abstract
+
+Deep learning has demonstrated performance advantages in a wide range of natural language processing tasks, including neural machine translation (NMT). Transformer NMT models are typically strengthened by deeper encoder layers, but deepening their decoder layers usually results in failure. In this paper, we first identify the cause of the failure of the deep decoder in the Transformer model. Inspired by this discovery, we then propose approaches to improving it, with respect to model structure and model training, to make the deep decoder practical in NMT. Specifically, with respect to model structure, we propose a cross-attention drop mechanism to allow the decoder layers to perform their own different roles, to reduce the difficulty of deep-decoder learning. For model training, we propose a collapse reducing training approach to improve the stability and effectiveness of deep-decoder training. We experimentally evaluated our proposed Transformer NMT model structure modification and novel training methods on several popular machine translation benchmarks. The results showed that deepening the NMT model by increasing the number of decoder layers successfully prevented the deepened decoder from degrading to an unconditional language model. In contrast to prior work on deepening an NMT model on the encoder, our method can deepen the model on both the encoder and decoder at the same time, resulting in a deeper model and improved performance.
+
+# 1 Introduction
+
+With the help of deep neural networks, the feature extraction capability of models has been substantially enhanced (Schmidhuber, 2015; LeCun et al., 2015). Deep neural network models are also popular for natural language processing (NLP) tasks. The most typical deep neural network models in NLP are based on the convolutional neural network (CNN) (Gehring et al., 2017) and Transformer (Vaswani et al., 2017) structures, and the deep pretrained Transformer language model has begun to dominate NLP. The deep neural network model has also attracted substantial interest in neural machine translation (NMT), for both theoretical research (Wang et al., 2019; Li et al., 2020a, 2021a; Kong et al., 2021) and competition evaluation (Zhang et al., 2020; Wu et al., 2020b,a; Meng et al., 2020). Because it has been demonstrated that deep neural network models can benefit from an enriched representation, deep NMT models also show advantages with respect to translation performance (Wu et al., 2019; Wei et al., 2020).
+
+Although deep models have been extensively studied in machine translation and are frequently used to improve translation performance, almost all work on deepening models has focused on increasing the number of encoder layers; there has been very little research on deepening the decoder. Through preliminary experiments on varying the number of decoder layers in the Transformer NMT model, we observed that, when the decoder is deepened beyond a certain number of layers, the translation performance of the overall model fails to improve; moreover, it declines rapidly to near zero. This demonstrates that there are flaws in the current structure or training method, and the deep-decoder NMT model cannot be trained.
+
+By analyzing the training process of the deep-decoder model, we found that the training perplexity of the model was relatively low, but the translation performance of the obtained model was much worse than that of a shallow model. Inspired by this phenomenon, we hypothesize that, as the decoder deepens, the model may increasingly ignore the source inputs and degenerate to an unconditional language model, even though a low perplexity can be obtained on the training set. In this case, the purpose of translation learning is not achieved, and thus the model training fails.
+
+According to our hypotheses, preventing the decoder from degenerating to an unconditional language model is the key to overcoming the failure of deep-decoder NMT model training. Consequently, we propose two aspects of model improvement: model structure and model training. In model structure, the only difference between the decoder of the NMT model and that of the unconditional language model is cross-attention; therefore, we focus mainly on this structure. In model training, we aim to make the decoder output distant from the output of the unconditional language model to avoid fitting the target sentences while ignoring the source inputs in the training dataset.
+
+Specifically, we propose a cross-attention drop (CAD) mechanism for the deep-decoder layer structure. This mechanism is motivated by our suspicion that the degeneration of the deep decoder to an unconditional language model is caused by the training difficulty resulting from too many cross-attention sublayers. Because the purpose of cross-attention is to force the decoder layer to obtain features from the source representation, the different layers in the deep decoder should perform distinct roles. However, the conventional deep decoder requires each layer to extract source features in the same way, thus increasing the training difficulty. As a result, to minimize the training loss, the model chooses to memorize the training target sentences directly and ignore the source inputs. In this mechanism, we drop the cross-attention in some decoder layers to lower the overall training difficulty, thereby preventing the failure of deep-decoder training. In addition to structural changes, we also propose a decoder dropout regularization (DDR) loss and an anti-LM-degradation (ALD) loss for joint model optimization, based on contrastive learning; these increase the stability of deep-decoder NMT model training and avoid degeneration to an unconditional language model.
+
+Our experiments were conducted mainly on two popular machine translation benchmarks: WMT14 English-to-German and English-to-French. The results of the experimental exploration of decoders with different depths show that a successfully trained deep decoder greatly benefits the overall translation performance and can work with the deep encoder to achieve higher translation performance. Moreover, the novel training approaches that we propose both increase the stability of the training of the deep-decoder model and enable additional improvements.
+
+# 2 Related Work
+
+# 2.1 Deep NMT Model
+
+In computer vision tasks, it has been found that increasing the depth of convolutional neural networks can significantly increase the performance (He et al., 2016). As deep neural networks have become widely used in NLP tasks, machine translation tasks have also incorporated deep neural networks for modeling, using an encoder-decoder architecture based on a recurrent neural network (RNN) (Sutskever et al., 2014; Bahdanau et al., 2015). Since the emergence of the Transformer-based model (Vaswani et al., 2017), the deep model has become the mainstream baseline model for machine translation (Li et al., 2021d). The Transformer NMT model employs a deeper architecture than the RNN-based model, with six encoder layers and six decoder layers. During the same time period, Gehring et al. (2017) introduced an encoder-decoder architecture wholly based on CNNs, which increased both the number of encoder layers and the number of decoder layers to 20. In addition to structural design, unsupervised learning has also become another important branch of NMT (Lample et al., 2018; Li et al., 2019a, 2020b, 2021c; Nguyen et al., 2021).
+
+Because greater model capacity has the potential to contribute significantly to quality improvement (Zhang et al., 2019b; Li et al., 2019b; Parnow et al., 2021), deepening a model is regarded as a good method of boosting the capacity of the model with the same architecture. It has been shown that more expressive features are extracted (Mhaskar et al., 2016; Telgarsky, 2016; Eldan and Shamir, 2016), which has resulted in improved performance for vision tasks (He et al., 2016; Srivastava et al., 2015) over the past few years. In Transformer NMT models, there have also been numerous studies on deepening the model for better performance. Bapna et al. (2018) took the first step toward training extraordinarily deep models by deepening the encoders for translation, but discovered that simply increasing the encoder depth of a basic Transformer model was insufficient. Because of the difficulty of training, models utterly fail to learn. Transparent attention has also been proposed to regulate deep-encoder gradients; this eases the optimization of deeper models and results in consistent gains with a 16-layer Transformer encoder.
+
+Following research on deepening the encoder to obtain a deep NMT model, as in (Bapna et al., 2018), Wu et al. (2019) proposed a two-stage training strategy with three special model structural designs for constructing deep NMT models with eight encoder layers. Wang et al. (2019) proposed a dynamic linear combination mechanism and successfully trained a Transformer model with a 30-layer encoder, with the proposed mechanism shortening the path from upper-level layers to lower-level layers to prevent the gradient from vanishing or exploding. Zhang et al. (2019a) proposed a depth-scale initialization for improving norm preservation and a merged attention sublayer that integrates a simplified average-based self-attention sublayer into the cross-attention module. Fan et al. (2020) employed a layer-drop mechanism to train a 12-6 Transformer NMT model and pruned subnetworks during inference without fine-tuning. More recently, Wei et al. (2020) proposed to attend the decoder to multigranular source information with different space-scales, thereby boosting the training of very deep encoders without special training strategies. Li et al. (2020a) developed a shallow-to-deep training strategy and employed sparse connections across blocks to successfully train a 48-layer encoder model. Kong et al. (2021) studied using deep-encoder and shallow-decoder models to improve decoding speed while maintaining high translation quality. Most of these related studies focused on deepening the encoder for deep NMT models, whereas there have been very few studies on deepening the decoder. Herein lies the most significant dissimilarity between our work and this related work.
+
+# 2.2 Contrastive Learning in NLP
+
+Contrastive learning (Hadsell et al., 2006) is an effective learning approach and is usually used for unsupervised learning because of its unique characteristics. It has achieved significant success in various computer vision tasks (Misra and van der Maaten, 2020; Zhuang et al., 2019; Tian et al., 2020; He et al., 2020; Chen et al., 2020). Gao et al. (2021) introduced a simple contrastive learning framework for unsupervised learning of sentence embeddings, which performed as well as previous supervised approaches. Wu et al. (2020c) employed multiple sentence-level augmentation strategies—such as word and span deletion, reordering, and substitution—with a sentence-level contrastive learning objective to pretrain a language model for a noise-invariant sentence representation. Fang et al. (2020) pretrained language representation models using contrastive self-supervised learning at the sentence level by predicting whether two back-translated sentences originate from the same sentence. In (Giorgi et al., 2021), a universal sentence embedding encoder was trained to minimize the distance between the embeddings of textual segments randomly sampled from nearby locations in the same document by a self-supervised contrastive objective. Pan et al. (2021) demonstrated the effectiveness of contrastive learning in NMT, particularly for the zero-shot machine translation situation. Current contrastive learning for NMT primarily employs cross-lingual representation similarity, whereas we aim to prevent the outputs of the deep decoder and the unconditional language model from becoming too similar, thus preventing degradation. Li et al. (2021b) presented a contrastive learning-reinforced domain adaptation approach for NMT. Part of our method is similar in purpose to that of Miao et al. (2021), but theirs is designed to prevent the NMT model from becoming overconfident, whereas ours tackles the problem of the deep decoder collapsing into an unconditional language model.
+
+# 3 Our Method
+
+Given bilingual parallel sentences $\langle \mathbf{X},\mathbf{Y}\rangle$ , the NMT model learns a set of parameters $\Theta$ by maximizing the likelihood of the target sequence, the product of the conditional probabilities of all target words, which is equivalent to minimizing the negative log-likelihood $\mathcal{J}_{\mathrm{NLL}}(\mathbf{Y}|\mathbf{X};\boldsymbol{\Theta})$ :
+
+$$
+\mathcal{J}_{\mathrm{NLL}}(\mathbf{Y}|\mathbf{X};\boldsymbol{\Theta}) = -\log \prod_{i = 1}^{|\mathbf{Y}|} P\left(\mathrm{Y}_{i} \mid \mathbf{Y}_{<i}, \mathbf{X}; \boldsymbol{\Theta}\right) = -\sum_{i = 1}^{|\mathbf{Y}|} \log P\left(\mathrm{Y}_{i} \mid \mathbf{Y}_{<i}, \mathbf{X}; \boldsymbol{\Theta}\right), \tag{1}
+$$
+
+where $|\mathbf{Y}|$ represents the sequence length of $\mathbf{Y}$ , $\mathrm{Y}_i$ represents the $i$ -th token of sequence $\mathbf{Y}$ , and $\mathbf{Y}_{<i}$ represents the tokens preceding $\mathrm{Y}_i$ .
+
+In the cross-attention drop (CAD) structure, a drop-net mechanism decides for each decoder layer $l$ whether the cross-attention sublayer is applied on top of the self-attention output $\hat{\mathbf{H}}_{d}^{l}$ , given the top encoder states $\mathbf{H}_{e}^{L_{e}}$ and a random sample $U^{l}$ :
+
+$$
+\tilde{\mathbf{H}}_{d}^{l} = \mathbb{1}\left(U^{l} > p_{\mathrm{net}}^{l}\right) \cdot \hat{\mathbf{H}}_{d}^{l} + \mathbb{1}\left(U^{l} < p_{\mathrm{net}}^{l}\right) \cdot \left(\mathrm{CROSSATTN}\big(\hat{\mathbf{H}}_{d}^{l}, \mathbf{H}_{e}^{L_{e}}\big) + \hat{\mathbf{H}}_{d}^{l}\right),
+$$
+
+where $\mathbb{1}(\cdot)$ is an indicator function. For layer $l$ , when $U^l < p_{\mathrm{net}}^l$ both attentions are used, and when $U^l > p_{\mathrm{net}}^l$ only self-attention is used. During the inference stage, both attentions are used for the $\tilde{\mathbf{H}}_d^l$ calculation. For simplicity of implementation, we adopted the same fixed $p_{\mathrm{net}}$ for layers $1 \leq l \leq \mathcal{L}_{dr}$ (i.e., $p_{\mathrm{net}}^l = p_{\mathrm{net}}$ for $1 \leq l \leq \mathcal{L}_{dr}$ ), while setting $p_{\mathrm{net}}^l = 1.0$ for layers $l > \mathcal{L}_{dr}$ . We denote $\mathcal{L}_{dr}$ as the drop depth and $p_{\mathrm{net}}$ as the drop ratio.
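+
+As an illustration only, the drop-net selection above can be sketched in PyTorch as follows; this is a minimal sketch rather than the authors' implementation, and the names `cross_attn`, `enc_out`, and `drop_depth` are assumptions of the example.
+
+```python
+import torch
+
+def drop_net_layer_output(self_attn_out, enc_out, cross_attn,
+                          layer_idx, p_net=0.5, drop_depth=8,
+                          training=True):
+    """Drop-net for cross-attention dropping (CAD), per the equation above.
+
+    self_attn_out: decoder self-attention output (H_hat_d^l)
+    enc_out:       top encoder states (H_e^{L_e})
+    cross_attn:    callable computing CROSSATTN(H_hat_d^l, H_e^{L_e})
+    """
+    # Layers above the drop depth use p_net^l = 1.0; at inference both
+    # attentions are always used.
+    p = p_net if layer_idx <= drop_depth else 1.0
+    if not training or torch.rand(()).item() < p:
+        # Cross-attention branch: residual cross-attention on top of self-attention.
+        return cross_attn(self_attn_out, enc_out) + self_attn_out
+    return self_attn_out  # only self-attention for this layer in this pass
+```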
+
+# 3.4 Collapse Reducing Training
+
+In addition to the model structure, we introduced two extra losses into model training: one for stable optimization and another to minimize the risk of the decoder degenerating to an unconditional language model. These are the DDR loss and ALD loss, both of which are inspired by the concept of contrastive learning.
+
+Because of the use of dropout and drop-net in the decoder, we propose a simple regularization loss, the DDR loss, which is based on the randomness of the model structure. The purpose of this loss, which is inspired by R-drop (Wu et al., 2021), is to regularize the output predictions from different substructures of the deep decoder and increase the stability of the optimization. Specifically, because the same source representation and target tokens are input twice, the two predicted distributions $P_{1}$ and $P_{2}$ are forced to be mutually consistent. The predicted distributions of the two separate decoder passes are written as $P_{1}(\mathrm{Y}_{i}|\mathbf{Y}_{< i},\mathbf{H}_{e}^{L_e};\Theta_{d})$ and $P_{2}(\mathrm{Y}_{i}|\mathbf{Y}_{< i},\mathbf{H}_{e}^{L_e};\Theta_{d})$ , in which $\Theta_{d}$ denotes the parameters of the decoder. The similarity loss of the two prediction distributions is implemented as the minimization of the bidirectional Kullback-Leibler (KL) divergence between the two distributions:
+
+$$
+\mathcal{J}_{\mathrm{DDR}} = \frac{1}{2}\Big(\mathcal{D}_{\mathrm{KL}}\big(P_{1}\left(\mathrm{Y}_{i} \mid \mathbf{Y}_{<i}, \mathbf{H}_{e}^{L_{e}}; \boldsymbol{\Theta}_{d}\right) \,\|\, P_{2}\left(\mathrm{Y}_{i} \mid \mathbf{Y}_{<i}, \mathbf{H}_{e}^{L_{e}}; \boldsymbol{\Theta}_{d}\right)\big) + \mathcal{D}_{\mathrm{KL}}\big(P_{2}\left(\mathrm{Y}_{i} \mid \mathbf{Y}_{<i}, \mathbf{H}_{e}^{L_{e}}; \boldsymbol{\Theta}_{d}\right) \,\|\, P_{1}\left(\mathrm{Y}_{i} \mid \mathbf{Y}_{<i}, \mathbf{H}_{e}^{L_{e}}; \boldsymbol{\Theta}_{d}\right)\big)\Big),
+$$
+
+where $\mathcal{D}_{\mathrm{KL}}(p\,\|\,q)$ denotes the Kullback-Leibler divergence between the distributions $p$ and $q$ . With this loss, a decoder with drop-net and dropout can converge stably by contrasting the output distributions of the two passes over the same input.
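+
+A minimal sketch of the DDR term in PyTorch is shown below, assuming the decoder is run twice on the same input with independent dropout and drop-net randomness; `logits1` and `logits2` (names assumed here) are the output logits of the two passes.
+
+```python
+import torch.nn.functional as F
+
+def ddr_loss(logits1, logits2):
+    """Bidirectional KL divergence between the two stochastic decoder passes."""
+    logp1 = F.log_softmax(logits1, dim=-1)
+    logp2 = F.log_softmax(logits2, dim=-1)
+    # F.kl_div(input, target, log_target=True) computes D_KL(target || input)
+    # when both arguments are log-probabilities.
+    kl_12 = F.kl_div(logp2, logp1, log_target=True, reduction="batchmean")  # D_KL(P1 || P2)
+    kl_21 = F.kl_div(logp1, logp2, log_target=True, reduction="batchmean")  # D_KL(P2 || P1)
+    return 0.5 * (kl_12 + kl_21)
+```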
+
+With the DDR loss, regularization training is applied to the deep decoder with dropout and drop-net to help the decoder converge; however, the risk of the model degenerating to an unconditional language model remains. To solve this problem, we propose the ALD loss, whose primary purpose is to make the model aware, through contrastive learning, that the amount of source information used determines the decoder output. That is, an output produced with more source information should be more similar to the output produced with the full source than an output produced with less source information.
+
+The traditional definition of contrastive learning assumes a set of paired examples, $\mathcal{D} = \{(z_i,z_i^+)\}_{i = 1}^M$ , where $z_{i}$ and $z_{i}^{+}$ are semantically related. In contrastive learning, $z_{i}^{+}$ is used as a positive instance of $z_{i}$ , and other in-batch examples are used as the negative instances. Specifically, the loss
+
+| Systems | Enc. | Dec. | Ratio | Params (En→De) | Time (En→De) | BLEU (En→De) | sacreBLEU (En→De) | Params (En→Fr) | Time (En→Fr) | BLEU (En→Fr) | sacreBLEU (En→Fr) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| (Vaswani et al., 2017) (BIG) | 6 | 6 | 1.0 | 213M | N/A | 28.40 | N/A | 222M | N/A | 41.00 | N/A |
+| (Shaw et al., 2018) (BIG) | 6 | 6 | 1.0 | 210M | N/A | 29.20 | N/A | 222M | N/A | 41.30 | N/A |
+| (Ott et al., 2018) (BIG) | 6 | 6 | 1.0 | 210M | N/A | 29.30 | 28.6 | 222M | N/A | 43.20 | 41.4 |
+| (Wu et al., 2019) (BIG) | 8 | 8 | 1.0 | 270M | N/A | 29.92 | N/A | 281M | N/A | 43.27 | N/A |
+| (Wang et al., 2019) (BIG, DEEPE) | 30 | 6 | 5.0 | 137M | N/A | 29.30 | N/A | N/A | N/A | N/A | N/A |
+| (Wei et al., 2020) (BASE, DEEPE) | 48 | 6 | 8.0 | 272M | N/A | 30.19 | N/A | N/A | N/A | N/A | N/A |
+| (Wei et al., 2020) (BIG, DEEPE) | 18 | 6 | 3.0 | 512M | N/A | 30.56 | N/A | N/A | N/A | N/A | N/A |
+| (Li et al., 2020a) (BASE, DEEPE) | 24 | 6 | 4.0 | 118M | 6.16 | 29.02 | 27.9 | 124M | 33.81 | 42.42 | 40.6 |
+| (Li et al., 2020a) (BASE, DEEPE) | 48 | 6 | 8.0 | 194M | 10.65 | 29.60 | 28.5 | 199M | 55.35 | 42.82 | 41.0 |
+| (Li et al., 2020a) (BIG, DEEPE) | 24 | 6 | 4.0 | 437M | 18.31 | 29.93 | 28.7 | N/A | N/A | N/A | N/A |
+| BASE (Pre-Norm) | 6 | 6 | 1.0 | 63M | 4.79 | 27.05 | 26.0 | 65M | 27.11 | 41.00 | 39.2 |
+| DEEPE | 24 | 6 | 4.0 | 118M | 8.66 | 28.95 | 27.8 | 119M | 48.43 | 42.40 | 40.6 |
+| DEEPE | 48 | 6 | 8.0 | 194M | 16.38 | 29.44 | 28.3 | 195M | 90.85 | 42.75 | 41.0 |
+| DEEP | 15 | 15 | 1.0 | 123M | 9.82 | 0.55 | 0.2 | 124M | 49.96 | 0.93 | 0.3 |
+| DEEP+CAD+CRT | 15 | 15 | 1.0 | 123M | 10.52 | 29.09 | 28.1 | 124M | 50.13 | 42.86 | 41.0 |
+| DEEP | 27 | 27 | 1.0 | 199M | 16.56 | 0.31 | 0.1 | 200M | 78.82 | 0.65 | 0.1 |
+| DEEP+CAD+CRT | 27 | 27 | 1.0 | 199M | 17.92 | 30.31 | 28.8 | 200M | 79.96 | 43.57 | 41.6 |
+| BIG (Pre-Norm) | 6 | 6 | 1.0 | 210M | 36.05 | 28.79 | 27.7 | 212M | 97.51 | 42.40 | 40.6 |
+| DEEPE | 24 | 6 | 4.0 | 437M | 42.41 | 29.90 | 28.7 | 439M | 102.14 | 43.11 | 40.9 |
+| DEEP | 15 | 15 | 1.0 | 448M | 45.32 | 0.40 | 0.2 | 449M | 108.02 | 0.71 | 0.2 |
+| DEEP+CAD+CRT | 15 | 15 | 1.0 | 448M | 46.52 | 30.69 | 29.0 | 449M | 110.5 | 43.95 | 41.9 |
+
+Table 1: Number of model parameters, training time (hours), BLEU scores $(\%)$ , and sacreBLEU scores $(\%)$ of translation models on WMT14 En $\rightarrow$ De and En $\rightarrow$ Fr tasks. We use BASE and BIG to represent the different parameter settings of the NMT model, DEEP represents the deep NMT model, and DEEPE specifically refers to the deep NMT model with a deep encoder.
+
+of contrastive learning is realized as a cross-entropy loss, and can be represented as follows:
+
+$$
+\mathcal {J} _ {\mathrm {C L}} = - \log \frac {e ^ {\operatorname* {s i m} \left(\mathcal {G} \left(z _ {i}\right) , \mathcal {G} \left(z _ {i} ^ {+}\right)\right) / \tau}}{\sum_ {j = 1} ^ {N} e ^ {\operatorname* {s i m} \left(\mathcal {G} \left(z _ {i}\right) , \mathcal {G} \left(z _ {j}\right)\right) / \tau}}, \tag {5}
+$$
+
+where $N$ is the size of a mini-batch, $\mathcal{G}(\cdot)$ denotes a function that transforms a sequence input into a final single-vector representation, $\mathrm{sim}(\mathbf{v}_1,\mathbf{v}_2)$ denotes the cosine similarity $\frac{\mathbf{v}_1^\top\mathbf{v}_2}{\|\mathbf{v}_1\|\cdot\|\mathbf{v}_2\|}$ , and $\tau$ is a softmax temperature hyperparameter. In SimCSE (Gao et al., 2021), the $\mathcal{G}(\cdot)$ function is implemented as the model with an additional pooling layer that obtains the sentence representation. Because dropout in the model results in different outputs for the same input, each input is treated as a positive instance of itself.
+
+In the ALD loss, our purpose is entirely different from the above. Taking the output computed with all source inputs as the anchor, we treat outputs computed with more source inputs as positive instances and outputs computed with fewer source inputs as negative instances. Specifically, for the translation pair $\langle \mathbf{X},\mathbf{Y}\rangle$ , we randomly sample a ratio $\gamma \in [0,p_{\mathrm{ALD}})$ , $0 < p_{\mathrm{ALD}} < 0.5$ , replace a fraction $\gamma$ of the tokens in $\mathbf{X}$ with UNK to obtain $\mathbf{X}^{+}$ , and replace a fraction $(1 - \gamma)$ of the tokens with UNK to obtain $\mathbf{X}^{-}$ .
+
+$$
+\mathcal{J}_{\mathrm{ALD}} = -\log \frac{e^{\mathrm{sim}\left(\mathcal{G}(\mathbf{X}, \mathbf{Y}), \mathcal{G}(\mathbf{X}^{+}, \mathbf{Y})\right)/\tau}}{\sum_{* \in \{+,-\}} e^{\mathrm{sim}\left(\mathcal{G}(\mathbf{X}, \mathbf{Y}), \mathcal{G}(\mathbf{X}^{*}, \mathbf{Y})\right)/\tau}}, \tag{6}
+$$
+
+where $\mathcal{G}(\cdot ,\cdot)$ denotes the average-pooled hidden states from the top layer of the decoder (i.e., $\mathcal{G}(\mathbf{X},\mathbf{Y}) = \mathrm{AvgPOOL}(\mathbf{H}_d^{\mathcal{L}_d})$ ). When using the ALD loss, if the decoder ignores the source inputs and degenerates to an unconditional language model, the source inputs will have very little impact on the output: $\mathcal{G}(\mathbf{X},\mathbf{Y})$ , $\mathcal{G}(\mathbf{X}^{+},\mathbf{Y})$ , and $\mathcal{G}(\mathbf{X}^{-},\mathbf{Y})$ will all be similar, resulting in confusion for the contrastive learning.
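+
+A minimal sketch of the ALD loss follows, assuming a helper `encode_decode_pool` that runs the model and average-pools the top decoder layer (i.e., $\mathcal{G}$ ), and an `unk_id` for the UNK token; both names are assumptions of this sketch rather than the released implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def ald_loss(src, tgt, encode_decode_pool, unk_id, p_ald=0.3, tau=0.1):
+    """ALD loss: the full-source output should be closer to the output that
+    uses more source tokens (X+) than to the one that uses fewer (X-)."""
+    gamma = torch.empty(()).uniform_(0.0, p_ald).item()
+
+    def corrupt(x, ratio):
+        # Replace roughly `ratio` of the source tokens with UNK.
+        mask = torch.rand_like(x, dtype=torch.float) < ratio
+        return torch.where(mask, torch.full_like(x, unk_id), x)
+
+    g_full = encode_decode_pool(src, tgt)                        # G(X, Y)
+    g_pos = encode_decode_pool(corrupt(src, gamma), tgt)         # G(X+, Y)
+    g_neg = encode_decode_pool(corrupt(src, 1.0 - gamma), tgt)   # G(X-, Y)
+
+    sim_pos = F.cosine_similarity(g_full, g_pos, dim=-1) / tau
+    sim_neg = F.cosine_similarity(g_full, g_neg, dim=-1) / tau
+    # -log softmax over the {+, -} candidates, as in Eq. (6).
+    logits = torch.stack([sim_pos, sim_neg], dim=-1)
+    return -F.log_softmax(logits, dim=-1)[..., 0].mean()
+```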
+
+# 3.5 Discussion
+
+Inspired by the widely discussed KL divergence vanishing problem (Bowman et al., 2016) of the variational autoencoder (VAE), in which an expressive decoder does not rely on the latent variable to reconstruct the input data (i.e., the KL divergence vanishes to zero), we hypothesize that a similar phenomenon appears in machine translation models that are enhanced with a deep decoder. We presume that as the decoder goes deeper, its expressive capacity becomes strong enough to generate the target sentence while ignoring the information from the source sentence. In other words, the machine translation model, which can also be considered a conditional language model $P(\mathrm{Y}_{i}|\mathbf{Y}_{<i},\mathbf{X})$ , risks degenerating to an unconditional language model $P(\mathrm{Y}_{i}|\mathbf{Y}_{<i})$ that ignores the source $\mathbf{X}$ .
+
+| Enc. | Dec. | BLEU | sacreBLEU |
+| --- | --- | --- | --- |
+| 24 | 6 | 28.95 | 27.8 |
+| 6 | 24 | 28.21 | 27.0 |
+| 15 | 15 | 29.09 | 28.1 |
+
+Table 2: Performance of deep NMT models with different combinations of encoder and decoder depth.
+
+The deep-encoder model outperforms the deep-decoder model of the same size because a shallow encoder cannot extract enough source information. This suggests that, if resources are restricted and the number of layers needs to be decreased to obtain a smaller model, it is more effective to reduce the number of decoder layers; this finding is compatible with Kasai et al. (2021)'s conclusion. In addition, increasing the depth of both the encoder and the decoder improves the model's translation performance, implying that increasing the number of decoder layers is effective in a deep NMT model.
+
+The balance between the number of encoder layers and the number of decoder layers in a deep model is another important consideration. To investigate this, we compared translation performance in three typical cases on WMT14 En→De with the total number of encoder and decoder layers set to 30. As shown in Table 2, the model with an equal number of encoder and decoder layers achieved the best results, outperforming the pure deep-encoder and deep-decoder models.
+
+# 5 Ablation Study
+
+We conducted ablation studies on the modifications that we made to both the model structure and training to investigate their respective effects on the translation performance. The ablation research was conducted on the WMT14 En $\rightarrow$ De task, as before, and the model employed was the BASE, DEEP-30L-Full model. We began by adding extra R-Drop, DDR, ALD, and CAD techniques to its baseline model (BASE, DEEP-30L). The results in Table 3 show that the baseline training was unsatisfactory,
+
+
+| System | BLEU | sacreBLEU |
+| --- | --- | --- |
+| BASE, DEEP-30L | 0.55 | 0.2 |
+| +R-Drop | 0.97 | 0.5 |
+| +DDR | 1.01 | 0.4 |
+| +ALD | 1.45 | 0.7 |
+| +CAD | 28.35 | 27.2 |
+| BASE, DEEP-30L-Full | 29.09 | 28.1 |
+| -CAD | 1.39 | 0.7 |
+| -DDR | 28.77 | 27.6 |
+| -ALD | 28.52 | 27.4 |
+
+Table 3: Ablation studies on model structures and training approaches.
+
+even with the addition of the better training methods (R-Drop, DDR, and ALD). However, when cross-attention was dropped by applying CAD, model training became normal, indicating that the model structure has a significant impact on performance. When we compared the results of BASE, DEEP-30L+CAD with those of BASE, DEEP-30L-Full, we found that the training methods DDR and ALD were beneficial to improving performance, demonstrating their effectiveness.
+
+We also conducted an ablation evaluation of the model structure and training method on the entire model. According to the results, CAD had the greatest influence on the translation performance, which is consistent with the conclusion stated above, based on the results in Table 3. Additionally, when comparing DDR and ALD, we found that ALD had a greater influence on translation because it directly targets the deep-decoder collapse problem, whereas DDR is mostly employed, through regularization, to stabilize the training of the drop-net mechanism in CAD.
+
+# 6 Conclusion
+
+In this paper, we investigated the problem of deep-decoder collapse in NMT when the decoder is deepened. We introduced a CAD mechanism, a DDR loss, and an ALD loss to solve this problem. Using this model, we demonstrated that a deep model with balanced numbers of encoder and decoder layers outperforms NMT models that deepen only the encoder or only the decoder. Our model outperformed previous similar models on the WMT14 $\mathrm{En} \rightarrow \mathrm{De}$ and $\mathrm{En} \rightarrow \mathrm{Fr}$ tasks, confirming the effectiveness of our approach. For future work, we intend to incorporate methods from related work on deep NMT to further improve the performance of our translation model.
+
+# References
+
+Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training deeper neural machine translation models with transparent attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3028-3033, Brussels, Belgium. Association for Computational Linguistics.
+Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21, Berlin, Germany. Association for Computational Linguistics.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 1597-1607. PMLR.
+Ronen Eldan and Ohad Shamir. 2016. The power of depth for feedforward neural networks. In Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA, June 23-26, 2016, volume 49 of JMLR Workshop and Conference Proceedings, pages 907-940. JMLR.org.
+Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing transformer depth on demand with structured dropout. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Hongchao Fang, Sicheng Wang, Meng Zhou, Jiayuan Ding, and Pengtao Xie. 2020. Cert: Contrastive self-supervised learning for language understanding. arXiv preprint arXiv:2005.12766.
+Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1243-1252. PMLR.
+John Giorgi, Osvald Nitski, Bo Wang, and Gary Bader. 2021. DeCLUTR: Deep contrastive learning for unsupervised textual representations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 879-895, Online. Association for Computational Linguistics.
+Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2006), 17-22 June 2006, New York, NY, USA, pages 1735-1742. IEEE Computer Society.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 9726-9735. Computer Vision Foundation / IEEE.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society.
+Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A. Smith. 2021. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
+Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. Association for Computational Linguistics.
+Xiang Kong, Adithya Renduchintala, James Cross, Yuqing Tang, Jiatao Gu, and Xian Li. 2021. Multilingual neural machine translation with deep encoder and multiple shallow decoders. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1613-1624, Online. Association for Computational Linguistics.
+
+Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039-5049, Brussels, Belgium. Association for Computational Linguistics.
+Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. 2017. Fractalnet: Ultra-deep neural networks without residuals. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
+Yann LeCun, Yoshua Bengio, and Geoffrey E. Hinton. 2015. Deep learning. Nat., 521(7553):436-444.
+Bei Li, Ziyang Wang, Hui Liu, Quan Du, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2021a. Learning light-weight translation models from deep transformer. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 13217-13225. AAAI Press.
+Bei Li, Ziyang Wang, Hui Liu, Yufan Jiang, Quan Du, Tong Xiao, Huizhen Wang, and Jingbo Zhu. 2020a. Shallow-to-deep training for neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 995-1005, Online. Association for Computational Linguistics.
+Zuchao Li, Masao Utiyama, Eiichiro Sumita, and Hai Zhao. 2021b. MiSS@WMT21: Contrastive learning-reinforced domain adaptation in neural machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 154-161, Online. Association for Computational Linguistics.
+Zuchao Li, Masao Utiyama, Eiichiro Sumita, and Hai Zhao. 2021c. Unsupervised neural machine translation with universal grammar. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3249-3264, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, and Hai Zhao. 2019a. Data-dependent gaussian prior objective for language generation. In International Conference on Learning Representations.
+Zuchao Li, Zhuosheng Zhang, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, and Eiichiro Sumita. 2021d. Text compression-aided transformer encoding. IEEE Transactions on Pattern Analysis and Machine Intelligence.
+
+Zuchao Li, Hai Zhao, Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2020b. Reference language based unsupervised neural machine translation. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 4151-4162, Online. Association for Computational Linguistics.
+Zuchao Li, Hai Zhao, Yingting Wu, Fengshun Xiao, and Shu Jiang. 2019b. Controllable dual skew divergence loss for neural machine translation. arXiv preprint arXiv:1908.08399.
+Fandong Meng, Jianhao Yan, Yijin Liu, Yuan Gao, Xi-anfeng Zeng, Qinsong Zeng, Peng Li, Ming Chen, Jie Zhou, Sifan Liu, and Hao Zhou. 2020. WeChat neural machine translation systems for WMT20. In Proceedings of the Fifth Conference on Machine Translation, pages 239-247, Online. Association for Computational Linguistics.
+Hrushikesh N. Mhaskar, Qianli Liao, and Tomaso A. Poggio. 2016. Learning real and boolean functions: When is deep better than shallow. CoRR, abs/1603.00988.
+Mengqi Miao, Fandong Meng, Yijin Liu, Xiao-Hua Zhou, and Jie Zhou. 2021. Prevent the language model from being overconfident in neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3456-3468, Online. Association for Computational Linguistics.
+Ishan Misra and Laurens van der Maaten. 2020. Self-supervised learning of pretext-invariant representations. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 6706-6716. Computer Vision Foundation / IEEE.
+Xuan-Phi Nguyen, Shafiq Joty, Thanh-Tung Nguyen, Kui Wu, and Ai Ti Aw. 2021. Cross-model back-translated distillation for unsupervised machine translation. In International Conference on Machine Learning, pages 8073-8083. PMLR.
+Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Brussels, Belgium. Association for Computational Linguistics.
+Xiao Pan, Mingxuan Wang, Liwei Wu, and Lei Li. 2021. Contrastive learning for many-to-many multilingual neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 244-258, Online. Association for Computational Linguistics.
+Kevin Parnow, Zuchao Li, and Hai Zhao. 2021. Grammatical error correction as GAN-like sequence labeling. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3284-3290, Online. Association for Computational Linguistics.
+Jürgen Schmidhuber. 2015. Deep learning in neural networks: An overview. Neural Networks, 61:85-117.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.
+Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929-1958.
+Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. CoRR, abs/1505.00387.
+Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.
+Matus Telgarsky. 2016. Benefits of depth in neural networks. In Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA, June 23-26, 2016, volume 49 of JMLR Workshop and Conference Proceedings, pages 1517-1539. JMLR.org.
+Yonglong Tian, Dilip Krishnan, and Phillip Isola. 2020. Contrastive multiview coding. In Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XI, volume 12356 of Lecture Notes in Computer Science, pages 776-794. Springer.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+
+Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810-1822, Florence, Italy. Association for Computational Linguistics.
+Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, and Weihua Luo. 2020. Multiscale collaborative deep models for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 414-426, Online. Association for Computational Linguistics.
+Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, et al. 2021. R-drop: regularized dropout for neural networks. Advances in Neural Information Processing Systems, 34.
+Lijun Wu, Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2019. Depth growing for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5558-5563, Florence, Italy. Association for Computational Linguistics.
+Liwei Wu, Xiao Pan, Zehui Lin, Yaoming Zhu, Mingxuan Wang, and Lei Li. 2020a. The volctrans machine translation system for WMT20. In Proceedings of the Fifth Conference on Machine Translation, pages 305-312, Online. Association for Computational Linguistics.
+Shuangzhi Wu, Xing Wang, Longyue Wang, Fangxu Liu, Jun Xie, Zhaopeng Tu, Shuming Shi, and Mu Li. 2020b. Tencent neural machine translation systems for the WMT20 news translation task. In Proceedings of the Fifth Conference on Machine Translation, pages 313-319, Online. Association for Computational Linguistics.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
+Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020c. Clear: Contrastive learning for sentence representation. arXiv preprint arXiv:2012.15466.
+Biao Zhang, Ivan Titov, and Rico Sennrich. 2019a. Improving deep transformer with depth-scaled initialization and merged attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 898-909, Hong Kong, China. Association for Computational Linguistics.
+
+Yuhao Zhang, Ziyang Wang, Runzhe Cao, Binghao Wei, Weiqiao Shan, Shuhan Zhou, Abudurexiti Reheman, Tao Zhou, Xin Zeng, Laohu Wang, Yongyu Mu, Jingnan Zhang, Xiaqian Liu, Xuanjun Zhou, Yinqiao Li, Bei Li, Tong Xiao, and Jingbo Zhu. 2020. The NiuTrans machine translation systems for WMT20. In Proceedings of the Fifth Conference on Machine Translation, pages 338-345, Online. Association for Computational Linguistics.
+Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, and Hai Zhao. 2019b. Neural machine translation with universal visual representation. In International Conference on Learning Representations.
+Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. 2019. Local aggregation for unsupervised learning of visual embeddings. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 6001-6011. IEEE.
\ No newline at end of file
diff --git a/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/images.zip b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e83579b913fa6ba37be39828ccc5d085a9f03987
--- /dev/null
+++ b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc9c77d79ad1c1be5605ed3bec0830d61a162b07877f2a90569b6d3bee0c6a9d
+size 373767
diff --git a/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/layout.json b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..96cc2d0519fceef5fa494f1533bca19323ff8742
--- /dev/null
+++ b/whatworksanddoesntworkadeepdecoderforneuralmachinetranslation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:289bb4bd79e735b7684209bbc4682329bff860f55e175e8a7083910940c94c6a
+size 416004
diff --git a/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/0a581cce-80c6-42b6-a207-9570461bfecf_content_list.json b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/0a581cce-80c6-42b6-a207-9570461bfecf_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a12c2ca8f06619e3555af0202d78016ccf08435c
--- /dev/null
+++ b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/0a581cce-80c6-42b6-a207-9570461bfecf_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c76735703bd639b3958950651d2a4f781703727caa03fd82d2bc7801056aa3dc
+size 103518
diff --git a/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/0a581cce-80c6-42b6-a207-9570461bfecf_model.json b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/0a581cce-80c6-42b6-a207-9570461bfecf_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ae5d0551b5af7c381a37f930dbc258594f97581d
--- /dev/null
+++ b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/0a581cce-80c6-42b6-a207-9570461bfecf_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8543ff4e1e16e7ce0762a8a2958d7113ff73120569b6260a1cf515fa1520ff7
+size 127675
diff --git a/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/0a581cce-80c6-42b6-a207-9570461bfecf_origin.pdf b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/0a581cce-80c6-42b6-a207-9570461bfecf_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..dc49c048afe5957190e2b8bad062f76a32f7e028
--- /dev/null
+++ b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/0a581cce-80c6-42b6-a207-9570461bfecf_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:968898960a80e3ef1ff93b1f065b974af329d0355b38ae650d739fdc585409ce
+size 1206439
diff --git a/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/full.md b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e9ff7ed6f1c55222f52bbf0b2265c58c0de7629
--- /dev/null
+++ b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/full.md
@@ -0,0 +1,439 @@
+# When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation
+
+Ehsan Kamalloo*†
+
+University of Alberta
+
+kamalloo@ualberta.ca
+
+Mehdi Rezagholizadeh*
+
+Huawei Noah's Ark Lab
+
+mehdi.rezagholizadeh@huawei.com
+
+Ali Ghodsi
+
+University of Waterloo
+
+ali.ghodsi@uwaterloo.ca
+
+# Abstract
+
+Data Augmentation (DA) is known to improve the generalizability of deep neural networks. Most existing DA techniques naively add a certain number of augmented samples without considering the quality and the added computational cost of these samples. To tackle this problem, a common strategy, adopted by several state-of-the-art DA methods, is to adaptively generate or re-weight augmented samples with respect to the task objective during training. However, these adaptive DA methods: (1) are computationally expensive and not sample-efficient, and (2) are designed merely for a specific setting. In this work, we present a universal DA technique, called Glitter, to overcome both issues. Glitter can be plugged into any DA method, making training sample-efficient without sacrificing performance. From a pre-generated pool of augmented samples, Glitter adaptively selects a subset of worst-case samples with maximal loss, analogous to adversarial DA. Without altering the training strategy, the task objective can be optimized on the selected subset. Our thorough experiments on the GLUE benchmark, SQuAD, and HellaSwag in three widely used training setups including consistency training, self-distillation and knowledge distillation reveal that Glitter is substantially faster to train and achieves a competitive performance, compared to strong baselines. $^{1}$
+
+# 1 Introduction
+
+The undeniable importance of data in deep learning (Sambasivan et al., 2021; Rogers, 2021) and the costly process of data annotation have propelled researchers into leveraging Data Augmentation (DA) in a broad range of applications from computer vision (Cubuk et al., 2019; Wang et al., 2020) to natural language processing (NLP), including machine translation (Sennrich et al., 2016; Shen et al., 2020), language understanding (Shen et al., 2020; Qu et al., 2021; Du et al., 2021; Kamalloo et al., 2021), and question answering (Alberti et al., 2019; Longpre et al., 2019; Shakeri et al., 2020). DA is shown to be effective in improving generalization of deep neural networks (DeVries and Taylor, 2017; Xie et al., 2020) and in increasing the number of training samples, especially in low resource data regimes (Sennrich et al., 2016; Zhang et al., 2018). Nonetheless, in NLP, the discrete nature of text poses additional complexity to DA as generating semantically viable text from another text is challenging (Feng et al., 2021).
+
+DA methods can be broadly categorized into task-aware and task-agnostic methods. Task-agnostic DA methods essentially generate augmented text regardless of the task at hand and often do not require additional training or fine-tuning. They can be based on some hand-crafted heuristics (Zhang et al., 2015; Wei and Zou, 2019), backtranslation (Sennrich et al., 2016; Edunov et al., 2018), or token replacement from a pre-trained language model (Kobayashi, 2018; Wu et al., 2019; Ng et al., 2020). Even though deploying task-agnostic methods is straightforward, these methods do not take into account any task-specific information, and thus, their performance is usually limited. On the other hand, task-aware DA methods are capable of generating augmented samples, conditioned on the downstream task objective (Hu et al., 2019; Xie et al., 2020; Rashid et al., 2021). These methods adapt augmented examples specifically for a task in that they construct augmented examples, sometimes partly, during training. Despite their advantages, they often incur additional training costs, resulting in prohibitively slow and computationally expensive training.
+
+In general, the central problems surrounding DA techniques in NLP can be summarized as follows:
+
+First, DA methods are mostly not sample-efficient in that they add an arbitrary number of augmented samples to the training data and naively incorporate all of them into training without investigating how many augmented samples are actually needed. Second, although more effective, task-aware methods are notoriously time-consuming to train. This is especially problematic in large-scale datasets such as SQuAD (Rajpurkar et al., 2016) and MNLI (Williams et al., 2018). Third, most DA methods are not universal as they work solely with a particular setup—e.g., training a single-network (Xie et al., 2020), or training in teacher-student settings (Rashid et al., 2021). Overall, the importance of both sample efficiency and training efficiency for DA has often been overlooked.
+
+Motivated by the above problems, in this work, we introduce a universal DA method, Glitter $^{2}$ , which can be plugged into any DA method to make it sample-efficient and task-aware without sacrificing performance. Specifically, given a pool of augmented samples that are generated offline, our proposed method follows a minimax approach (Farnia and Tse, 2016) to select a small subset with maximal expected loss (maximization step) during training. Without any further adjustments to the training algorithm, the task objective can be optimized for this selected subset (minimization step).
+
+Our key contributions in this paper can be summarized as follows:
+
+1. Glitter is a universal method which can be effortlessly applied to any DA method to enforce sample efficiency while maintaining (or even boosting) their performance.
+2. We devise strategies to adapt Glitter for a variety of widely used training setups including single-network, consistency training, self-distillation and knowledge distillation.
+3. Through our empirical evaluations, we show that Glitter achieves superior performance over state-of-the-art DA methods on GLUE, SQuAD, and HellaSwag, while significantly speeding up the training.
+
+# 2 Related Work
+
+# 2.1 Task-agnostic DA in NLP
+
+Contextual augmentation techniques (Kobayashi, 2018; Wu et al., 2019) use pre-trained language models for DA. Kobayashi (2018) propose bidirectional LSTM language models for word substitution conditioned on the label of their input text. SSMBA (Ng et al., 2020) and TinyBERT (Jiao et al., 2020) perturb the input by masking some of the tokens, and then, sample tokens from a BERT model to replace the masked tokens and generate augmented samples. Back-Translation (Sennrich et al., 2016) augments data using two consecutive translation models: the first model to translate the input into an arbitrary target language; then, a second model to translate the result back into its original language. Mixed-up (Guo et al., 2019) generates augmented samples based on interpolating word embedding and sentence embedding vectors. Shen et al. (2020) introduce a set of cut-off techniques that zero out contiguous spans of the embedding matrix at token level, feature level and span level. EDA (Wei and Zou, 2019) consists of simple word-level operations including synonym replacement, random deleting, random insertion and random swapping.
+
+$^{2}$ Inspired by "All that is gold does not glitter" —J.R.R. Tolkien, The Fellowship of the Ring.
+
+# 2.2 Task-aware DA in NLP
+
+One approach to leverage task-specific information is to assign different weights to augmented samples based on their individual impacts on the model (Yi et al., 2021). Although effective, the re-weighting mechanism largely ignores sample efficiency. Wu et al. (2019) introduce a mask-and-reconstruct approach, namely c-BERT, that fine-tunes a pre-trained BERT model to predict label-compatible tokens. CoDA (Qu et al., 2021) combines various label-preserving transformations with adversarial training jointly with a contrastive regularization objective. Unsupervised DA (UDA; Xie et al. 2020) uses off-the-shelf DA methods and adds an auxiliary consistency loss to the training objective. However, UDA is not sample-efficient and it is designed only for a single-network setup; how to deploy it in other training scenarios such as knowledge distillation is not clear. Hu et al. (2019) propose a reinforcement learning-based technique where the reward function is defined based on whether generated augmented samples are label-preserving or not.
+
+# 2.3 DA for KD
+
+KD (Buciluţă et al., 2006; Hinton et al., 2015), initially proposed as a model compression technique, aims at transferring the knowledge of an already trained model, called teacher, to a smaller or a same-size student model. Several studies found that DA can significantly boost KD's performance in NLP. TinyBERT (Jiao et al., 2020) uses a task-agnostic DA technique for its task-specific finetuning. Kamalloo et al. (2021) and Rashid et al. (2021) showed that DA can also be tailored for KD. In particular, MATE-KD (Rashid et al., 2021) tunes a separate masked language model in order to generate augmented samples with maximum divergence. Kamalloo et al. (2021) and Du et al. (2021) employ kNN retrieval to fetch augmented samples from a massive sentence bank.
+
+Glitter differs from previous work in that it simultaneously focuses on sample efficiency, and universality such that it can be freely used in any training setting.
+
+# 3 Methodology
+
+In this section, we introduce our task-aware DA method, Glitter, that aims at using an efficient number of augmented samples without sacrificing performance. Our proposed strategy is agnostic to DA methods; it can be seamlessly plugged into any DA method with any training setting to enforce sample efficiency.
+
+Existing learning-based DA methods train a separate DA model and adapt its output for a particular objective function that is entirely task-dependent:
+
+$$
+\phi^ {*} \leftarrow \min _ {\phi} \ell_ {D A} (M (\Omega (x; \phi); \theta)) \tag {1}
+$$
+
+$$
+x ^ {\prime *} = \Omega (x; \phi^ {*})
+$$
+
+where $\ell_{DA}()$ is a loss function, geared towards the objective of the task, $\Omega (\cdot ;\phi)$ is the DA model with trainable parameters $\phi$ , and $M(\cdot ;\theta)$ refers to the original model, parameterized by $\theta$ .
+
+In contrast to learning-based DA, we propose to generate many augmented candidates using any arbitrary DA method prior to training, and to adaptively select the most suitable candidates during training. This procedure does not introduce additional trainable parameters into training, and more importantly, is capable of automatically ignoring unnecessary augmented examples. Let $(x_{i},y_{i})_{i = 1}^{N}\in \{(\mathcal{X},\mathcal{Y})\}$ represent training data such that a pair $x_{i}\in \mathcal{X}$ and $y_{i}\in \mathcal{Y}$ are an input example and its corresponding label. Suppose a pool of $K$ augmented examples, $X^{\prime}(i) = \{x_k^{\prime}(i)\}_{k = 1}^K$ , is sampled from some DA model for each training example $(x_{i},y_{i})\in (\mathcal{X},\mathcal{Y})$ . Note that Glitter imposes no restrictions on how to augment training data; augmented samples can be generated via a single or even multiple DA models.
+
+Sample Selection. Given a pool of augmented samples, our approach is to adaptively select the best candidates according to defined criteria. Inspired by the minimax approach (Farnia and Tse, 2016; Volpi et al., 2018), our selection mechanism is based on finding the top- $k_{1}$ (out of $K$ ) worst-case augmented samples from the $X^{\prime}$ set. Minimizing the main model loss function on these worst-case augmented samples helps improve the generalization of the model (Volpi et al., 2018). In order to rank augmented samples, we evaluate $X^{\prime}(i)$ based on a distance function with respect to the corresponding original training sample, $x_{i}$ , within the model's latent space:
+
+$$
+X^{\prime *}(i) \leftarrow \operatorname{top}_{k_{1}}\left(\ell_{\mathrm{eval}}\left(M\left(x_{i}; \theta\right), M\left(X^{\prime}(i); \theta\right)\right)\right)
+$$
+
+$$
+X^{\prime *}(i) = \left\{x_{j}^{\prime *}(i)\right\}_{j = 1}^{k_{1}} \subset X^{\prime}(i) \tag{2}
+$$
+
+where $\mathrm{top}_{k_1}(\cdot)$ returns the top- $k_{1}$ indices based on the scores returned by $\ell_{\mathrm{eval}}$ , $X^{\prime *}(i)$ is the set of $k_{1}$ selected augmented samples for $x_{i}$ , and $\ell_{\mathrm{eval}}(\cdot)$ is the evaluation loss, which is determined by the task objective.
+
+Updating the Model Parameters. After obtaining the top- $k_{1}$ augmented samples, we group them with the original training sample, $\{x_{i}\} \cup X^{\prime *}(i)$ , and subsequently update the model parameters on the original loss using only this selected set:
+
+$$
+\mathcal{L}(\theta) = \sum_{i = 1}^{N} \ell_{\mathrm{task}}\left(M\left(x_{i}; \theta\right), M\left(X^{\prime *}(i); \theta\right), y_{i}\right)
+$$
+
+$$
+\theta_{t} \leftarrow \theta_{t - 1} - \lambda \nabla_{\theta}(\mathcal{L}(\theta)) \big|_{\theta_{t - 1}} \tag{3}
+$$
+
+where $N$ is the number of training samples, $\lambda$ is the learning rate, and $\ell_{\mathrm{task}}(\cdot)$ is the final task loss—e.g., cross entropy (ce) for classification—that is computed over both original data and selected augmented data. In the remainder of this section, we discuss how Glitter can be applied to popular training settings including general DA for single networks, and DA for teacher-student (KD) setups. Note that Glitter is not restricted to these settings and may be adapted for other settings such as DAIR (Huang et al., 2022).
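+
+To make the procedure concrete, a minimal sketch of Eqs. (2) and (3) is given below; `eval_loss` and `task_loss` stand in for $\ell_{\mathrm{eval}}$ and $\ell_{\mathrm{task}}$ (defined per setting in the following subsections), and all other names are assumptions of this sketch rather than the released implementation.
+
+```python
+import torch
+
+def glitter_select(model, x, x_aug, eval_loss, k1=2):
+    """Eq. (2): pick the k1 worst-case augmentations of a single example x
+    from its pre-generated pool x_aug (a list of K augmented inputs)."""
+    with torch.no_grad():
+        ref = model(x)  # M(x_i; theta)
+        scores = torch.stack([eval_loss(ref, model(xa)) for xa in x_aug])
+    return [x_aug[i] for i in scores.topk(k1).indices.tolist()]
+
+def glitter_step(model, optimizer, x, y, x_aug, eval_loss, task_loss, k1=2):
+    """Eq. (3): optimize the task loss on the original example plus its
+    selected worst-case augmentations only."""
+    selected = glitter_select(model, x, x_aug, eval_loss, k1)
+    loss = task_loss(model, x, selected, y)
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+    return loss.item()
+```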
+
+# 3.1 General DA for Single Networks
+
+We consider three potential setups for the single network scenario: (1) General single network, (2) Self-distillation, and (3) Consistency training.
+
+
+Figure 1: Illustration of Glitter (from left to right): first, generating augmented samples from different DA techniques; second, forming a pool of samples $X'(i)$ ; third, evaluating the augmented samples using the $\ell_{eval}(\cdot)$ loss; fourth, filtering the top- $k_1$ samples based on their corresponding $\ell_{eval}(\cdot)$ ; fifth, updating the parameters of the model by minimizing the task loss $\ell_{\mathrm{task}}(\cdot; \theta)$ .
+
+
+General Single Network. In this setup, augmented samples are exploited in a semi-supervised manner where we can evaluate them based on the divergence of their predicted output $M(x_{k}^{\prime}(i);\theta) = p(y|x_{k}^{\prime}(i);\theta)$ from the ground-truth label or the prediction of the original corresponding training sample $M(x_{i};\theta) = p(y|x_{i};\theta)$ using the cross entropy loss, $\ell_{ce}$ :
+
+$$
+\ell_{\mathrm{eval}} = \ell_{ce}\left(y_{i}, M\left(x_{k}^{\prime}(i); \theta\right)\right)
+$$
+
+or
+
+$$
+\ell_{\mathrm{eval}} = \ell_{ce}\big(M(x_{i}; \theta), M(x_{k}^{\prime}(i); \theta)\big). \tag{4}
+$$
+
+The cross entropy criterion is not the only option here. Other choices for $\ell_{\mathrm{eval}}$ include (but not limited to) focal loss (Lin et al., 2017), and tilted loss (Li et al., 2021).
+
+For the final task loss, $\ell_{\mathrm{task}}$ , we can deploy a standard cross entropy loss over both training samples and their corresponding selected augmented samples:
+
+$$
+\ell_{\mathrm{task}} = \ell_{ce}(y_{i}, M(x_{i}; \theta)) + \frac{1}{k_{1}} \sum_{x \in X^{\prime *}(i)} \ell_{ce}\left(y_{i}, M(x; \theta)\right). \tag{5}
+$$
+
+Consistency Training (CT; Xie et al. 2020). In this configuration, we can employ the same $\ell_{\mathrm{eval}}$ introduced in Eq. (4). As a result, our method naturally selects top- $k_{1}$ most inconsistent augmented samples for each training sample. Then, the network is optimized to make predictions for input augmented samples that are consistent with predictions of their corresponding original training samples:
+
+$$
+\ell_{\mathrm{task}}^{\mathrm{CT}} = \ell_{ce}\left(y_{i}, M\left(x_{i}; \theta_{t}\right)\right) + \frac{1}{k_{1}} \sum_{x \in X^{\prime *}(i)} \ell_{ce}\left(M\left(x_{i}; \theta_{t - 1}\right), M\left(x; \theta_{t}\right)\right). \tag{6}
+$$
+
+As stated by Xie et al. (2020), the second term in Eq. (6) leverages the previous prediction of the network for each training example.
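+
+A rough sketch of this consistency objective is shown below; the previous-step prediction $M(x_i; \theta_{t-1})$ is approximated here by a detached forward pass of the current model, and all names are assumptions of the sketch.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def ct_task_loss(model, x, selected_aug, y):
+    """Sketch of the consistency-training loss in Eq. (6)."""
+    ce = F.cross_entropy(model(x), y)  # supervised term on the original input
+    with torch.no_grad():
+        # Prediction on the original input, used as a fixed target for the
+        # selected augmentations (approximates the previous-step prediction).
+        target = F.softmax(model(x), dim=-1)
+    cons = torch.stack([
+        -(target * F.log_softmax(model(xa), dim=-1)).sum(dim=-1).mean()
+        for xa in selected_aug
+    ]).mean()
+    return ce + cons
+```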
+
+Self-Distillation (Self-KD). In Self-KD, we first train a model, and then, use it $(M(\cdot ;\theta^{*}))$ as a teacher to train an identical model but initialized from scratch using KD (Furlanello et al., 2018). How to adjust $\ell_{\mathrm{eval}}$ and $\ell_{\mathrm{task}}$ is detailed in §3.2.
+
+# 3.2 DA for Teacher-Student (KD)
+
+In this setup, we have a teacher model, $T(\cdot; \psi^*)$ with parameters $\psi$ that is already trained on the training data, along with a student model, $M(\cdot; \theta)$ , which we aim to train. The selection criterion for augmented samples is to maximize divergence between the teacher and the student:
+
+$$
+\ell_{\mathrm{eval}}^{\mathrm{KD}} = \ell_{KL}\left(T\left(x_{k}^{\prime}(i); \psi^{*}\right), M\left(x_{k}^{\prime}(i); \theta\right)\right) \tag{7}
+$$
+
+where $\ell_{KL}$ refers to the KL divergence. After selecting the maximum-divergence augmented samples, we calculate the KD loss as follows:
+
+$$
+\begin{array}{l} \ell_ {\mathrm {t a s k}} ^ {\mathrm {K D}} = \alpha \ell_ {c e} \big (y _ {i}, M (x _ {i}; \theta) \big) + (1 - \alpha) \times \\ \frac {1}{k _ {1} + 1} \sum_ {x \in \left\{x _ {i} \right\} \cup X ^ {\prime *} (i)} \ell_ {K L} \left(T \left(x; \psi^ {*}\right), M (x; \theta)\right) \tag {8} \\ \end{array}
+$$
+
+where $\alpha$ is a hyperparameter.
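+
+A minimal sketch of Eqs. (7) and (8) for the teacher-student setting follows; `teacher` and `student` are assumed to return classification logits, and the distillation temperature is omitted for brevity.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def kd_eval_loss(teacher_logits, student_logits):
+    """Eq. (7): teacher-student divergence used to rank augmented candidates."""
+    return F.kl_div(F.log_softmax(student_logits, dim=-1),
+                    F.log_softmax(teacher_logits, dim=-1),
+                    log_target=True, reduction="batchmean")  # KL(teacher || student)
+
+def kd_task_loss(teacher, student, x, selected_aug, y, alpha=0.5):
+    """Eq. (8): CE on the original label plus KL to the teacher on the
+    original example and its selected augmentations."""
+    ce = F.cross_entropy(student(x), y)
+    inputs = [x] + list(selected_aug)
+    kl = torch.stack([
+        kd_eval_loss(teacher(xi), student(xi)) for xi in inputs
+    ]).mean()
+    return alpha * ce + (1.0 - alpha) * kl
+```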
+
+# 4 Experiments
+
+# 4.1 Setup
+
+To incorporate unlabelled augmented data into training, we adopt CT (Xie et al., 2020) and KD (Hinton et al., 2015). To this end, we conduct experiments under two settings:
+
+Standalone where we train a single model on the augmented data. In this setting, we seek to answer two questions: (1) How much is DA capable of improving the model generalization? (2) Does sample efficiency of Glitter hurt performance? For this purpose, we fine-tune RoBERTabase (Liu et al., 2019) using CT and Self-KD on augmented data.
+
+Distilled where we distill DistilRoBERTa (Sanh et al., 2019) (student) from RoBERTaLarge (Liu et al., 2019) (teacher) using the augmented data. Note that the teacher is already trained on the original data and DA comes into play only during distilling the student model. Our goal here is to investigate whether DA is an effective means in knowledge transfer to curb the capacity gap (Cho and Hariharan, 2019) between a large model and a small one.
+
+In both settings, we take the best performing model on the development set and evaluate it on the test set (depicted by Test). Additionally, for the standalone model setting, we also report results on the development set when models are trained only for 5 epochs (depicted by Dev), similar to CoDA (Qu et al., 2021), to make a comparison with baselines. Our Dev results are an average of 10 runs with different seeds. The implementation details and hyperparameters are provided in $\S A$ .
+
+# 4.1.1 DA Methods
+
+We leverage three widely used textual augmentation methods:
+
+1. EDA (Wei and Zou, 2019): We randomly replace $5\%$ of the tokens with their synonyms and randomly delete up to $10\%$ .
+2. Back-Translation (BT; Sennrich et al. 2016): We use fairseq (Ott et al., 2019) to translate sentences into German and then back into English. We do nucleus sampling (Holtzman et al., 2020) with $p = 0.9$ for both translations. We find that $p = 0.6$ works better on sentiment classification.
+
+3. Mask-and-Reconstruct (MR; Ng et al. 2020): We randomly mask $15\%$ of the tokens and construct a new sentence by sampling from a pre-trained $\mathrm{BERT}_{\mathrm{Large}}$ for masked tokens. We adopt top- $k$ sampling with $k = 20$ to select new tokens. For MNLI, we obtain better results with top-10 sampling.
+
+For each augmentation method, we generate 12 augmented examples per training instance for all datasets, except for large datasets—i.e., MNLI, QQP, and SQuAD—where we generate 8 augmented examples per training instance; a rough sketch of such offline pool generation is shown below.
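+
+As an illustration of how such a pool might be generated offline, the sketch below approximates the mask-and-reconstruct procedure with the Hugging Face `fill-mask` pipeline; the model name, whitespace tokenization, and sampling details are assumptions of this sketch, not the exact setup used in the paper.
+
+```python
+import random
+from transformers import pipeline
+
+# A fill-mask model stands in for the pre-trained BERT_Large used for MR.
+fill_mask = pipeline("fill-mask", model="bert-large-uncased")
+
+def mask_and_reconstruct(sentence, mask_ratio=0.15, top_k=20):
+    """Mask ~15% of whitespace tokens and rebuild the sentence by sampling
+    one of the top-k predictions for each masked position, one at a time."""
+    tokens = sentence.split()
+    n_mask = max(1, int(len(tokens) * mask_ratio))
+    for i in random.sample(range(len(tokens)), n_mask):
+        masked = tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:]
+        candidates = fill_mask(" ".join(masked), top_k=top_k)
+        pick = random.choices(candidates,
+                              weights=[c["score"] for c in candidates], k=1)[0]
+        tokens[i] = pick["token_str"].strip()
+    return " ".join(tokens)
+
+# e.g., pre-generate 12 augmented versions of one training sentence
+pool = [mask_and_reconstruct("the movie was surprisingly good") for _ in range(12)]
+```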
+
+# 4.1.2 Baselines
+
+Because the two environments—i.e., standalone and distilled—are different in nature, we compare Glitter with different baselines for each environment. For both, the first baseline is Vanilla-DA, which takes all augmented data into account without any filtering.
+
+The baselines for the standalone setting are CoDA (Qu et al., 2021), MMEL (Yi et al., 2021), and HiddenCut (Chen et al., 2021). For the distilled setting, we consider MATE-KD (Rashid et al., 2021).
+
+# 4.2 GLUE
+
+The GLUE benchmark (Wang et al., 2019) is a well-known suite of nine$^4$ tasks that aim at evaluating natural language understanding models. We present test results in the distilled mode in Table 1. Glitter consistently outperforms Vanilla-DA, while being faster to train. Specifically, Glitter achieves parity with Vanilla-DA for EDA in terms of the overall average score, while scoring $+0.2\%$ and $+0.4\%$ higher for BT and MR, respectively. Only in a few cases does Vanilla-DA negligibly outperform Glitter—e.g., on MRPC and STS-B for BT. Nonetheless, Glitter $8x/1x$ trains $50\%$ faster than Vanilla-DA $8x$ on average, and Glitter $8x/2x$ trains $30\%$ faster. Also, Glitter surpasses MATE-KD by $+0.2\%$ in the overall score. Unlike Glitter, MATE-KD introduces additional parameters to the model during training and trains drastically slower because it generates augmented examples on-the-fly. Moreover, Table 1 illustrates that MR yields the best test results across the three DA methods, except for SST where BT leads to better results. Based on this observation, we report results on MR-augmented data for all GLUE datasets except SST in the remainder of our experiments.
+
+| Method | CoLA Mcc | SST Acc | MRPC Acc/F1 | STS-B P/S | QQP Acc/F1 | MNLI-m/mm Acc | QNLI Acc | RTE Acc | Avg. |
| RoBLarge (teacher) | 63.8 | 96.8 | 90.6 | 92.4 | 81.5 | 90.3/89.8 | 94.8 | 88.3 | 87.3 |
| BERTLarge | 60.5 | 94.9 | 87.4 | 87.1 | 80.7 | 86.7/85.9 | 92.7 | 70.1 | 82.5 |
| DistilRoB | 55.2 | 93.9 | 85.9 | 86.0 | 80.3 | 84.0/83.1 | 90.6 | 73.6 | 81.1 |
| KD | 54.9 | 94.0 | 86.8 | 87.3 | 80.5 | 85.1/83.7 | 91.9 | 73.5 | 81.7 |
| Task-Aware DA |
| MATE-KD | 56.0 | 94.9 | 90.2 | 88.0 | 81.2 | 85.5/84.8 | 92.1 | 75.0 | 82.8 |
| EDA (Wei and Zou, 2019) |
| Vanilla-DA (8x) | 55.5 | 94.8 | 87.6 | 86.1 | 80.7 | 85.3/84.7 | 92.0 | 72.8 | 81.8 |
| Glitter | 54.5 | 95.1 | 87.5 | 86.5 | 80.4 | 85.4/84.8 | 92.1 | 73.2 | 81.8 |
| 8x/2x | 8x/1x | 8x/2x | 8x/2x | 8x/2x | 8x/2x | 8x/2x | 8x/1x | |
| Back-Translation |
| Vanilla-DA (8x) | 53.4 | 95.1 | 88.5 | 87.5 | 80.9 | 85.9/85.9 | 92.2 | 73.5 | 82.1 |
| Glitter | 54.9 | 95.1 | 88.4 | 87.3 | 80.9 | 86.2/85.3 | 92.2 | 73.7 | 82.3 |
| 8x/2x | 8x/1x | 8x/1x | 8x/2x | 8x/2x | 8x/2x | 8x/2x | 8x/2x | |
| Mask-and-reconstruct |
| Vanilla-DA (8x) | 58.8 | 94.5 | 88.7 | 87.0 | 80.9 | 85.8/84.9 | 91.8 | 74.0 | 82.6 |
| Glitter | 59.2 | 95.1 | 89.2 | 87.6 | 81.0 | 86.6/84.8 | 92.4 | 74.1 | 83.0 |
| 8x/1x | 8x/1x | 8x/2x | 8x/1x | 8x/2x | 8x/2x | 8x/2x | 8x/2x | |
+
+Table 1: Test results of the distilled experiment on GLUE. $(^{*})$ denotes results are taken verbatim from: BERTLarge (Devlin et al., 2019), and MATE-KD (Rashid et al., 2021). Bold and underlined numbers indicate the best and the second best results across the DA methods.
+
+| Method | CoLA Mcc | SST Acc | MRPC Acc/F1 | STS-B P/S | QQP Acc/F1 | MNLI-m Acc | QNLI Acc | RTE Acc | Avg. |
| RoBERTa | 61.9 | 95.4 | 88.6 | 89.3 | 80.4 | 87.6 | 93.0 | 81.6 | 84.7 |
| Self-KD | 61.7 | 95.7 | 89.0 | 89.0 | 80.8 | 88.3 | 93.0 | 81.7 | 84.9 |
| + Vanilla-DA | 61.5 | 96.1 | 88.9 | 89.7 | 81.0 | 88.0 | 92.9 | 81.1 | 84.9 |
| 8x | 8x | 8x | 8x | 8x | 8x | 8x | 12x | |
| + Glitter | 62.5 | 96.0 | 89.8 | 89.5 | 81.1 | 88.1 | 93.5 | 82.3 | 85.4 |
| 8x/1x | 8x/2x | 8x/2x | 8x/2x | 8x/2x | 8x/2x | 8x/2x | 12x/1x | |
| CT + Vanilla-DA | 59.4 | 95.6 | 89.0 | 85.8 | 80.3 | 82.5 | 92.0 | 80.2 | 83.1 |
| 8x | 8x | 8x | 10x | 8x | 8x | 8x | 10x | |
| CT + Glitter | 62.7 | 95.8 | 89.2 | 87.9 | 80.9 | 84.1 | 92.9 | 81.8 | 84.4 |
| 8x/1x | 8x/1x | 8x/1x | 10x/1x | 8x/2x | 8x/2x | 8x/2x | 10x/1x | |
+
+Table 2: Test results of the standalone experiments on GLUE using RoBERTabase.
+
+
+For the standalone mode, Tables 2 and 3 present the results on test and dev, respectively. Similar to the distilled setting, Glitter outperforms Vanilla-DA by $+0.5\%$ for both Self-KD and CT. Self-KD yields better results than CT on all GLUE tasks except CoLA. CT falls short on most GLUE tasks compared to the no-DA results—i.e., the top two rows in Table 2. This is why we only evaluated Glitter with Self-KD on the dev data. Glitter achieves superior performance gains compared to all three baselines on all datasets except QNLI. The key advantage of Glitter is that the training procedure remains intact.
+
+# 4.2.1 Out-of-Domain Generalization
+
+We also evaluate Glitter on OOD datasets. To this end, we test our models, already trained on GLUE tasks, on OOD datasets whose data distribution differs from the original data. In particular, here are our selected OOD datasets:
+
+- SST: IMDb (Maas et al., 2011), IMDbCont. (Gardner et al., 2020), and IMDbCAD (Kaushik et al., 2020), as done in Chen et al. (2021). Although both SST and IMDb datasets are collected on movie reviews, IMDb reviews tend to be substantially longer than SST sentences.
+- STS-B: SICK (Marelli et al., 2014), a semantic relatedness dataset, created from image and video captions. SICK and STS-B are collected on roughly identical domains, but from different sources.
+- QQP: $\mathrm{PAWS}_{\mathrm{QQP}}$ (Zhang et al., 2019), analogous to Chen et al. (2021), and MQP (McCreery et al., 2020), a medical question similarity dataset.
+
+| Method | SST Acc | MRPC F1 | MNLI-m Acc | QNLI Acc | RTE Acc | IMDb-Con. Acc | A-NLI Acc | HANS Acc |
| RoB▲ | 94.8 | 90.2 | 87.6 | 92.8 | 78.7 | - | - | - |
| CoDA▲ | 95.3 | 91.7 | 88.1 | 93.6 | 82.0 | - | - | - |
| HiddenCut▲ | 95.8 | 92.0 | 88.2 | 93.7 | 83.4 | 87.8 | 32.8 | 71.2 |
| MMEL† | 94.6 ± 0.8 | 91.9 ± 0.4 | 88.1 ± 0.1 | 93.2 ± 0.1 | 85.3 ± 1.0 | 90.5 ± 0.7 | 31.4 ± 0.6 | 74.5 ± 0.6 |
| RoB† | 94.3 ± 0.1 | 91.6 ± 0.5 | 87.7 ± 0.1 | 92.8 ± 0.2 | 84.5 ± 0.8 | 90.0 ± 0.4 | 30.8 ± 0.9 | 73.6 ± 0.7 |
| Self-KD | 94.3 ± 0.2 | 91.5 ± 0.3 | 87.9 ± 0.1 | 92.9 ± 0.2 | 84.0 ± 0.6 | 90.3 ± 0.5 | 30.9 ± 0.4 | 73.5 ± 0.7 |
| + Vanilla-DA | 95.4 ± 0.5 | 92.0 ± 0.3 | 88.2 ± 0.1 | 93.4 ± 0.1 | 84.4 ± 0.7 | 90.2 ± 0.4 | 31.3 ± 0.5 | 73.9 ± 0.4 |
| + Glitter | 95.7 ± 0.2 | 92.2 ± 0.5 | 88.2 ± 0.1 | 93.4 ± 0.1 | 85.6 ± 0.7 | 90.6 ± 0.2 | 31.8 ± 0.4 | 74.6 ± 0.3 |
+
+- MNLI: SciTail (Khot et al., 2018), collected from school-level science questions, and, similar to Chen et al. (2021), A-NLI (Nie et al., 2020) and HANS (McCoy et al., 2019).
+- RTE: HANS (McCoy et al., 2019).
+
+Table 10 in §B.1 showcases the OOD results for the distilled mode. Glitter outperforms Vanilla-DA in most cases, and is on par with it in nearly all remaining ones. The only exceptions are IMDb-Cont., MQP, and $\mathrm{PAWS}_{\mathrm{QQP}}$, where Vanilla-DA outperforms Glitter by almost $1\%$ on average. Also, none of the models generalize well to $\mathrm{PAWS}_{\mathrm{QQP}}$ and A-NLI, as their performance falls below that of a majority-class baseline. Moreover, a fine-tuned DistilRoBERTa achieves the best OOD performance on HANS, highlighting that DA is not actually helpful for OOD accuracy on HANS.
+
+Table 3 (the right side) reports the OOD results for standalone models. The complete results are presented in §B.2 (Table 11 on test and Table 12 on dev). Glitter overwhelmingly outperforms all the baselines, with a few exceptions. In the dev results, the fine-tuned model with no DA achieves the best OOD generalization on IMDb and SciTail, while HiddenCut scores the highest on A-NLI with a $1\%$ margin. Similarly, in the test results, Glitter trails Self-KD with no DA on IMDb, IMDb-CAD, and SciTail.
+
+# 4.3 HellaSwag
+
+HellaSwag (Zellers et al., 2019) is a dataset for situated commonsense reasoning that involves picking the best ending given a context. We augment contexts in HellaSwag using only BT to ensure that the choices remain meaningful for the augmented contexts. Because our standalone results have been consistent with the distilled results, we report our results only in the distilled mode.
+
+Table 3: Dev results of the standalone experiment on GLUE using RoBERTabase. (▲) denotes results taken verbatim from: RoB and CoDA (Qu et al., 2021), and HiddenCut (Chen et al., 2021). ($\dagger$) indicates the results are obtained from our implementation of MMEL (Yi et al., 2021).
+
+| Method | SQuAD EM/F1 | HellaSwag Acc |
| RoBLarge | 88.9/94.6 | 85.2 |
| DistilRoB | 80.9/87.9 | 42.9 |
| KD | 81.1/88.2 | 42.5 |
| + Vanilla-DA (8x) | 81.8/89.1 | 41.8 |
| + Glitter (8x/2x) | 83.6/90.3 | 44.1 |
+
+Table 4: Dev results of the distilled experiment on two downstream tasks.
+
+According to our results in Table 4, Glitter comfortably surpasses Vanilla-DA by a $+2.3\%$ margin.
+
+# 4.4 SQuAD
+
+SQuAD (Rajpurkar et al., 2016) is a crowd-sourced reading comprehension benchmark that consists of more than 100K questions, derived from Wikipedia passages. The task objective is to extract an answer span from a given question/passage pair. We augment questions in SQuAD v1.1 using only BT to ensure that the answer can still be found in the given passage for the augmented questions. Analogous to HellaSwag, we report our results only in the distilled mode. As shown in Table 4, Glitter outperforms Vanilla-DA by $+1.8\%$ in exact-match accuracy on the development set.
+
+We also evaluate our trained models under distribution shift by testing them on QA datasets from four different domains: Wikipedia, New York Times, Reddit, and Amazon product reviews (Miller et al., 2020). The OOD results are presented in Table 5. Glitter is consistently superior to Vanilla-DA in all four domains.
+
+# 5 Ablation Study and Discussion
+
+In this section, we aim to answer the following questions:
+
+| Method | Wiki EM | NYT EM | Reddit EM | Amzn EM |
| RoBLarge | 84.4 | 85.9 | 76.6 | 74.4 |
| DistilRoB | 76.6 | 78.1 | 66.2 | 62.9 |
| KD | 76.5 | 78.7 | 65.7 | 63.0 |
| + Vanilla-DA | 77.3 | 79.0 | 65.9 | 63.3 |
| + Glitter | 79.3 | 80.7 | 68.1 | 64.7 |
+
+- How does the training time of Glitter compare against Vanilla-DA?
+- Instead of adaptively selecting augmented data during training, can we pre-process the augmented examples to discard unnecessary ones prior to training?
+- How many augmented examples are required for Glitter to work?
+- Is our selection strategy in Glitter, based on sorting by $\ell_{eval}$, important?
+
+For this purpose, we conduct a detailed analysis on 4 GLUE tasks—i.e., SST, MRPC, QNLI, and RTE. We train models based on Vanilla-DA and Glitter using Self-KD and test them on the development set (the dev setting).
+
+Runtime Analysis. Throughout our experiments in §4, we compare Glitter with Vanilla-DA when the number of augmentations is the same for both methods—i.e., $8x$. A natural question is: how would both DA methods behave with less augmented data? To this end, we vary the augmentation size from $1x$ to $8x$ and train a separate Vanilla-DA model on each augmented dataset. We measure the average training time per epoch for all models. Figure 2 illustrates the dev accuracy as the training time increases. Glitter $8x/2x$ trains slightly faster than Vanilla-DA $6x$ on SST, MRPC, and QNLI, and Glitter $8x/1x$ trains faster than Vanilla-DA $4x$ on RTE. Glitter is the superior of the two on all datasets.
+
+Effect of Pre-processing Augmented Data. We conjecture that Glitter does not need any data engineering on augmented examples to obtain its performance gains. However, Vanilla-DA may require some pre-processing that weeds out potentially noisy data to become more effective. To investigate this, we exploit two pre-processing
+
+Table 5: OOD results for models trained on SQuAD and tested on QA datasets from four different domains (Miller et al., 2020).
+
+| Method | SST Acc | MRPC F1 | QNLI Acc | RTE Acc |
| Vanilla-DA | 95.1 | 92.2 | 93.3 | 84.8 |
| β = 0.7 | 95.1 | 92.5 | 93.4 | 84.8 |
| β = 0.9 | 95.0 | 92.2 | 93.3 | 83.8 |
| LP | 94.8 | 92.4 | 93.3 | 84.8 |
| Glitter | 95.8 | 92.8 | 93.4 | 85.9 |
| β = 0.7 | 95.0 | 91.5 | 93.5 | 85.2 |
| β = 0.9 | 95.0 | 92.5 | 93.3 | 84.1 |
| LP | 95.1 | 92.2 | 93.5 | 85.9 |
+
+Table 6: Dev results of self-KD exhibiting the effectiveness of different pre-processing techniques to filter augmented examples on 4 GLUE tasks. $\beta$ and LP denote a minimum confidence threshold and label preserving, respectively.
+
+techniques: (1) Confidence-based filtering: augmented examples for which the model's confidence is below a minimum threshold $\beta$ are discarded; (2) Label-preserving augmentation (LP): augmented examples for which the model predicts a different label than for the original example are discarded. The results, reported in Table 6, show no meaningful performance gains from these pre-processing techniques. For Vanilla-DA, a minimum confidence threshold of 0.7 performs slightly better as it brings minor improvements on MRPC (+0.3%) and QNLI (+0.1%), but still falls short of Glitter. On the other hand, applying these techniques slightly deteriorates the performance of Glitter in almost all cases. The only improvements are +0.1% on QNLI for LP and for $\beta = 0.7$.
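+
+The two filters are simple to express in code. The sketch below is a minimal illustration under our own assumptions (a `predict_proba` callable that maps a sentence to a probability tensor); it is not taken from the paper's implementation.
+
+```python
+def confidence_filter(predict_proba, augmented, beta=0.7):
+    """Keep augmented examples on which the model's confidence is at least beta."""
+    return [x for x in augmented if predict_proba(x).max().item() >= beta]
+
+def label_preserving_filter(predict_proba, originals, augmented):
+    """Keep augmented examples whose predicted label matches that of their original."""
+    kept = []
+    for x_orig, x_aug in zip(originals, augmented):
+        if predict_proba(x_orig).argmax().item() == predict_proba(x_aug).argmax().item():
+            kept.append(x_aug)
+    return kept
+```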
+
+Effect of Augmentation Size in Glitter. We explore how augmentation size affects the performance of Glitter. Throughout our experiments, we fix the augmentation size to $8x$, but now we reduce the augmentation size $K$ to $6x$ and $4x$, while keeping the selection size $k_{1}$ as before—i.e., 1 for RTE and 2 for the rest. Our results, shown in Table 7, reveal that when $K$ becomes close to $k_{1}$, Glitter's performance declines. Nonetheless, for a sufficiently large augmentation size, Glitter starts to shine. For SST and MRPC, the magic number is $8x$, whereas for QNLI and RTE, Glitter performs best with $6x$. Another parameter in Glitter is the selection size $k_{1}$. We find that for all tasks, the best value can be chosen from $\{1, 2\}$ (2 by default). Hence, tuning $k_{1}$ is straightforward and does not impose additional complexity on our method.
+
+Effect of Selection Strategy in Glitter. In this section, our objective is to assess whether our proposed selection algorithm is crucial in Glitter.
+
+Figure 2: Runtime analysis of DA when training RoBERTabase using self-KD on (a) SST, (b) MRPC, (c) QNLI, and (d) RTE. The red point signifies Glitter.
+
+| Method | | SST Acc | MRPC F1 | QNLI Acc | RTE Acc |
| Glitter | (8x) | 95.8 | 92.8 | 93.4 | 85.9 |
| Glitter | (6x) | 94.7 | 92.7 | 93.7 | 86.3 |
| Glitter | (4x) | 95.0 | 92.1 | 93.3 | 85.7 |
| Glitter-Rnd | (8x/2x) | 94.3 | 91.4 | 93.2 | 85.2 |
| Glitter-Rnd | (8x/1x) | 94.3 | 91.8 | 93.2 | 84.5 |
+
+Table 7: Dev results of self-KD for studying the effect of augmentation size and the selection algorithm for 4 GLUE tasks.
+
+To this end, we sample random augmented examples at each iteration, namely Glitter-Rnd, instead of selecting worst-case examples. As illustrated in Table 7 (the bottom two rows), the performance drops on all datasets—i.e., by $0.2\%$ on QNLI and by more than $1\%$ on the rest, confirming the effectiveness of our selection algorithm.
+
+# 6 Conclusion
+
+In this work, we proposed Glitter, a universal technique that can be freely applied to any DA method to enforce sample efficiency without introducing additional parameters or changing the training procedure. We extensively evaluated Glitter on a broad range of NLU tasks and in various widely used settings, including consistency training, self-distillation, and knowledge distillation, and demonstrated substantial efficiency gains without compromising effectiveness. Extending Glitter to auto-regressive models for machine translation and abstractive summarization is an interesting direction for future work.
+
+# References
+
+Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6168-6173, Florence, Italy. Association for Computational Linguistics.
+
+Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535-541.
+Jiaao Chen, Dinghan Shen, Weizhu Chen, and Diyi Yang. 2021. HiddenCut: Simple data augmentation for natural language understanding with better generalizability. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4380-4390, Online. Association for Computational Linguistics.
+Jang Hyun Cho and Bharath Hariharan. 2019. On the efficacy of knowledge distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4794-4802.
+Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. 2019. AutoAugment: Learning augmentation policies from data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Terrance DeVries and Graham W Taylor. 2017. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552.
+Jingfei Du, Edouard Grave, Belize Gunel, Vishrav Chaudhary, Onur Celebi, Michael Auli, Veselin Stoyanov, and Alexis Conneau. 2021. Self-training improves pre-training for natural language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5408-5418, Online. Association for Computational Linguistics.
+Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at
+
+scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics.
+Farzan Farnia and David Tse. 2016. A minimax approach to supervised learning. Advances in Neural Information Processing Systems, 29:4240-4248.
+Steven Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for NLP. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 968-988, Online. Association for Computational Linguistics.
+Tommaso Furlanello, Zachary Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks. In Proceedings of the 35th International Conference on Machine Learning, pages 1607-1616. PMLR.
+Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307-1323, Online. Association for Computational Linguistics.
+Hongyu Guo, Yongyi Mao, and Richong Zhang. 2019. Augmenting data with mixup for sentence classification: An empirical study. arXiv preprint arXiv:1905.08941.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
+Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.
+Zhiting Hu, Bowen Tan, Ruslan Salakhutdinov, Tom Mitchell, and Eric P Xing. 2019. Learning data manipulation for augmentation and weighting. arXiv preprint arXiv:1910.12795.
+Tianjian Huang, Shaunak Halbe, Chinnadhurai Sankar, Pooyan Amini, Satwik Kottur, Alborz Geramifard, Meisam Razaviyayn, and Ahmad Beirami. 2022. DAIR: Data augmented invariant regularization. In International Conference on Learning Representations.
+Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu.
+
+2020. TinyBERT: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163-4174, Online. Association for Computational Linguistics.
+Ehsan Kamalloo, Mehdi Rezagholizadeh, Peyman Passban, and Ali Ghodsi. 2021. Not far away, not so close: Sample efficient nearest neighbour data augmentation via MiniMax. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3522-3533, Online. Association for Computational Linguistics.
+Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In International Conference on Learning Representations.
+Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In Thirty-Second AAAI Conference on Artificial Intelligence.
+Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452-457, New Orleans, Louisiana. Association for Computational Linguistics.
+Tian Li, Ahmad Beirami, Maziar Sanjabi, and Virginia Smith. 2021. Tilted empirical risk minimization. In International Conference on Learning Representations.
+Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980-2988.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692.
+Shayne Longpre, Yi Lu, Zhucheng Tu, and Chris DuBois. 2019. An exploration of data augmentation and sampling techniques for domain-agnostic question answering. arXiv preprint arXiv:1912.02145.
+Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
+
+Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216-223, Reykjavik, Iceland. European Language Resources Association (ELRA).
+Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.
+Clara H. McCreery, Namit Katariya, Anitha Kannan, Manish Chablani, and Xavier Amatriain. 2020. Effective transfer learning for identifying similar questions: Matching user questions to COVID-19 FAQs. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3458--3465. Association for Computing Machinery.
+John Miller, Karl Krauth, Benjamin Recht, and Ludwig Schmidt. 2020. The effect of natural distribution shift on question answering models. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 6905-6916. PMLR.
+Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. In International Conference on Learning Representations.
+Nathan Ng, Kyunghyun Cho, and Marzyeh Ghassemi. 2020. SSMBA: Self-supervised manifold based data augmentation for improving out-of-domain robustness. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1268-1283, Online. Association for Computational Linguistics.
+Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+Yanru Qu, Dinghan Shen, Yelong Shen, Sandra Sajeev, Weizhu Chen, and Jiawei Han. 2021. CoDA:
+
+Contrast-enhanced and diversity-promoting data augmentation for natural language understanding. In International Conference on Learning Representations.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
+Ahmad Rashid, Vasileios Lioutas, and Mehdi Rezagholizadeh. 2021. MATE-KD: Masked adversarial Text, a companion to knowledge distillation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1062-1071, Online. Association for Computational Linguistics.
+Anna Rogers. 2021. Changing the world by changing the data. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2182-2194, Online. Association for Computational Linguistics.
+Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021. "everyone wants to do the model work, not the data work": Data cascades in high-stakes ai. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-15. Association for Computing Machinery.
+Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv:1910.01108.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
+Siamak Shakeri, Cicero Nogueira dos Santos, Henghui Zhu, Patrick Ng, Feng Nan, Zhiguo Wang, Ramesh Nallapati, and Bing Xiang. 2020. End-to-end synthetic data generation for domain adaptation of question answering systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5445-5460, Online. Association for Computational Linguistics.
+Dinghan Shen, Mingzhi Zheng, Yelong Shen, Yanru Qu, and Weizhu Chen. 2020. A simple but tough-to-beat data augmentation approach for natural language understanding and generation. arXiv preprint arXiv:2009.13818.
+
+Riccardo Volpi, Hongseok Namkoong, Ozan Sener, John Duchi, Vittorio Murino, and Silvio Savarese. 2018. Generalizing to unseen domains via adversarial data augmentation. Advances in neural information processing systems, 31.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
+Dongdong Wang, Yandong Li, Liqiang Wang, and Boqing Gong. 2020. Neural networks are more productive teachers than human raters: Active mixup for data-efficient knowledge distillation from a black-box model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1498-1507.
+Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.
+Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Conditional BERT contextual augmentation. In International Conference on Computational Science, pages 84-95. Springer International Publishing.
+Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. Advances in Neural Information Processing Systems, 33:6256-6268.
+Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, and Zhi-Ming Ma. 2021. Reweighting aug-
+
+mented samples by minimizing the maximal expected loss. In International Conference on Learning Representations.
+Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, Florence, Italy. Association for Computational Linguistics.
+Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations.
+Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, volume 28, pages 649-657. Curran Associates, Inc.
+Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298-1308, Minneapolis, Minnesota. Association for Computational Linguistics.
+
+# A Implementation Details
+
+# A.1 Fine-tuning details
+
+We adopted the publicly available pre-trained RoBERTa (Liu et al., 2019) and DistilRoBERTa (Sanh et al., 2019) models, using the Huggingface Transformers library (Wolf et al., 2020) and the PyTorch Lightning library$^5$.
+
+For the test settings, the model is evaluated on the development data once per epoch for small datasets and twice per epoch for large ones—i.e., SST-2, MNLI, QNLI, SQuAD, and HellaSwag. The best performing model is chosen for testing. Our learning rate schedule follows a linear decay scheduler with a warm-up, specified as a ratio of the total number of training steps. The maximum number of epochs is set to 20 for all tasks except SQuAD, following (Mosbach et al., 2021). For large datasets, we early stop with a patience of 10. The learning rate and the batch size are tuned for each task separately. The details of the hyperparameters are summarized in Table 9. We ran RoBERTabase experiments with similar hyperparameters, but with these exceptions: on QNLI, the learning rate, batch size, and weight decay are set to 3e-5, 64, and 0.1; on QQP, the warmup ratio is set to 0.06.
+
+For dev experiments, we follow CoDA (Qu et al., 2021) on the GLUE tasks. Specifically, we train the model for 5 epochs with a batch size of 32, learning rate 1e-5, warmup ratio 0.06, weight decay 0.1, and linear learning rate decay. For SQuAD and HellaSwag, the hyperparameters are detailed in Table 8.
+
+All experiments were conducted on two Nvidia Tesla V100 GPUs.
+
+| Hyperparam. | SQuAD | HellaSwag |
| Learning rate | 1.5e-5 | 1.5e-5 |
| Batch size | 16 | 32 |
| Max length | 512 | 512 |
| Max epochs | 3 | 20 |
| Warmup ratio | 0.06 | 0.06 |
| Grad. acc. steps | 4 | 1 |
| Weight Decay | 0.01 | 0.01 |
| temp. τ (for KD) | 5.0 | 10.0 |
+
+Table 8: Hyperparameters of DistilRoBERTa on two downstream tasks.
+
+# A.2 Knowledge distillation details
+
+We implemented knowledge distillation by caching the teacher's logits prior to training. We performed a grid search to find the best softmax temperature $\tau$ from $\{5.0, 10.0, 12.0, 20.0, 30.0\}$. The values of $\tau$ used in our experiments are reported in Tables 8 and 9 for DistilRoBERTa and RoBERTabase, respectively, with the exception of $\tau = 20.0$ on MRPC for RoBERTabase. The loss weight $\alpha$ in Eq. (8) is set to 0.5 for all tasks except CoLA, for which $\alpha = 0.75$.
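+
+The sketch below illustrates the caching step and a temperature-scaled KL term. The loader fields, the function names, and the $\tau^2$ scaling commonly applied when mixing KD with cross-entropy (Hinton et al., 2015) are assumptions made for illustration, not the exact implementation used here.
+
+```python
+import torch
+import torch.nn.functional as F
+
+@torch.no_grad()
+def cache_teacher_logits(teacher, loader):
+    """Run the frozen teacher once over the data and store its logits by example id."""
+    cache = {}
+    for batch in loader:
+        logits = teacher(batch["input_ids"])
+        for ex_id, l in zip(batch["id"], logits):
+            cache[int(ex_id)] = l.cpu()
+    return cache
+
+def kd_term(teacher_logits, student_logits, tau=12.0):
+    """Temperature-scaled KL(teacher || student), computed from cached teacher logits."""
+    t = F.softmax(teacher_logits / tau, dim=-1)
+    s = F.log_softmax(student_logits / tau, dim=-1)
+    # The tau**2 factor keeps gradient magnitudes comparable across temperatures.
+    return F.kl_div(s, t, reduction="batchmean") * tau ** 2
+```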
+
+# B OOD results
+
+# B.1 Distilled Mode
+
+OOD results for models trained in the distilled mode are presented in Table 10.
+
+# B.2 Standalone Mode
+
+Table 11 presents OOD results for models trained using test settings, and Table 12 (complementary to Table 3 in §4.2.1) presents OOD results for dev experiments.
+
+| Hyperparam. | CoLA | SST | MRPC | STS-B | QQP | MNLI-m/mm | QNLI | RTE |
| Learning rate | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 3e-5/1e-5 | 5e-5* | 1e-5 |
| Batch size | 32 | 64 | 16 | 32 | 64 | 64 | 128* | 32 |
| Max length | 128 | 256 | 128 | 128 | 256 | 256 | 256 | 256 |
| Warmup ratio | 0.1 | 0.06 | 0.06 | 0.06 | 0.1* | 0.08/0.06 | 0.08 | 0.06 |
| Gradient acc. steps | 1 | 4 | 1 | 1 | 4 | 4 | 4 | 1 |
| Weight Decay | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.0/0.1 | 0.0* | 0.1 |
| Softmax temp. τ (for KD) | 30.0 | 20.0 | 12.0* | 12.0 | 20.0 | 12.0 | 12.0 | 12.0 |
+
+Table 9: Hyperparameters of DistilRoBERTa on the GLUE benchmark. We used the same configuration for RoBERTabase albeit with a few exceptions marked by (*).
+
+| Trained On → Method | SST IMDb Acc | SST IMDb-Con. Acc | SST IMDb-CAD Acc | STS SICK P/S | QQP MQP Acc/F1 | QQP $\mathrm{PAWS}_{\mathrm{QQP}}$ Acc | MNLI SciTail Acc | MNLI A-NLI Acc | RTE HANS Acc |
| RoBLarge | 93.7 | 92.0 | 94.0 | 84.3 | 71.6 | 43.6 | 82.0 | 45.9 | 81.8 |
| DistilRoB | 90.2 | 87.6 | 92.5 | 79.6 | 67.3 | 36.3 | 74.8 | 27.8 | 71.3 |
| KD | 90.6 | 87.4 | 93.2 | 79.9 | 65.6 | 33.1 | 77.3 | 28.9 | 70.6 |
| EDA (Wei and Zou, 2019) |
| Vanilla-DA | 91.8 | 87.2 | 92.9 | 80.0 | 59.9 | 38.0 | 75.8 | 27.3 | 66.6 |
| Glitter | 91.2 | 87.1 | 94.0 | 80.0 | 64.0 | 36.6 | 75.6 | 28.8 | 65.6 |
| Back-Translation |
| Vanilla-DA | 92.2 | 87.9 | 92.1 | 80.3 | 69.6 | 35.0 | 76.5 | 27.9 | 68.0 |
| Glitter | 92.4 | 87.9 | 92.8 | 81.2 | 68.7 | 35.2 | 77.6 | 30.4 | 70.5 |
| Masked-and-reconstruct |
| Vanilla-DA | 91.8 | 88.8 | 92.9 | 80.4 | 68.5 | 33.7 | 77.4 | 28.5 | 69.3 |
| Glitter | 92.0 | 88.0 | 92.5 | 80.7 | 68.8 | 35.3 | 78.2 | 29.9 | 70.9 |
+
+Table 10: OOD results of models whose in-domain test results are reported in Table 1 for the distilled mode. Bold numbers indicate the best result across DistilRoB models.
+
+| Trained On → Method | SST | SST | SST | STS | QQP | QQP | MNLI | MNLI | RTE |
| IMDb | IMDb-Con. | IMDb-CAD | SICK | MQP | \( PAWS_{QQP} \) | SciTail | A-NLI | HANS |
| Acc | Acc | Acc | P/S | Acc/F1 | Acc | Acc | Acc | Acc |
| RoBBase | 92.2 | 89.1 | 94.3 | 80.6 | 70.7 | 38.6 | 78.5 | 31.4 | 78.5 |
| Self-KD | 92.6 | 89.1 | 95.0 | 80.2 | 70.9 | 37.6 | 79.4 | 32.1 | 79.5 |
| + Vanilla-DA | 91.8 | 88.8 | 94.8 | 81.5 | 71.4 | 38.8 | 78.4 | 31.5 | 79.3 |
| + Glitter | 92.0 | 89.6 | 94.8 | 81.7 | 72.1 | 39.4 | 79.1 | 32.7 | 80.1 |
| CT + Vanilla-DA | 90.6 | 88.1 | 92.1 | 76.6 | 70.6 | 38.3 | 76.6 | 30.3 | 78.4 |
| CT + Glitter | 92.2 | 88.6 | 93.7 | 79.4 | 70.7 | 38.8 | 77.0 | 31.6 | 80.2 |
+
+Table 11: OOD results of models whose in-domain test results are reported in Table 2 for the standalone experiment. Bold numbers indicate the best result.
+
+| Trained On → Method | SST | SST | SST | MNLI | MNLI | MNLI | RTE |
| IMDb | IMDb-Con. | IMDb-CAD | SciTail | A-NLI | HANS | HANS |
| Acc | Acc | Acc | Acc | Acc | Acc | |
| RoBBase | 91.9 ± 0.3 | 90.0 ± 0.4 | 94.1 ± 0.4 | 80.1 ± 0.4 | 31.0 ± 0.6 | 73.7 ± 0.7 | 78.3 ± 0.4 |
| HiddenCut▲ | - | 87.8 | 90.4 | - | 32.8 | 71.2* | - |
| MMEL† | 91.6 ± 0.1 | 90.5 ± 0.7 | 94.5 ± 0.4 | 79.7 ± 0.3 | 31.4 ± 0.6 | 74.5 ± 0.6 | 78.3 ± 0.3 |
| Self-KD | 91.9 ± 0.3 | 90.3 ± 0.5 | 94.4 ± 0.4 | 79.9 ± 0.3 | 30.9 ± 0.4 | 73.5 ± 0.7 | 78.2 ± 0.4 |
| + Vanilla-DA | 91.6 ± 0.4 | 90.2 ± 0.4 | 94.3 ± 0.3 | 79.3 ± 0.4 | 31.3 ± 0.5 | 73.9 ± 0.4 | 77.8 ± 0.3 |
| + Glitter | 91.7± 0.2 | 90.6± 0.2 | 94.8± 0.2 | 79.4 ± 0.1 | 31.8 ± 0.4 | 74.6 ± 0.3 | 78.4 ± 0.2 |
+
+Table 12: OOD results of models with dev settings in the standalone mode, same models whose results are reported in Table 3. (▲) denotes results taken verbatim from HiddenCut (Chen et al., 2021). ($^{\dagger}$) indicates the results are obtained from our implementation of MMEL (Yi et al., 2021). Bold numbers indicate the best result.
\ No newline at end of file
diff --git a/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/images.zip b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..658c77a93030a41c9fe6b6fc9a0b65192f7530c6
--- /dev/null
+++ b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03579146a14563afd8cca28cb0523f87601a7f9719aa3aa8145c6c8d9f9d16fe
+size 798093
diff --git a/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/layout.json b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2a37faa37552b47377abb7713a3074ab0137b258
--- /dev/null
+++ b/whenchosenwiselymoredataiswhatyouneedauniversalsampleefficientstrategyfordataaugmentation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59339a220313645ce64a42e0514859d59b7b7bf616f6f59f004b2c9d0179eb61
+size 497985
diff --git a/whydontpeopleusecharacterlevelmachinetranslation/00d6cfc1-f6a3-4a9a-9ffd-d2611b5dd2be_content_list.json b/whydontpeopleusecharacterlevelmachinetranslation/00d6cfc1-f6a3-4a9a-9ffd-d2611b5dd2be_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a6fa739dd92c0342d0c545e177368cbeaaa5d623
--- /dev/null
+++ b/whydontpeopleusecharacterlevelmachinetranslation/00d6cfc1-f6a3-4a9a-9ffd-d2611b5dd2be_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b75bc5e98632478ec8ddf86a33a6b9dadcaabb5882764a28901a8566d5b30e59
+size 154071
diff --git a/whydontpeopleusecharacterlevelmachinetranslation/00d6cfc1-f6a3-4a9a-9ffd-d2611b5dd2be_model.json b/whydontpeopleusecharacterlevelmachinetranslation/00d6cfc1-f6a3-4a9a-9ffd-d2611b5dd2be_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0b2dbb75e33e3c62e34410754c4ea4bff1d68569
--- /dev/null
+++ b/whydontpeopleusecharacterlevelmachinetranslation/00d6cfc1-f6a3-4a9a-9ffd-d2611b5dd2be_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ac7e7229e3754cb8dddaebce2c14560fd7d20864d3e19cfbf26ae9c775722b2
+size 179795
diff --git a/whydontpeopleusecharacterlevelmachinetranslation/00d6cfc1-f6a3-4a9a-9ffd-d2611b5dd2be_origin.pdf b/whydontpeopleusecharacterlevelmachinetranslation/00d6cfc1-f6a3-4a9a-9ffd-d2611b5dd2be_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d99d96fb863441f9fed30ec66fd8af8bd11ab600
--- /dev/null
+++ b/whydontpeopleusecharacterlevelmachinetranslation/00d6cfc1-f6a3-4a9a-9ffd-d2611b5dd2be_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a42b8f0ca3bf9a0720c5819cdb94c77aaed05ccdd102c14e8907a2a2f1e15904
+size 370603
diff --git a/whydontpeopleusecharacterlevelmachinetranslation/full.md b/whydontpeopleusecharacterlevelmachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b5c75b71e27150d49af132a7e4c0bd3ebed56aa
--- /dev/null
+++ b/whydontpeopleusecharacterlevelmachinetranslation/full.md
@@ -0,0 +1,403 @@
+# Why don't people use character-level machine translation?
+
+Jindrich Libovický1 and Helmut Schmid2 and Alexander Fraser2
+
+1 Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
+
+2 Center for Information and Speech Processing, LMU Munich, Germany
+
+libovicky@ufal.mff.cuni.cz {schmid, fraser}@cis.lmu.de
+
+# Abstract
+
+We present a literature and empirical survey that critically assesses the state of the art in character-level modeling for machine translation (MT). Despite evidence in the literature that character-level systems are comparable with subword systems, they are virtually never used in competitive setups in WMT competitions. We empirically show that even with recent modeling innovations in character-level natural language processing, character-level MT systems still struggle to match their subword-based counterparts. Character-level MT systems show neither better domain robustness, nor better morphological generalization, despite being often so motivated. However, we are able to show robustness towards source side noise and that translation quality does not degrade with increasing beam size at decoding time.
+
+# 1 Introduction
+
+The progress in natural language processing (NLP) brought by deep learning is often narrated as removing assumptions about the input data and letting the models learn everything end-to-end. One of the assumptions that seems to resist this trend is the (at least partially) linguistically motivated segmentation of the input in machine translation (MT) and NLP in general.
+
+For NMT, several papers have claimed parity of character-based methods with subword models, highlighting advantageous features of such systems. Very recent examples include Gao et al. (2020); Banar et al. (2020); Li et al. (2021). Despite this, character-level methods are rarely used as strong baselines in research papers and shared task submissions, suggesting that character-level models might have drawbacks that are not sufficiently addressed in the literature.
+
+In this paper, we examine what the state of the art in character-level MT really is. We survey existing methods and conduct a meta-analysis of the
+
+input segmentation methods used in WMT shared task submissions. We then systematically compare the most recent character-processing architectures, some of them taken from general NLP research and used for the first time in MT. Further, we propose an alternative two-step decoder architecture that unlike standard decoders does not suffer from a slow-down due to the length of character sequences. Following the recent findings on MT decoding, we evaluate different decoding strategies in the character-level context.
+
+Many previous studies on character-level MT drew their conclusions from experiments on rather small datasets and focused only on quantitatively assessed translation quality without further analysis. To compensate for this, we revisit and systematically evaluate the state-of-the-art approaches to character-level neural MT and identify their major strengths and weaknesses on large datasets.
+
+# 2 Character-Level Neural MT
+
+Character-level processing was hardly possible within the statistical MT paradigm that assumed the existence of phrases consisting of semantically rich tokens that roughly correspond to words. Neural sequence-to-sequence models (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017) do not explicitly work with this assumption. In theory, they can learn to transform any sequence into any sequence.
+
+The original sequence-to-sequence models used word-based vocabularies of a limited size, which led to a relatively frequent occurrence of out-of-vocabulary tokens. A typical solution to that problem is subword segmentation (Sennrich et al., 2016; Kudo and Richardson, 2018), which keeps frequent tokens intact and splits less frequent ones into smaller units.
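+
+As a quick illustration of this behaviour, the sketch below trains a tiny BPE vocabulary with the Hugging Face tokenizers library on a toy corpus; frequent words survive as single tokens while rare ones are split. The corpus, vocabulary size, and library choice are our own and serve only as an example.
+
+```python
+from tokenizers import Tokenizer, models, pre_tokenizers, trainers
+
+corpus = ["the cat sat on the mat"] * 100 + ["an uncharacteristically rare word"]
+
+tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
+tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
+trainer = trainers.BpeTrainer(vocab_size=30, special_tokens=["[UNK]"])
+tokenizer.train_from_iterator(corpus, trainer)
+
+# A frequent word typically stays whole; a rare word is split into smaller units.
+print(tokenizer.encode("the cat").tokens)               # e.g. ['the', 'cat']
+print(tokenizer.encode("uncharacteristically").tokens)  # e.g. ['u', 'n', 'ch', 'a', ...]
+```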
+
+Modeling language on the character level is attractive because it can help overcome several problems of subword models. One-hot representations
+
+of words or subwords do not reflect systematic character-level relations between words, potentially harming morphologically rich languages. With subwords, minor typos on the source side lead to radically different input representations resulting in low robustness towards source-side noise (Provilkov et al., 2020; Libovický and Fraser, 2020).
+
+Models using recurrent neural networks (RNNs) showed early success with character-level segmentation on the decoder side (Chung et al., 2016). Using character-level processing on the encoder side proved harder which was attributed to the features of the attention mechanism which can presumably benefit from semantically rich units (such as subwords) in the encoder. Following this line of thinking, Lee et al. (2017) introduced 1D convolutions with max-pooling that pre-process the character sequence into a sequence of latent word-like states. Coupled with a character-level decoder, they claimed to match the state-of-the-art subword-based models. Even though this architecture works well on the character level, it does not generalize further to the byte level (Costa-jussa et al., 2017). Hybrid approaches combining tokenization into words with the computation of character-based word representations were successfully used with RNNs (Luong and Manning, 2016; Gronroos et al., 2017; Ataman et al., 2019). Later, Cherry et al. (2018) showed that RNNs perform on par with subword models without changing the model architecture if the models are sufficiently large. Kreutzer and Sokolov (2018) support this by showing that RNN models which learn segmentation jointly with the rest of the model are close to character-level.
+
+Character-level modeling with Transformers appears to be more difficult. Gupta et al. (2019) used Transparent Attention (Bapna et al., 2018) to train deep character-level models and needed up to 32 layers to close the gap between the BPE and character models, which makes the model too large for practical use. Libovicky and Fraser (2020) narrowed the gap between subword and character modeling using curriculum learning by finetuning subword models to character-level.
+
+Gao et al. (2020) proposed adding a convolutional sub-layer in the Transformer layers. At the cost of a $30\%$ increase in parameter count, they managed to narrow the gap between subword- and character-based models by half. Banar et al. (2020) reused the convolutional preprocessing layer with constant-size segments of Lee et al. (2017) in a
+
+
+Figure 1: A timeline of research interest in character-level MT. Months of arXiv pre-print publication of the papers cited in Sections 2 and 3. Transformer repr. means pre-trained general-purpose sentence representation, not MT models.
+
+Transformer model for translation into English. Without changing the decoder, they reached comparable, but usually slightly worse, translation quality compared to BPE-based models.
+
+Shaham and Levy (2021a) revisited character-and byte-level MT on rather small IWSLT datasets. Their results show that character-level and byte-level models are usually worse than BPE models, but byte-based models without embedding layers often outperform BPE-based models in the out-of-English direction. Using similarly small datasets, Li et al. (2021) claim that character-level modeling outperforms BPE when translating into fusional, agglutinative, and introflexive languages.
+
+Nikolov et al. (2018) experimented with character-level models for romanized Chinese. These models performed comparable to models using logographic signs, but significantly worse than models using subwords. Zhang and Komachi (2018) argued that signs in logographic languages carry too much information and were able to improve the translation quality by segmenting Chinese and Japanese into sub-character units while keeping subword segmentation on the English side.
+
+Little is known about other properties of character-level MT beyond the overall translation quality. Sennrich (2017) prepared a set of contrastive English-German sentence pairs and tested them using shallow RNN-based models. They observed that character-based models transliterated better, but captured morphosyntactic agreement worse. Libovický and Fraser (2020) evaluated Transformer-based character-level models using MorphEval and came to mixed conclusions.
+
+Gupta et al. (2019) and Libovický and Fraser (2020) make claims about the noise robustness of the character-level models using synthetic noise. Li et al. (2021) evaluated domain robustness by training models on small domain-specific datasets and evaluating them on unrelated domains, claiming the superiority of character-level models in this setup. On the other hand, Gupta et al. (2019) evaluated the domain robustness in a more natural setup and did not observe higher robustness when evaluating general domain models on domain-specific tests compared to BPE.
+
+Another consideration is longer training and inference times. Character-level systems are significantly slower due to the increased sequence length. Libovicky and Fraser (2020) reported a 5.6-fold slowdown at training time and a 4.7-fold slowdown at inference time compared to subword models.
+
+Recent research on character-level modeling goes beyond MT. Pre-trained multilingual representations are a particularly active area. Clark et al. (2021) propose CANINE. The model shrinks character sequences into fewer hidden states (similar to Lee et al., 2017). They use local self-attention and strided convolutions (instead of highway layers and max-pooling as in Lee's work). Their model is either trained using the masked-language-modeling objective (Devlin et al., 2019) with subword supervision, or in an encoder-decoder setup similar to Raffel et al. (2020). Both methods reach a representation quality comparable to similar subword models.
+
+ByT5 (Xue et al., 2021a) and Charformer (Tay et al., 2021) are based on the mT5 model (Xue et al., 2021b) which uses sequence-to-sequence denoising pre-training. Whereas byT5 only uses byte sequences instead of subwords and differs in hyperparameters, Charformer uses convolution and combines character blocks to obtain latent subword representations. These models mostly reach similar results to sub-word models, occasionally outperforming a few of them, in the case of Charformer without a significant slowdown.
+
+# 3 WMT submissions
+
+The Conference on Machine Translation (WMT) organizes annual shared tasks in various use cases of MT. Unlike most other research papers, the shared task submissions focus on translation quality rather than on the novelty of the presented ideas. Therefore, we assume that, if character-level models were a fully-fledged alternative to subword models, at least some systems submitted to the shared tasks would use character-level models.
+
+We annotated recent system description papers with the input and output segmentation method they used. We focused on information about experiments with character-level models.
+
+
+Figure 2: A boxplot of vocabulary sizes of WMT systems from 2018-2020; the median is denoted by the orange line.
+
+Since we are primarily interested in the Transformer architecture that became the standard after 2017, we only included system description papers from 2018-2020 (Bojar et al., 2018; Barrault et al., 2019, 2020). Transformers were used in $81\%$, $87\%$, and $97\%$ of the systems in the respective years. We included the main task at WMT, news translation, and two minor tasks where character-level methods might help: translation robustness (Li et al., 2019; Specia et al., 2020) and translation between similar languages (ibid.).
+
+Almost all systems use a subword-based vocabulary (BPE: $81\%$, $71\%$, $66\%$ in the respective years; SentencePiece: none in 2018, $9\%$ and $25\%$ in the following ones). Purely word-based segmentation (none in 2018, $2\%$ and $3\%$ in the later years) and morphological segmentation ($4\%$, $2\%$, $3\%$ in the respective years) are rarely used. The average vocabulary size decreases over time (see Figure 2), with the median size remaining at $32k$ in the last two years. The reason for the decreasing average is probably a higher proportion of systems for low-resource languages, where a smaller vocabulary leads to better translation quality (Sennrich and Zhang, 2019).
+
+Among the 145 annotated system description papers, there were only two that used character-level segmentation. Mahata et al. (2018) used a character-level model for Finnish-to-English translation. This system, however, makes many suboptimal design choices and ended up as the last one in the manual evaluation. Scherrer et al. (2019) experimented with character-level systems for similar language translation and observed that characters outperform other segmentations for Spanish-Portuguese translation, but not for Czech-Polish. Knowles et al. (2020) experimented with different subword vocabulary sizes for English-Inuktikut translation and reached the best results using a subword vocabulary of size 1k, which makes it close to the character level. Most of the papers do not even mention character-level segmentation as a viable
+
+alternative they would like to pursue in future work (7% in 2018, 2% in 2019, none in 2020).
+
+Character-level methods were more frequently used in WMT17 with RNN-based systems, especially for translation of Finnish (Escolano et al., 2017; Östling et al., 2017) and less successfully for Chinese (Holtz et al., 2017) and the automatic post-editing task (Variš and Bojar, 2017).
+
+On the other hand, Figure 1 shows that the research interest in character-level methods remains approximately the same, or may have slightly increased. For practical solutions in WMT systems, we clearly show that system designers in the WMT community have avoided character-level models.
+
+We speculate that the main reasons for not considering character-level modeling are its lower efficiency and the fact that the literature shows no clear improvement of translation quality. Most of the submissions use back-translation (85%, 82%, and 94% in the respective years), often iterated several times (11%, 20%, 16%), which requires both training and inference on large datasets. With the approximately 5-fold slowdown, WMT-scale experiments on character models are not easily tractable.
+
+# 4 Evaluated Models
+
+We evaluate several Transformer-based architectures for character-level MT. A major issue with character-level sequence processing is the sequence length and low information density compared to subword sequences. Architectures for character-level sequence processing typically address this issue by locally processing and shrinking the sequences into latent word-like units. In our experiments, we explore several ways to do this.
+
+First, we directly use character embeddings as input to the Transformer. Second, following Banar et al. (2020), we use the convolutional character processing layers proposed by Lee et al. (2017). Third, we replace the convolutions with local self-attention as proposed in the CANINE model (Clark et al., 2021). Finally, we use the recently proposed Charformer architecture (Tay et al., 2021).
+
+Lee-style encoding. Lee et al. (2017) process the sequence of character embeddings with convolutions of different kernel sizes and numbers of output channels. In the original paper, this was followed by 4 highway layers (Srivastava et al., 2015). In our preliminary experiments, we observed that too deep a stack of highway layers leads to diminishing gradients, so we replaced the last two highway layers with feed-forward sublayers as used in the Transformer architecture (Vaswani et al., 2017).
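+
+To make the computation concrete, the following is a minimal PyTorch sketch of such a character-processing block. Layer sizes follow Appendix B; the placement of the stride-based max-pooling that performs the downsampling is our assumption, not a detail taken from the original implementation.
+
```python
import torch
import torch.nn as nn


class LeeStyleEncoder(nn.Module):
    """Sketch: parallel character convolutions, pooling-based downsampling,
    two highway layers, and two Transformer-style feed-forward sublayers."""

    def __init__(self, vocab_size=300, char_dim=64, model_dim=512,
                 kernel_sizes=(1, 3, 5, 7, 9),
                 num_filters=(128, 256, 512, 512, 256), downsample=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(char_dim, f, k, padding=k // 2)
             for k, f in zip(kernel_sizes, num_filters)])
        # Max-pooling with stride = downsampling factor (our assumption about
        # where the sequence shrinking happens).
        self.pool = nn.MaxPool1d(downsample, stride=downsample, ceil_mode=True)
        self.proj = nn.Linear(sum(num_filters), model_dim)
        self.highway_h = nn.ModuleList([nn.Linear(model_dim, model_dim) for _ in range(2)])
        self.highway_g = nn.ModuleList([nn.Linear(model_dim, model_dim) for _ in range(2)])
        self.ffn = nn.ModuleList([
            nn.Sequential(nn.Linear(model_dim, 4 * model_dim), nn.ReLU(),
                          nn.Linear(4 * model_dim, model_dim)) for _ in range(2)])
        self.norm = nn.ModuleList([nn.LayerNorm(model_dim) for _ in range(2)])

    def forward(self, char_ids):                   # (batch, n_chars)
        x = self.embed(char_ids).transpose(1, 2)   # (batch, char_dim, n_chars)
        x = torch.cat([torch.relu(conv(x)) for conv in self.convs], dim=1)
        x = self.pool(x).transpose(1, 2)           # (batch, n_chars / downsample, filters)
        x = self.proj(x)
        for h, g in zip(self.highway_h, self.highway_g):
            gate = torch.sigmoid(g(x))             # highway: gated residual transform
            x = gate * torch.relu(h(x)) + (1.0 - gate) * x
        for ffn, norm in zip(self.ffn, self.norm): # Transformer-style FF sublayers
            x = norm(x + ffn(x))
        return x        # latent word-like units fed to the Transformer encoder
```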
+
+CANINE. Clark et al. (2021) experiment with character-level pre-trained sentence representations. The character-processing architecture is in principle similar to Lee et al. (2017) but uses more modern building blocks. Character embeddings are processed by a Transformer layer with local self-attention which only allows the states to attend to states in their neighborhood. This is followed by downsampling using strided convolution.
+
+Originally, CANINE used a local self-attention span as long as 128 characters. In the case of MT, this would usually span the entire sentence, so we use significantly shorter spans.
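+
+As an illustration, the following is a minimal PyTorch sketch of the two ingredients: local self-attention, realized here with a band-shaped attention mask, and strided-convolution downsampling. The default window and downsampling values are illustrative rather than the exact configuration of our models.
+
```python
import torch
import torch.nn as nn


def local_attention_mask(seq_len, window):
    """Boolean mask (True = blocked) restricting attention to a +/- window
    of neighboring character positions."""
    positions = torch.arange(seq_len)
    return (positions[None, :] - positions[:, None]).abs() > window


class CanineStyleDownsampler(nn.Module):
    """Sketch: a Transformer layer with local self-attention followed by a
    strided convolution that shrinks the character sequence."""

    def __init__(self, model_dim=512, heads=8, window=12, downsample=3):
        super().__init__()
        self.window = window
        self.local_layer = nn.TransformerEncoderLayer(model_dim, heads, batch_first=True)
        self.downsample = nn.Conv1d(model_dim, model_dim,
                                    kernel_size=downsample, stride=downsample)

    def forward(self, char_states):                # (batch, n_chars, model_dim)
        mask = local_attention_mask(char_states.size(1), self.window)
        x = self.local_layer(char_states, src_mask=mask.to(char_states.device))
        x = self.downsample(x.transpose(1, 2)).transpose(1, 2)
        return x                                   # (batch, n_chars / downsample, model_dim)
```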
+
+Charformer. Unlike the previous approaches, Charformer (Tay et al., 2021) does not apply a nonlinearity to the embeddings and obtains latent subword representations by repeated averaging of character embeddings. First, it processes the sequence with a 1D convolution, so that the states are aware of their positions within their local neighborhood. Second, non-overlapping character $n$-grams of length up to $N$ are represented by averages of the respective character embeddings. This means that for each character, there is a vector that represents the character as a member of $n$-grams of length 1 to $N$. In the third step, the candidate blocks are scored with a scoring function (a linear transformation), which can be interpreted as attention over the $N$ different $n$-gram lengths. The attention scores are used to compute a weighted average over the $n$-gram representations. Finally, the sequence is downsampled using mean-pooling with window size and stride $N$ (i.e., the maximum $n$-gram size).
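+
+The following PyTorch sketch spells out these four steps for a single gradient-based subword tokenization (GBST) block. The tensor bookkeeping and the convolution kernel size are our simplifications, so treat it as an illustration of the mechanism rather than a reimplementation of Charformer.
+
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GBSTSketch(nn.Module):
    """Sketch of gradient-based subword tokenization: soft attention over
    n-gram block averages, followed by mean-pooling for downsampling."""

    def __init__(self, model_dim=512, max_ngram=3):
        super().__init__()
        self.max_ngram = max_ngram
        # 1D convolution so characters are aware of their local neighborhood.
        self.conv = nn.Conv1d(model_dim, model_dim, kernel_size=3, padding=1)
        # Scoring function over the candidate n-gram representations.
        self.score = nn.Linear(model_dim, 1)

    def forward(self, char_emb):                   # (batch, n_chars, dim)
        x = self.conv(char_emb.transpose(1, 2)).transpose(1, 2)
        candidates = []
        for n in range(1, self.max_ngram + 1):
            # Average non-overlapping blocks of n characters ...
            blocks = F.avg_pool1d(x.transpose(1, 2), n, stride=n, ceil_mode=True)
            # ... and copy each block average back to its n characters.
            expanded = blocks.repeat_interleave(n, dim=2)[:, :, :x.size(1)]
            candidates.append(expanded.transpose(1, 2))
        cand = torch.stack(candidates, dim=2)      # (batch, n_chars, N, dim)
        weights = F.softmax(self.score(cand), dim=2)   # attention over n-gram sizes
        mixed = (weights * cand).sum(dim=2)        # (batch, n_chars, dim)
        # Downsample by mean-pooling with window and stride N.
        out = F.avg_pool1d(mixed.transpose(1, 2), self.max_ngram,
                           stride=self.max_ngram, ceil_mode=True)
        return out.transpose(1, 2)                 # (batch, n_chars / N, dim)
```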
+
+Whereas Lee-style encoding allows using low-dimensional character embeddings and keeps most parameters in the convolutional layers, CANINE and Charformer need the character representation to have the same dimension as the following Transformer layer stack.
+
+Two-step decoding. The architectures mentioned above allow the Transformer layers to operate more efficiently on a shorter and more information-dense sequence of states. While decoding, however, we need to generate the target character sequence at its original length by outputting a block of characters in each decoding step. Our preliminary experiments showed that generating blocks of characters non-autoregressively leads to incoherent output. Therefore, we propose a two-step decoding architecture in which the stack of Transformer layers operating over the downsampled sequence is followed by a lightweight autoregressive LSTM decoder (see Figure 3).
+
+Figure 3: Encoder-decoder architecture with character-processing layers and a two-step decoder with a lightweight LSTM for output coherence.
+
+The input to the LSTM decoder is a concatenation of the embedding of the previously generated character and a projection of the Transformer decoder output state. At inference time, the LSTM decoder generates a block of characters and inputs them to the character-level processing layer. The Transformer decoder computes an output state that the LSTM decoder uses to generate another character block. More details are in Appendix A.
+
+Modifying Charformer for two-step decoding would require long padding at the beginning of the sequence, which caused the decoder to diverge. For this reason, we use Lee-style encoding on the decoder side when using Charformer in the encoder.
+
+First, we conduct all our experiments on the small IWSLT datasets. Then we evaluate the most promising architectures on larger datasets.
+
+# 5 Experiments on Small Data
+
+We implement the models using Huggingface Transformers (Wolf et al., 2020). We take the CANINE layer from Huggingface Transformers and use an independent implementation of Charformer. Our source code is available on GitHub. Hyperparameters and other experimental details can be found in Appendix B.
+
+# 5.1 Experimental Setup
+
+We evaluate the models on translation between English and German, French, and Arabic (with English as both source and target) using the IWSLT 2017 datasets (Cettolo et al., 2017), with around 200k training sentences for each language pair (see Appendix B for details).
+
+For the subword models, we tokenize the input using the Moses tokenizer (Koehn et al., 2007) and then further split the words into subword units using BPE (Sennrich et al., 2016) with 16k merge operations. For the character models, we limit the vocabulary to 300 UTF-8 characters.
+
+We use the Transformer Base architecture (Vaswani et al., 2017) in all experiments. We make no changes to it in the subword and baseline character experiments. In the later experiments, we replace the embedding lookup with the character-processing architectures. For the Lee-style encoder, we choose hyperparameters similar to those in related work (Banar et al., 2020). For the experiments with Charformer and CANINE, we set the hyperparameters such that they cover the same character span before downsampling as the Lee-style encoder, which causes these models to have fewer parameters than the Lee-style encoder. Note, however, that for both Charformer and CANINE, the number of parameters is almost independent of the character window width. For all three character-processing architectures, we experiment with downsampling factors of 3 and 5 (a 16k BPE vocabulary corresponds to a downsampling factor of about 4 in English).
+
+# 5.2 Translation Quality
+
+We evaluate the translation quality using the BLEU score (Papineni et al., 2002), the chrF score (Popovic, 2015) (as implemented in SacreBLEU; Post, 2018), and the COMET score (Rei et al., 2020). We run each experiment 4 times and report the mean value and standard deviation.
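+
+For reference, BLEU and chrF can be computed with SacreBLEU roughly as follows (toy inputs; COMET additionally requires a separate package and a downloaded model, which we omit here). Depending on the SacreBLEU version, chrF may be reported on a 0-1 or a 0-100 scale.
+
```python
import sacrebleu

# Toy system outputs and references (one reference per sentence).
hypotheses = ["Das ist ein Test .", "Hallo Welt ."]
references = ["Das ist ein Test .", "Hallo , Welt ."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.3f}")
```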
+
+The results are presented in Table 1. Except for translation into Arabic, where character methods outperform BPEs (which is consistent with the findings of Shaham and Levy, 2021a and Li et al., 2021), subword methods are always better than characters.
+
+The Lee-style encoder outperforms the two more recent methods and the method of using the character embeddings directly. Charformer performs similarly to using character embeddings directly,
+
+| Model | Enc. Dec. | Char. proc. params | From English | Into English | |
| ar | de | fr | ar | de | fr | |
| BLEU | chrF | COMET | BLEU | chrF | COMET | BLEU | chrF | COMET | BLEU | chrF | COMET | BLEU | chrF | COMET | BLEU | chrF | COMET |
| BPE 16k | 16516 | ±0.2 | .436 | .258 | 27.7 | .555 | .254 | 36.4 | .619 | .408 | 29.7 | .521 | .325 | 31.6 | .554 | .379 | 36.2 | .592 | .527 | |
| Vanilla char. | 658 | ±0.4 | .447 | .267 | 25.6 | .550 | .165 | 34.6 | .611 | .350 | 27.7 | .518 | .238 | 29.4 | .545 | .327 | 34.7 | .585 | .487 | |
| ±0.04 | ±0.016 | ±0.7 | ±0.005 | ±0.34 | ±0.7 | ±0.002 | ±0.20 | ±0.8 | ±0.006 | ±0.34 | ±0.7 | ±0.005 | ±0.29 | ±0.4 | ±0.003 | ±0.12 | |
| Lee-style | 3 | — | 13.1 | .448 | .274 | 25.9 | .552 | .200 | 35.2 | .613 | .383 | 28.0 | .521 | .257 | 30.2 | .551 | .345 | 35.3 | .588 | |
| ±0.5 | ±0.009 | ±0.001 | ±0.01 | ±0.023 | ±0.4 | ±0.002 | ±0.010 | ±0.4 | ±0.002 | ±0.015 | ±0.5 | ±0.003 | ±0.22 | ±0.2 | ±0.2 | ±0.13 | |
| 5 | — | 12.5 | .439 | .245 | 25.0 | .545 | .140 | 33.2 | .602 | .303 | 24.9 | .491 | .090 | 28.9 | .543 | .311 | 34.4 | .583 | |
| ±0.1 | ±0.02 | ±0.13 | ±0.4 | ±0.002 | ±0.13 | ±0.1 | ±0.003 | ±0.017 | ±4.4 | ±0.042 | ±228 | ±0.3 | ±0.002 | ±0.019 | ±0.3 | ±0.002 | |
| 3 | 3 | 11.0 | .432 | .143 | 23.4 | .541 | .065 | 31.7 | .603 | .277 | 25.6 | .509 | .170 | 28.0 | .537 | .262 | 33.3 | .577 | |
| ±0.2 | ±0.002 | ±0.013 | ±0.4 | ±0.002 | ±0.028 | ±0.5 | ±0.002 | ±0.012 | ±0.3 | ±0.001 | ±0.016 | ±0.3 | ±0.002 | ±0.019 | ±0.4 | ±0.001 | |
| 5 | 5 | 9.4 | .418 | .006 | 21.8 | .524 | -.106 | 28.7 | .584 | .094 | 23.7 | .492 | .033 | 25.5 | .519 | .131 | 30.9 | .561 | |
| ±0.5 | ±0.003 | ±0.015 | ±0.3 | ±0.002 | ±0.021 | ±1.7 | ±0.011 | ±0.096 | ±0.3 | ±0.001 | ±0.015 | ±0.3 | ±0.003 | ±0.019 | ±0.5 | ±0.004 | |
| Charformer | 3 | — | 1320 | .448 | .261 | 25.9 | .550 | .167 | 32.9 | .607 | .300 | 27.3 | .520 | .229 | 29.9 | .548 | .327 | 35.1 | .588 | |
| ±0.3 | ±0.002 | ±0.011 | ±0.5 | ±0.004 | ±0.026 | ±0.3 | ±0.003 | ±0.018 | ±0.5 | ±0.002 | ±0.028 | ±0.3 | ±0.001 | ±0.008 | ±0.3 | ±0.002 | |
| 5 | — | 1320 | .435 | .179 | 24.2 | .535 | .060 | 31.3 | .591 | .171 | 25.1 | .500 | .103 | 28.1 | .535 | .227 | 33.7 | .577 | |
| ±0.3 | ±0.002 | ±0.020 | ±0.6 | ±0.003 | ±0.027 | ±0.4 | ±0.003 | ±0.026 | ±0.6 | ±0.002 | ±0.022 | ±0.4 | ±0.003 | ±0.022 | ±0.2 | ±0.002 | |
| 3 | 3 | 1165 | .431 | .000 | 23.2 | .540 | .037 | 30.6 | .601 | .192 | 24.5 | .506 | .125 | 27.5 | .538 | .225 | 32.6 | .576 | |
| ±0.5 | ±0.004 | ±0.000 | ±0.5 | ±0.004 | ±0.034 | ±0.4 | ±0.003 | ±0.031 | ±0.4 | ±0.003 | ±0.021 | ±0.5 | ±0.003 | ±0.021 | ±0.3 | ±0.001 | |
| 5 | 5 | 1165 | 8.4 | .402 | -.121 | 19.9 | .510 | -.250 | 27.4 | .575 | -.039 | 18.4 | .448 | -.248 | 23.5 | .511 | .018 | 29.2 | |
| ±0.2 | ±0.003 | ±0.023 | ±0.2 | ±0.002 | ±0.027 | ±0.7 | ±0.005 | ±0.029 | ±3.1 | ±0.029 | ±173 | ±0.5 | ±0.003 | ±0.029 | ±0.7 | ±0.002 | |
| Canine | 3 | — | 6446 | 12.6 | .440 | .195 | 25.4 | .547 | .121 | 33.2 | .606 | .269 | 26.1 | .512 | .137 | 29.1 | .546 | .273 | 34.5 | |
| ±0.3 | ±0.002 | ±0.019 | ±0.5 | ±0.002 | ±0.024 | ±0.6 | ±0.004 | ±0.024 | ±0.5 | ±0.004 | ±0.024 | ±0.4 | ±0.002 | ±0.20 | ±0.4 | ±0.003 | |
| 5 | — | 7470 | 11.2 | .421 | .045 | 22.5 | .524 | -.095 | 30.5 | .584 | .273 | 22.1 | .477 | -.121 | 27.3 | .528 | .115 | 32.5 | |
| ±0.2 | ±0.001 | ±0.05 | ±0.4 | ±0.004 | ±0.027 | ±0.5 | ±0.004 | ±0.029 | ±0.6 | ±0.001 | ±0.023 | ±0.3 | ±0.001 | ±0.022 | ±0.5 | ±0.004 | |
| 3 | 6291 | 9.4 | .399 | .035 | 21.7 | .516 | -.050 | 29.6 | .573 | .113 | 23.4 | .490 | .007 | 25.0 | .523 | .120 | 32.1 | .570 | |
| ±0.6 | ±1.04 | ±0.023 | ±0.3 | ±0.003 | ±1.77 | ±0.4 | ±0.096 | ±0.027 | ±1.1 | ±1.194 | ±130 | ±0.8 | ±0.008 | ±1.57 | ±0.3 | ±1.02 | |
| 5 | 5 | 7444 | 6.4 | .344 | -.384 | 19.0 | .490 | -.421 | 27.8 | .531 | .046 | 15.4 | .389 | -.516 | 23.0 | .494 | -.112 | 27.6 | |
| ±0.3 | ±1.07 | ±0.041 | ±0.3 | ±205 | ±236 | ±0.8 | ±201 | ±019 | ±0.1 | ±0.97 | ±070 | ±0.4 | ±201 | ±210 | ±0.4 | ±0.99 | |
+
+Table 1: Translation quality of the models on the IWSLT data. The fourth column shows the size of the character-processing layers expressed as the vocabulary size of Transformer Base having the same number of parameters in the embeddings.
+
+CANINE is significantly worse. The results are mostly consistent across the language pairs.
+
+Increasing the downsampling rate from 3 to 5 degrades the translation quality for all architectures. Employing the two-step decoder matches the decoding speed of subword models. However, the overall translation quality is much worse.
+
+The three metrics that we use give consistent results in most cases. Often, relatively small differences in BLEU and chrF scores correspond to much bigger differences in the COMET score.
+
+# 5.3 Inference
+
+Inference algorithms for neural MT have been discussed extensively (Meister et al., 2020; Massarelli et al., 2020; Shi et al., 2020; Shaham and Levy, 2021b) for the subword models. Subword translation quality quickly degrades beyond a certain beam width unless heuristically defined length normalization is applied.
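+
+One common form of such length normalization, shown here only to make the term concrete (the exponent $\alpha$ is a tunable hyperparameter and the exact formula differs between toolkits), divides the hypothesis log-probability by a power of its length:
+
+$$\mathrm{score}(y \mid x) = \frac{\log p(y \mid x)}{|y|^{\alpha}}$$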
+
+Eikema and Aziz (2020) recently proposed Minimum Bayes Risk (MBR; Goel and Byrne, 2000) decoding as an alternative. Assuming that similar sentences should be similarly probable, they propose repeatedly sampling from the model and selecting the sentence that is most similar to the other samples. With subword models, MBR performs comparably to beam search.
+
+Intuitive arguments about inference algorithms are often based on properties of the subword output distribution. On average, character models produce distributions with lower perplexity and thus likely suffer more from exposure bias, which might harm sampling from the model. Therefore, there is a risk that these empirical findings do not apply to character-level models.
+
+We explore which decoding strategies are best suited for character-level models. We compare the translation quality of beam search decoding with different degrees of length normalization. Further, we compare length-normalized beam search with MBR (with 100 samples), greedy decoding, and random sampling. We use chrF as the utility metric, which allows pre-computing the character $n$-grams and thus faster sentence-pair comparison than the originally proposed METEOR (Denkowski and Lavie, 2011).
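+
+A minimal sketch of this MBR selection, using SacreBLEU's sentence-level chrF as the utility; the quadratic loop below recomputes the $n$-gram statistics for every pair, whereas the pre-computation mentioned above avoids this, so treat the sketch as an illustration only.
+
```python
import sacrebleu


def mbr_select(samples):
    """Return the sample with the highest total chrF against all other
    samples, i.e., an approximate minimum-Bayes-risk hypothesis."""
    best_hyp, best_utility = None, float("-inf")
    for i, hyp in enumerate(samples):
        utility = sum(sacrebleu.sentence_chrf(hyp, [other]).score
                      for j, other in enumerate(samples) if j != i)
        if utility > best_utility:
            best_hyp, best_utility = hyp, utility
    return best_hyp


# e.g.: translation = mbr_select(hundred_samples_for_one_source_sentence)
```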
+
+Figure 4 shows the translation quality of the selected models for different beam sizes. The dotted lines, which denote translation quality without length normalization, show that the quality of the subword models quickly deteriorates as the beam grows, whereas the vanilla and Lee-style character-level models do not seem to suffer from this problem.
+
+Table 2 presents the translation quality for different decoding methods. In all cases, beam search
+
+
+Figure 4: chrF scores for IWSLT en-de translation for different models and beam sizes. The dotted lines are without length normalization, the solid lines are with length normalization. All character processing architectures use a downsampling window of size 3. The legend tabulates the Pearson correlation of the beam size (starting from 5) and the chrF score.
+
+| Model | Enc./Dec. downsample | Sample | Greedy | Beam | MBR |
| BPE 16k | | 0.482 | 0.545 | 0.555 | 0.554 |
| -0.132 | 0.199 | 0.262 | 0.187 |
| Vanilla char. | | 0.448 | 0.537 | 0.537 | 0.538 |
| -0.446 | 0.117 | 0.165 | 0.086 |
| Lee-style | 3 | — | 0.461 | 0.539 | 0.552 | 0.544 |
| | -0.340 | 0.142 | 0.200 | 0.106 |
| 3 | 3 | 0.430 | 0.523 | 0.540 | 0.526 |
| | -0.657 | -0.015 | 0.065 | -0.105 |
| Charformer | 3 | — | 0.305 | 0.530 | 0.547 | 0.448 |
| | -1.490 | 0.061 | 0.149 | -0.831 |
| 3 | 3 | 0.227 | 0.462 | 0.540 | 0.412 |
| | -1.720 | -0.424 | 0.036 | -1.090 |
| Canine | 3 | — | 0.307 | 0.531 | 0.547 | 0.456 |
| | -1.500 | 0.051 | 0.121 | -0.838 |
| 3 | 3 | 0.253 | 0.516 | 0.534 | 0.413 |
| | -1.680 | -0.097 | -0.034 | -1.130 |
+
+Table 2: chrF (yellow-green scale) and COMET (yellow-red scale) scores for decoding methods for models trained on en-de systems.
+
+is the best strategy. Sampling from character-level models leads to very poor translation quality, which in turn also harms MBR decoding, making it much worse than beam search.
+
+Our experiments show that beam search with length normalization is the best inference algorithm for character-level models. Character-level models also seem to be more resilient to the beam search curse than subword models.
+
+# 6 Experiments on WMT Data
+
+Based on the results of the experiments with the IWSLT data, we further experiment only with the Lee-style encoder, using a downsampling factor of 3 on the source side. Additionally, we experiment with hybrid systems that combine a subword encoder with a character decoder. We train translation systems of competitive quality on two high-resource language pairs, English-Czech and English-German, and perform an extensive evaluation.
+
+# 6.1 Experimental Setup
+
+For English-to-Czech translation, we use the CzEng 2.0 corpus (Kocmi et al., 2020b) that aggregates and curates all sources for this language pair. We use all 66M authentic parallel sentence pairs and 50M back-translated Czech sentences.
+
+For the English-to-German translation, we use a subset of the training data used by Chen et al. (2021). The data consists of 66M authentic sentence pairs filtered from the available data for WMT and 52M back-translated German sentences from News Crawl 2020.
+
+We tag the back-translated data (Caswell et al., 2019). We use the Transformer Big architecture for all experiments, with hyperparameters following Popel and Bojar (2018). For the Lee-style encoder, we double the hidden layer sizes compared to the IWSLT experiments (following the hidden-size increase between the Transformer Base and Big architectures). In contrast to the previous set of experiments, we use Fairseq (Ott et al., 2019). Our code is available on GitHub. System outputs are attached to the paper in the ACL Anthology.
+
+We evaluate the systems not only on the WMT20 test sets but also on data of the kind that has often motivated research on character-level methods. We evaluate the out-of-domain performance of the models on the NHS test set from the WMT17 Biomedical Task (Jimeno Yepes et al., 2017) and on the WMT16 IT Domain test set (Bojar et al., 2016). We use the same evaluation metrics as for the IWSLT experiments. We estimate the confidence intervals using bootstrap resampling (Koehn, 2004).
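+
+As an illustration of the bootstrap procedure (a sketch, not the exact implementation we use): resample the test set with replacement, recompute the corpus-level score on each resample, and take percentiles of the resulting distribution.
+
```python
import random
import sacrebleu


def bootstrap_chrf_interval(hypotheses, references, n_resamples=1000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the corpus chrF score."""
    pairs = list(zip(hypotheses, references))
    scores = []
    for _ in range(n_resamples):
        resample = [random.choice(pairs) for _ in pairs]
        hyps, refs = zip(*resample)
        scores.append(sacrebleu.corpus_chrf(list(hyps), [list(refs)]).score)
    scores.sort()
    lower = scores[int((alpha / 2) * n_resamples)]
    upper = scores[int((1 - alpha / 2) * n_resamples) - 1]
    return lower, upper
```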
+
+We also assess the gender bias of the systems (Stanovsky et al., 2019; Kocmi et al., 2020a), using a dataset of sentence pairs with stereotypical and non-stereotypical English sentences. We measure the accuracy of gendered nouns and pronouns using word alignment and morphological analysis.
+
+Morphological generalization is often mentioned among the motivations for character-level modeling. Therefore, we evaluate our models using MorphEval (Burlot and Yvon, 2017; Burlot et al., 2018).
+
+ | News | IT | Medical | Gender Acc. | Avg. MorphEval | Recall of novel | Noisy-set chrF |
| BLEU | chrF | COMET | BLEU | chrF | COMET | BLEU | chrF | COMET | Forms | Lemmas |
| en-cs | BPE 16k | 30.8±0.8 | .585±0.006 | .672±0.022 | 34.5±1.3 | .623±0.008 | .889±0.022 | 26.4±1.4 | .519±0.010 | .734±0.037 | 71.3 | 86.6 | 33.7vs.63.7 | 48.5vs.71.1 | .436±0.002 |
| BPE to char. | 28.4±0.8 | .570±0.006 | .597±0.024 | 31.4±1.2 | .603±0.008 | .821±0.025 | 23.6±1.3 | .499±0.010 | .674±0.039 | 68.9 | 87.0 | 34.3vs. | 47.4vs. | .436±0.001 |
| Vanilla char. | 27.7±0.7 | .563±0.006 | .550±0.026 | 30.0±1.2 | .589±0.008 | .778±0.028 | 23.3±1.3 | .492±0.010 | .663±0.039 | 70.2 | 86.4 | 34.4vs. | 47.4vs. | .493±0.001 |
| Lee-style enc. | 28.8±0.8 | .568±0.006 | .609±0.024 | 31.7±1.3 | .606±0.008 | .849±0.024 | 24.3±1.3 | .506±0.010 | .696±0.038 | 65.6 | 86.6 | 34.1vs. | 48.5vs. | .497±0.001 |
| en-de | BPE 16k | 31.5±0.9 | .603±0.006 | .418±0.021 | 45.6±1.3 | .701±0.009 | .622±0.021 | 38.7±1.6 | .640±0.010 | .569±0.034 | 66.5 | 90.6 | 40.2vs. | 51.0vs. | .464±0.002 |
| BPE to char. | 29.1±0.8 | .589±0.006 | .360±0.022 | 46.5±1.3 | .703±0.008 | .617±0.021 | 36.0±1.4 | .621±0.009 | .513±0.035 | 71.2 | 91.3 | 45.1vs. | 50.8vs. | .465vs. |
| Vanilla char. | 27.8±0.8 | .578±0.006 | .321±0.023 | 45.3±1.3 | .698±0.008 | .600±0.022 | 35.6±1.4 | .618±0.009 | .496±0.036 | 71.2 | 91.4 | 50.7vs. | 45.1vs. | .504±0.001 |
| Lee-style enc. | 29.1±0.8 | .588±0.006 | .363±0.022 | 46.5±1.3 | .710±0.008 | .619±0.022 | 36.5±1.4 | .623±0.009 | .500±0.037 | 74.0 | 91.5 | 44.5vs. | 50.8vs. | .515±0.001 |
+
+Table 3: Results of the WMT-scale experiments.
+
+Similar to the gender evaluation, MorphEval uses contrastive sentence pairs that differ in exactly one morphological feature, and accuracy over these pairs is measured. In addition, we assess how well the models handle lemmas and forms that were unseen at training time. We tokenize and lemmatize all data with UDPipe (Straka and Straková, 2017). On the WMT20 test set, we compute the recall of test lemmas that were not in the training set, and the recall of word forms that were not in the training data but whose lemma was (i.e., other forms of the same lemma were seen). Note that not generating a particular lemma or form is not necessarily an error. Therefore, we report this recall in contrast with the recall of lemmas and forms that were represented in the training data.
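+
+Under our reading of this setup, the recall of novel word forms can be computed roughly as follows (a sketch; the exact matching against the system output may differ):
+
```python
def novel_form_recall(train_forms, train_lemmas, reference_tokens, system_forms):
    """Recall of reference word forms unseen in training whose lemma was seen,
    i.e., cases where the model has to generalize morphologically.

    train_forms, train_lemmas: sets built from the lemmatized training data.
    reference_tokens: (form, lemma) pairs from the lemmatized reference.
    system_forms: set of word forms occurring in the system output.
    """
    novel_forms = {form for form, lemma in reference_tokens
                   if form not in train_forms and lemma in train_lemmas}
    if not novel_forms:
        return 0.0
    return len(novel_forms & system_forms) / len(novel_forms)
```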
+
+Character-level models are also supposed to be more robust to source-side noise. We evaluate the noise robustness of the systems using TextFlint (Wang et al., 2021) to generate synthetic noise in the source text by simulating typos and spelling errors. We generate 20 noisy versions of the WMT20 test set and report the average chrF score.
+
+# 6.2 Results
+
+The main results are presented in Table 3. The main trends in translation quality are the same as on the IWSLT data: subword models outperform character models. The Lee-style encoder narrows the quality gap and performs similarly to models with subword tokens on the source side. Although domain robustness often motivates character-level experiments, our results show that these trends are domain-independent, except for English-German IT-domain translation.
+
+The similar performance of the subword encoder and the Lee-style encoder suggests that the hidden states of the Lee-style encoder can efficiently emulate the subword segmentation. We speculate that the main weaknesses remain on the decoder side.
+
+In the English-to-Czech direction, the character-level models perform worse in gender bias evaluation, although they better capture grammatical gender agreement according to the MorphEval benchmark. On the other hand, character-level models make more frequent errors in the tense of coordinated verbs. There are no major differences in recall of novel forms and lemmas.
+
+For English-to-German translation, character-level methods reach better results on the gender benchmark. We speculate that getting gender right in German might be easier because, unlike in Czech, verbs do not agree with the subject in gender. The average performance on the MorphEval benchmark is also slightly better for character models. Detailed results on MorphEval are in Tables 7 and 8 in the Appendix. The higher recall of novel forms also suggests slightly better morphological generalization.
+
+The only consistent advantage of the character-level models is their robustness to source-side noise. Here, the character-level models outperform both the fully subword model and the hybrid system with a subword encoder.
+
+# 7 Conclusions
+
+In our extensive literature survey, we found evidence that character-level methods should reach translation quality comparable to subword methods, typically at the expense of much higher computation costs. We speculate that this computational cost is the reason why virtually none of the recent WMT systems used character-level methods or mentioned them as a reasonable alternative.
+
+Recently, most innovations in character-level modeling were introduced in the context of pre-trained representations. In our comparison of character-processing architectures (two of them used for the first time in the context of MT), we showed that 1D convolutions followed by highway layers still deliver the best results for MT.
+
+Character-level systems are still mostly worse than subword systems. Moreover, the recent character-level architectures do not show advantages over vanilla character models, other than improved speed.
+
+To overcome efficiency issues, we proposed a two-step decoding architecture that matches the speed of subword models, however at the expense of a further drop in translation quality.
+
+Furthermore, we found that conclusions of recent literature on decoding in MT do not generalize to character models. Character models do not suffer from the beam search curse, and decoding methods based on sampling perform poorly here.
+
+Evaluation on competitively large datasets showed that there is still a small quality gap between character and subword models. Character models do not show better domain robustness, and only slightly better morphological generalization in German, although this is often mentioned as important motivation for character-level modeling. The only clear advantage of character models is high robustness towards source-side noise.
+
+Earlier work on character-level MT treated decoding as straightforward and focused on the encoder part of the model. Our conclusions point in the opposite direction: Lee-style encoding is already comparable to subword encoders, yet most modeling innovations still focus on encoding. Character-level decoding that is both accurate and efficient remains an open research question.
+
+# Acknowledgement
+
+Many thanks to Martin Popel for comments on the pre-print of this paper and to Lukas Edman for discovering a bug in the source code and for a fruitful discussion on the topic of the paper.
+
+The work at LMU Munich was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (No. 640550) and by the German Research Foundation (DFG; grant FR 2829/4-1). The work at CUNI was supported by the European Commission via its H2020 Program (contract No. 870930).
+
+# References
+
+Duygu Ataman, Orhan Firat, Mattia A. Di Gangi, Marcello Federico, and Alexandra Birch. 2019. On the importance of word boundaries in character-level neural machine translation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 187-193, Hong Kong. Association for Computational Linguistics.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+Nikolay Banar, Walter Daelemans, and Mike Kestemont. 2020. Character-level transformer-based neural machine translation. In NLPIR 2020: 4th International Conference on Natural Language Processing and Information Retrieval, Seoul, Republic of Korea, December 18-20, 2020, pages 149-156. ACM.
+Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training deeper neural machine translation models with transparent attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3028-3033, Brussels, Belgium. Association for Computational Linguistics.
+Loic Barrault, Magdalena Biesialska, Ondrej Bojar, Marta R. Costa-jussa, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joannis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubesic, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online. Association for Computational Linguistics.
+Loic Barrault, Ondrej Bojar, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Muller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics.
+Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131-198, Berlin, Germany. Association for Computational Linguistics.
+Ondrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272-303, Belgium, Brussels. Association for Computational Linguistics.
+Franck Burlot, Yves Scherrer, Vinit Ravishankar, Ondrej Bojar, Stig-Arne Gronroos, Maarit Koponen, Tommi Nieminen, and François Yvon. 2018. The WMT'18 morpheval test suites for English-Czech, English-German, English-Finnish and Turkish-English. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 546-560, Belgium, Brussels. Association for Computational Linguistics.
+Franck Burlot and François Yvon. 2017. Evaluating the morphological competence of machine translation systems. In Proceedings of the Second Conference on Machine Translation, pages 43-55, Copenhagen, Denmark. Association for Computational Linguistics.
+Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 53–63, Florence, Italy. Association for Computational Linguistics.
+Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Jan Niehues, Sebastian Stüker, Katsuhito Sudoh, Koichiro Yoshino, and Christian Federmann. 2017. Overview of the IWSLT 2017 evaluation campaign. In International Workshop on Spoken Language Translation, pages 2-14.
+Pinzhen Chen, Jindrich Helcl, Ulrich Germann, Laurie Burchell, Nikolay Bogoychev, Antonio Valerio Miceli Barone, Jonas Waldendorf, Alexandra Birch, and Kenneth Heafield. 2021. The University of Edinburgh's English-German and English-Hausa submissions to the WMT21 news translation task. In Proceedings of the Sixth Conference on Machine Translation, pages 104-109, Online. Association for Computational Linguistics.
+Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting character-based neural machine translation with capacity and compression. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4295-4305, Brussels, Belgium. Association for Computational Linguistics.
+Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1693-1703, Berlin, Germany. Association for Computational Linguistics.
+Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. 2021. CANINE: pre-training an efficient tokenization-free encoder for language representation. CoRR, abs/2103.06874.
+Marta R. Costa-jussa, Carlos Escolano, and José A. R. Fonollosa. 2017. Byte-based neural machine translation. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 154-158, Copenhagen, Denmark. Association for Computational Linguistics.
+Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85-91, Edinburgh, Scotland. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Bryan Eikema and Wilker Aziz. 2020. Is MAP decoding all you need? the inadequacy of the mode in neural machine translation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4506-4520, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Carlos Escolano, Marta R. Costa-jussà, and José A. R. Fonollosa. 2017. The TALP-UPC neural machine translation system for German/Finnish-English using the inverse direction model in rescoring. In Proceedings of the Second Conference on Machine Translation, pages 283–287, Copenhagen, Denmark. Association for Computational Linguistics.
+Yingqiang Gao, Nikola I. Nikolov, Yuhuang Hu, and Richard H.R. Hahnloser. 2020. Character-level translation with self-attention. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1591-1604, Online. Association for Computational Linguistics.
+Vaibhava Goel and William J. Byrne. 2000. Minimum bayes-risk automatic speech recognition. Comput. Speech Lang., 14(2):115-135.
+Stig-Arne Gronroos, Sami Virpioja, and Mikko Kurimo. 2017. Extending hybrid word-character neural machine translation with multi-task learning of morphological analysis. In Proceedings of the Second Conference on Machine Translation, pages 296-302, Copenhagen, Denmark. Association for Computational Linguistics.
+Rohit Gupta, Laurent Besacier, Marc Dymetman, and Matthias Galle. 2019. Character-based NMT with transformer. CoRR, abs/1911.04997.
+Chester Holtz, Chuyang Ke, and Daniel Gildea. 2017. University of Rochester WMT 2017 NMT system submission. In Proceedings of the Second Conference on Machine Translation, pages 310-314, Copenhagen, Denmark. Association for Computational Linguistics.
+Antonio Jimeno Yepes, Aurélie Néveol, Mariana Neves, Karin Verspoor, Ondrej Bojar, Arthur Boyer, Cristian Grozea, Barry Haddow, Madeleine Kittner, Yvonne Lichtblau, Pavel Pecina, Roland Roller, Rudolf Rosa, Amy Siu, Philippe Thomas, and Saskia Trescher. 2017. Findings of the WMT 2017 biomedical translation shared task. In Proceedings of the Second Conference on Machine Translation, pages 234-247, Copenhagen, Denmark. Association for Computational Linguistics.
+Rebecca Knowles, Darlene Stewart, Samuel Larkin, and Patrick Littell. 2020. NRC systems for the 2020 Inuktitut-English news translation task. In Proceedings of the Fifth Conference on Machine Translation, pages 156-170, Online. Association for Computational Linguistics.
+Tom Kocmi, Tomasz Limisiewicz, and Gabriel Stanovsky. 2020a. Gender coreference and bias evaluation at WMT 2020. In Proceedings of the Fifth Conference on Machine Translation, pages 357-364, Online. Association for Computational Linguistics.
+Tom Kocmi, Martin Popel, and Ondrej Bojar. 2020b. Announcing CzEng 2.0 parallel corpus with over 2 gigawords. CoRR, abs/2007.03006.
+Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics.
+Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. Association for Computational Linguistics.
+Julia Kreutzer and Artem Sokolov. 2018. Learning to segment inputs for NMT favors character-level processing. In Proceedings of the 15th International Conference on Spoken Language Translation, pages 166-172, Brussels. International Conference on Spoken Language Translation.
+Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and tokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.
+Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365-378.
+Jiahuan Li, Yutong Shen, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2021. When is char better than subword: A systematic study of segmentation algorithms for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 543-549, Online. Association for Computational Linguistics.
+Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, and Hassan Sajjad. 2019. Findings of the first shared task on machine translation robustness. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 91-102, Florence, Italy. Association for Computational Linguistics.
+Jindrich Libovicky and Alexander Fraser. 2020. Towards reasonably-sized character-level transformer NMT by finetuning subword systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2572-2579, Online. Association for Computational Linguistics.
+Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1054-1063, Berlin, Germany. Association for Computational Linguistics.
+Sainik Kumar Mahata, Dipankar Das, and Sivaji Bandyopadhyay. 2018. JUCBNMT at WMT2018 news translation task: Character based neural machine translation of Finnish to English. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 445-448, Belgium, Brussels. Association for Computational Linguistics.
+Luca Massarelli, Fabio Petroni, Aleksandra Piktus, Myle Ott, Tim Rocktäschel, Vassilis Plachouras, Fabrizio Silvestri, and Sebastian Riedel. 2020. How decoding strategies affect the verifiability of generated text. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 223-235, Online. Association for Computational Linguistics.
+Clara Meister, Ryan Cotterell, and Tim Vieira. 2020. If beam search is the answer, what was the question? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2173-2185, Online. Association for Computational Linguistics.
+Nikola I. Nikolov, Yuhuang Hu, Mi Xue Tan, and Richard H.R. Hahnloser. 2018. Character-level Chinese-English translation through ASCII encoding. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 10-16, Brussels, Belgium. Association for Computational Linguistics.
+Robert Östling, Yves Scherrer, Jörg Tiedemann, Gongbo Tang, and Tommi Nieminen. 2017. The Helsinki neural machine translation system. In Proceedings of the Second Conference on Machine Translation, pages 338-347, Copenhagen, Denmark. Association for Computational Linguistics.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
+Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+Martin Popel and Ondrej Bojar. 2018. Training Tips for the Transformer Model. The Prague Bulletin of Mathematical Linguistics, 110:43-70.
+Maja Popovic. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
+Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. BPE-dropout: Simple and effective subword regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1882-1892, Online. Association for Computational Linguistics.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
+Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685-2702, Online. Association for Computational Linguistics.
+Yves Scherrer, Raul Vázquez, and Sami Virpioja. 2019. The University of Helsinki submissions to the WMT19 similar language translation task. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 236-244, Florence, Italy. Association for Computational Linguistics.
+Rico Sennrich. 2017. How grammatical is character-level neural machine translation? assessing MT quality with contrastive translation pairs. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 376-382, Valencia, Spain. Association for Computational Linguistics.
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
+Rico Sennrich and Biao Zhang. 2019. Revisiting low-resource neural machine translation: A case study. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 211-221, Florence, Italy. Association for Computational Linguistics.
+Uri Shaham and Omer Levy. 2021a. Neural machine translation without embeddings. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 181-186, Online. Association for Computational Linguistics.
+Uri Shaham and Omer Levy. 2021b. What do you get when you cross beam search with nucleus sampling? CoRR, abs/2107.09729.
+Xing Shi, Yijun Xiao, and Kevin Knight. 2020. Why neural machine translation prefers empty outputs. CoRR, abs/2012.13454.
+
+Lucia Specia, Zhenhao Li, Juan Pino, Vishrav Chaudhary, Francisco Guzmán, Graham Neubig, Nadir Durrani, Yonatan Belinkov, Philipp Koehn, Hassan Sajjad, Paul Michel, and Xian Li. 2020. Findings of the WMT 2020 shared task on machine translation robustness. In Proceedings of the Fifth Conference on Machine Translation, pages 76-91, Online. Association for Computational Linguistics.
+Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. CoRR, abs/1505.00387.
+Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy. Association for Computational Linguistics.
+Milan Straka and Jana Straková. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics.
+Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.
+Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Prakash Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler. 2021. Charformer: Fast character transformers via gradient-based subword tokenization. CoRR, abs/2106.12672.
+Dušan Variš and Ondřej Bojar. 2017. CUNI system for WMT17 automatic post-editing task. In Proceedings of the Second Conference on Machine Translation, pages 661-666, Copenhagen, Denmark. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Xiao Wang, Qin Liu, Tao Gui, Qi Zhang, Yicheng Zou, Xin Zhou, Jiacheng Ye, Yongxin Zhang, Rui Zheng, Zexiong Pang, Qinzhuo Wu, Zhengyan Li, Chong Zhang, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xingwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Shan Qin, Bolin Zhu, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, and Xuanjing Huang. 2021. TextFlint: Unified multilingual robustness evaluation toolkit for natural language processing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 347-355, Online. Association for Computational Linguistics.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2021a. ByT5: Towards a token-free future with pre-trained byte-to-byte models. CoRR, abs/2105.13626.
+Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021b. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.
+Longtu Zhang and Mamoru Komachi. 2018. Neural machine translation of logographic language using sub-character level information. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 17-25, Brussels, Belgium. Association for Computational Linguistics.
+
+# A Two-step decoder
+
+Here, we describe details of the architecture of the two-step decoder shown in Figure 3. The input to the decoder is the sequence of hidden states of the character-processing architecture, i.e., for a downsampling factor $s$, a sequence that is $s$ times shorter than the character sequence. The output of the Transformer stack is a sequence of the same, downsampled length.
+
+For each Transformer decoder state $h_i$, the decoder needs to produce $s$ characters. This is done by a lightweight autoregressive LSTM decoder. In each step, it has two inputs: the embedding of the previously decoded character and a projection of the decoder state $h_i$. There are $s$ different linear projections, one for each of the $s$ output characters generated from a single Transformer state.
+
+At inference time, the LSTM decoder gets one Transformer state and generates $s$ output characters. The characters are fed to the character processing architecture, which is in turn used to generate the next Transformer decoder state.
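+
+A minimal PyTorch sketch of this expansion step (dimensions follow Appendix B.2; the greedy character selection and the handling of the LSTM state across blocks are our simplifications):
+
```python
import torch
import torch.nn as nn


class TwoStepCharDecoder(nn.Module):
    """Sketch: a lightweight LSTM that expands every Transformer decoder
    state into a block of s output characters."""

    def __init__(self, vocab_size=300, char_dim=64, model_dim=512,
                 lstm_dim=128, block_size=3):
        super().__init__()
        self.block_size = block_size
        self.embed = nn.Embedding(vocab_size, char_dim)
        # One projection of the Transformer state per position in the block.
        self.state_proj = nn.ModuleList(
            [nn.Linear(model_dim, char_dim) for _ in range(block_size)])
        self.lstm = nn.LSTMCell(2 * char_dim, lstm_dim)
        self.output = nn.Linear(lstm_dim, vocab_size)

    def forward(self, transformer_state, prev_char, lstm_state=None):
        """Greedily decode one block of characters for one Transformer state.
        transformer_state: (batch, model_dim); prev_char: (batch,) id of the
        previously generated character; lstm_state: carried over between blocks."""
        generated = []
        for i in range(self.block_size):
            lstm_in = torch.cat([self.embed(prev_char),
                                 self.state_proj[i](transformer_state)], dim=-1)
            lstm_state = self.lstm(lstm_in, lstm_state)
            prev_char = self.output(lstm_state[0]).argmax(dim=-1)
            generated.append(prev_char)
        return torch.stack(generated, dim=1), lstm_state   # (batch, block_size)
```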
+
+# B IWSLT Experiments
+
+# B.1 Dataset details
+
+We used the tst2010 part of the dataset for validation and tst2015 for testing and did not use any other test sets. The data sizes are presented in Table 4.
+
+# B.2 Model Hyperparameters
+
+All models are trained with an initial learning rate of $5 \cdot 10^{-4}$ and 4k warmup steps. The batch size is 20k tokens for both the BPE and character experiments, with a parameter update after every 3 batches. Label smoothing is set to 0.1.
+
+Lee-style. The character embedding dimension is 64. The original paper used kernel sizes from 1 to 8. For ease of implementation, we only use odd-sized kernels up to size 9. The encoder uses 1D convolutions with kernel sizes 1, 3, 5, 7, and 9 and 128, 256, 512, 512, and 256 filters, respectively. Their outputs are concatenated and projected to the model dimension, followed by 2 highway layers and 2 Transformer feed-forward layers.
+
+CANINE. The local self-attention span in the encoder is $4 \times$ the downsampling factor; in the decoder, it is equal to the downsampling factor.
+
+Two-step decoder. The decoder uses character embeddings with dimension of 64, which is also the size of the projection of the Transformer decoder state. The hidden state size of the LSTM is 128.
+
+# B.3 Validation Performance
+
+The validation BLEU and chrF scores and the training and inference times are in Table 5. The training times were measured on machines with GeForce GTX 1080 Ti GPUs and Intel Xeon E5-2630v4 CPUs (2.20 GHz); a single GPU was used for each run.
+
+Note that the experiments on IWSLT were not optimized for speed and are thus not comparable with the times reported on the larger datasets.
+
+# C WMT Experiments
+
+# C.1 Training Details
+
+We use the Transformer Big architecture as defined by Fairseq's standard transformer_wmt_en_de_big_t2t configuration.
+
+The Lee-style encoder uses filter sizes 1, 3, 5, 7, and 9 with 256, 512, 1024, 1024, and 512 filters, respectively. The other parameters remain the same as in the IWSLT experiments.
+
+We set the beta parameters of the Adam optimizer to 0.9 and 0.998 and gradient clipping to 5. The learning rate is $5 \cdot 10^{-4}$ with 16k warmup steps. Early stopping is based on validation negative log-likelihood with a patience of 10. We save the 5 best checkpoints and average them before evaluation. The maximum batch size is 1800 tokens for the BPE experiments and 500 for the character-level experiments. We train the models on 4 GPUs, so the effective batch size is 4 times larger.
+
+# C.2 Validation Performance
+
+During training, we evaluated the models by measuring the cross-entropy on the validation set. After model training, we use grid search to estimate the best value of length normalization on the validation set. The translation quality on the validation data is tabulated in Table 6.
+
+# C.3 Detailed Results
+
+The detailed results on the MorphEval benchmark are in Tables 7 (Czech) and 8 (German). The details of the noise evaluation are in Table 9.
+
+ | Train | Validation | Test |
| Sent. | Char. src | Char. tgt | Sent. | Char. src | Char. tgt | Sent. | Char. src | Char. tgt |
| en-ar | 232k | 22.5M | 32.8M | 1.3k | 119k | 179k | 1.2k | 116k | 164k |
| en-de | 206k | 19.9M | 21.7M | 1.3k | 117k | 132k | 1.1k | 109k | 100k |
| en-fr | 232k | 22.6M | 25.5M | 1.3k | 119k | 140k | 1.2k | 116k | 129k |
+
+Table 4: IWSLT data statistics in terms of number of parallel sentences and number of characters.
+
+| Model | Enc. Dec. | From English |
| | | ar | de | fr |
| | | Train | Valid | BLEU | chrF |
| Charfaerner | 14.8±2.2 | 308.8±6.8 | 10.7±0.3 | 407±0.3 | 19.1±2.3 | 481.0±5.2 | 24.1±0.2 | 513±0.0 | 20.0±3.3 | 494.8±13.8 | 33.9±0.6 | 582±0.3 | 196.8±3.3 | 368.8±3.8 | 268.9±3.3 | 493±0.3 | 197.3±2.3 | 368.8±3.8 | 263±0.3 | 493±0.3 | 185±2.3 | 318.2±3,4 | 288±0.3 | 526±0.3 | 13.3±6,5 | 347.5±10,1 | 367,5±6,4 | 583±0,4 | 588±0,4 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5-5-5 | 13.9±7.5 | 249.2±5.0 | 9.4±0.2 | 386±0.2 | 13.5±7.3 | 366.8±2.8 | 216.8±0.4 | 489±0.5 | 20.1±4.2 | 395.5±5.4 | 312.2±0.7 | 558±0.7 | 17.7±4.8 | 363.2±8.9 | 226.2±0.1 | 458±0.1 | 12.9±7.5 | 308.8±10,8 | 226.2±0,2 | 508±0,2 | 12.9±7,5 | 308.8±10,8 | 226.2±0,2 | 508±0,2 | 12.9±7,5 | 308.8±10,8 | 226,2±5,7 | 364,5±3,7 | 564,5±5,4 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3-5-5 | 17.3±2.5 | 91.5±1.1 | 9.4±0.3 | 390±0.3 | 18.6±2.8 | 138.5±11.9 | 216.5±9.4 | 493±0.1 | 18.4±1.8 | 132.2±15.9 | 316±0.6 | 567±4.9 | 14.1±1.8 | 115.2±15.6 | 239.4±0.6 | 474±0.4 | 12.9±2,4 | 105.5±4,0 | 262,5±0,8 | 505±0,2 | 12.9±5,9 | 104,5±4,0 | 262,5±0,8 | 505±0,2 | 12.9±5,9 | 104,5±4,1 | 35,0±5,1 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5-5-5 | 17.1±8,3 | 72.0±6,7 | 6.1±0,2 | 332±0,5 | 15.2±4,4 | 85.5±9,6 | 17.3±0,3 | 450±0,4 | 16.2±1.8 | 89.0±5,4 | 27.1±0,3 | 529±0,3 | 20.9±1,1 | 81,8±1,9 | 15,7±0,4 | 391±0,4 | 15,7±3,9 | 75,0±2,4 | 22,5±0,2 | 473±0,3 | 15,7±3,9 | 75,0±2,4 | 22,5±0,2 | 473±0,3 | 13,1±5,1 | 84,5±5,0 | 29,4±0,2 | 529±0,2 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
+
+Table 5: Training time (hours), inference time on the validation set (seconds), and translation quality in terms of BLEU and chrF scores on the validation data.
+
+ | | | BLEU | chrF | COMET | Len. norm. |
| en-cs | BPE 16k | 24.4 | .524 | .753 | 0.8 |
| | BPE to char | 22.9 | .513 | .687 | 1.2 |
| | Vanilla char. | 22.3 | .506 | .654 | 1.4 |
| | Lee-style enc. | 23.1 | .514 | .698 | 1.0 |
| | Lee-style enc. 121 | 23.7 | .520 | .724 | 1.4 |
| en-de | BPE 16k | 47.8 | .708 | .651 | 1.2 |
| | BPE to char | 43.7 | .683 | .594 | 1.2 |
| | Vanilla char. | 42.7 | .675 | .569 | 1.4 |
| | Lee-style enc. | 43.7 | .684 | .595 | 1.6 |
| | Lee-style enc. 121 | 44.9 | .691 | .617 | 1.0 |
+
+Table 6: Translation quality on the validation data and the value of length normalization that led to the best quality.
+
+ | BPE | BPE2char | char | lee |
| comparative | 78.2% | 78.2% | 79.6% | 80.4% |
| conditional | 59.8% | 65.8% | 71.2% | 68.4% |
| coordverb-number | 85.4% | 81.2% | 77.4% | 80.0% |
| coordverb-person | 85.2% | 82.0% | 78.0% | 80.0% |
| coordverb-tense | 81.8% | 78.4% | 74.0% | 75.2% |
| coref-gender | 71.7% | 74.8% | 76.5% | 75.9% |
| future | 86.2% | 85.8% | 84.0% | 85.8% |
| negation | 96.2% | 97.4% | 98.0% | 98.2% |
| noun number | 79.4% | 81.0% | 80.8% | 81.4% |
| past | 87.2% | 89.0% | 89.4% | 86.8% |
| preposition | 96.0% | 96.6% | 96.1% | 95.9% |
| pron2coord | 100.0% | 100.0% | 99.6% | 100.0% |
| pron2nouns-case | 95.8% | 95.6% | 94.4% | 94.6% |
| pron2nouns-gender | 95.2% | 95.2% | 93.6% | 93.8% |
| pron2nouns-number | 95.6% | 95.6% | 94.4% | 94.6% |
| pron fem | 94.0% | 94.6% | 93.8% | 93.2% |
| pron plur | 92.0% | 92.0% | 92.0% | 91.4% |
| pron relative-gender | 78.9% | 81.8% | 81.8% | 81.5% |
| pron relative-number | 80.1% | 83.1% | 82.8% | 82.6% |
| superlative | 93.0% | 91.4% | 91.0% | 92.0% |
| NOUN case | .102 | .108 | .105 | .100 |
| ADJ gender | .198 | .194 | .211 | .202 |
| ADJ number | .198 | .190 | .213 | .202 |
| ADJ case | .204 | .198 | .220 | .207 |
| VERB number | .117 | .103 | .101 | .104 |
| VERB person | .091 | .083 | .085 | .084 |
| VERB tense | .113 | .109 | .108 | .110 |
| VERB negation | .081 | .077 | .075 | .075 |
| Average | 88.6% | 87.0% | 86.4% | 86.6% |
+
+Table 7: Detailed MorphEval results for English-Czech translation.
+
+ | BPE | BPE2char | Char | Lee |
| adj strong | 97.9% | 98.7% | 99.6% | 99.2% |
| comparative | 96.9% | 96.8% | 95.6% | 96.3% |
| compounds syns | 65.9% | 66.0% | 65.4% | 66.7% |
| conditional | 90.5% | 95.4% | 97.0% | 97.0% |
| coordverb-number | 98.0% | 98.7% | 99.1% | 99.3% |
| coordverb-person | 98.3% | 99.1% | 99.5% | 99.8% |
| coordverb-tense | 98.0% | 98.7% | 99.3% | 99.3% |
| coref-gender | 94.5% | 93.2% | 95.1% | 91.9% |
| future | 87.3% | 90.8% | 87.6% | 88.9% |
| negation | 98.8% | 98.8% | 99.4% | 99.4% |
| noun number | 67.0% | 69.3% | 71.5% | 68.4% |
| past | 94.7% | 97.1% | 96.0% | 96.5% |
| pron2nouns-gender | 100.0% | 100.0% | 100.0% | 100.0% |
| pron2nouns-number | 100.0% | 100.0% | 100.0% | 100.0% |
| pron plur | 99.2% | 99.2% | 98.6% | 98.2% |
| pron relative-gender | 69.4% | 69.1% | 68.8% | 71.0% |
| pron relative-number | 69.4% | 69.1% | 68.8% | 71.0% |
| superlative | 99.8% | 99.8% | 99.8% | 99.6% |
| verb position | 96.0% | 95.2% | 95.2% | 95.8% |
| ADJ gender | .006 | .002 | .002 | .003 |
| ADJ number | .004 | .001 | .002 | .001 |
| NOUN case | .018 | .011 | .013 | .011 |
| VERB number | .022 | .017 | .015 | .020 |
| VERB person | .010 | .010 | .006 | .008 |
| VERB tense/mode | .046 | .041 | .049 | .050 |
| Average | 90.6% | 91.3% | 91.4% | 91.5% |
+
+Table 8: Detailed MorphEval results for English-German translation.
+
+ | | | BLEU | chrF | COMET |
| en-cs | BPE 16k | 15.1 ±0.2 | .436 ±.002 | -.863 ±.010 |
| | BPE to char | 14.4 ±0.2 | .436 ±.001 | -.836 ±.009 |
| | Vanilla char. | 19.5 ±0.2 | .493 ±.001 | -.307 ±.009 |
| | Lee-style enc. | 20.2 ±0.2 | .497 ±.001 | -.308 ±.009 |
| en-de | BPE 16k | 16.0 ±0.2 | .464 ±.002 | -1.127 ±.012 |
| | BPE to char | 15.5 ±0.2 | .465 ±.001 | -1.112 ±.008 |
| | Vanilla char. | 18.5 ±0.1 | .504 ±.001 | -.742 ±.013 |
| | Lee-style enc. | 19.6 ±0.1 | .515 ±.001 | -.743 ±.014 |
+
+Table 9: Detailed results on the datasets with generated noise. Average and standard deviation for 20 evaluations.
diff --git a/whyexposurebiasmattersanimitationlearningperspectiveoferroraccumulationinlanguagegeneration/full.md b/whyexposurebiasmattersanimitationlearningperspectiveoferroraccumulationinlanguagegeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d0429db66d4d2618219803ea8e4a2c04e992e2be
--- /dev/null
+++ b/whyexposurebiasmattersanimitationlearningperspectiveoferroraccumulationinlanguagegeneration/full.md
@@ -0,0 +1,354 @@
+# Why Exposure Bias Matters: An Imitation Learning Perspective of Error Accumulation in Language Generation
+
+Kushal Arora $^{1*}$ Layla El Asri $^{2}$ Hareesh Bahuleyan $^{3\dagger}$ Jackie Chi Kit Cheung $^{1‡}$
+
+$^{1}$ Mila / McGill University $^{2}$ Borealis AI $^{3}$ Zalando SE
+
+{kushal.arora@mail, jcheung@cs}.mcgill.ca
+
+layla.elasri@borealisai.com, hareeshbahuleyan@gmail.com
+
+# Abstract
+
+Current language generation models suffer from issues such as repetition, incoherence, and hallucinations. An often-repeated hypothesis for this brittleness of generation models is that it is caused by the training and the generation procedure mismatch, also referred to as exposure bias. In this paper, we verify this hypothesis by analyzing exposure bias from an imitation learning perspective. We show that exposure bias leads to an accumulation of errors during generation, analyze why perplexity fails to capture this accumulation of errors, and empirically show that this accumulation results in poor generation quality.
+
+# 1 Introduction
+
+Large-scale neural language models have made great strides in a series of language generation tasks such as machine translation (Bahdanau et al., 2014; Vaswani et al., 2017; Raffel et al.), text summarization (See et al., 2017; Lewis et al., 2019; Zhang et al., 2019a), conversational dialog generation (Serban et al., 2015; Lowe et al., 2017; Roller et al., 2020; Zhang et al., 2020), etc.
+
+However, despite the successes achieved by these models on several conditional generation tasks, they continue to suffer from degenerate behaviors such as repetition, a lack of diversity, dullness, and incoherence, especially in open-ended generation settings such as text completion and dialog modeling (Holtzman et al., 2019). This degenerate behavior is often attributed to a mismatch between the maximum likelihood training and the generation procedure (Welleck et al., 2019; Choi et al., 2020; Li et al., 2016).
+
+Maximum likelihood training, also referred to as teacher forcing (Williams and Zipser, 1989), factorizes the language model as a linear chain and maximizes the log-likelihood of this factorized language model on a training corpus. During this maximum likelihood training, the model learns a distribution over next tokens conditioned on contexts drawn from the ground-truth training data.
+
+A concern with MLE-based training is that the ground-truth contexts from the training corpus are not available during generation. Rather, the conditioning contexts during generation consist of tokens previously generated by the model itself. The distribution of contexts seen during the generation phase can therefore be very different from the one encountered during the training phase. This mismatch is referred to as exposure bias (Ranzato et al., 2016; Bengio et al., 2015).
+
+A side effect of exposure bias is that an error at any step during generation can cascade: the next context incorporates the erroneous prediction, deviating further from the ground-truth context distribution and leading to more errors. These errors produce sequences that degenerate over the sequence length, resulting in incoherent text, a lack of vocabulary diversity, detachment from the source sequence (hallucination), and word- and phrase-level repetition.
+
+There is an active debate in the language generation community on the impact of exposure bias in language generation. Authors have both validated (Xu et al., 2019; Zhang et al., 2019b) and questioned (He et al., 2019) the impact of exposure bias on language generation. Several approaches have been proposed to mitigate exposure bias (Ranzato et al., 2016; Shen et al., 2016; Bahdanau et al., 2017; Chen et al., 2020; Leblond et al., 2018; Welleck et al., 2019), but these have neither formalized exposure bias clearly nor provided any empirical evidence that these methods mitigate its effect. Finally, previous works have linked exposure bias to out-of-domain (Wang and Sennrich, 2020) and out-of-distribution (Schmidt, 2019) generalization, and to hallucinations (Wang and Sennrich, 2020), but these claims remain weak in the absence of a clear and principled formalization of the exposure bias issue.
+
+In this paper, we attempt to clarify this confusion by formalizing exposure bias in terms of the accumulation of errors and analyzing its impact on generation quality. We do this by providing a theoretically grounded understanding of the exposure bias issue from an imitation learning perspective. We use this perspective to show that behavior cloning, an imitation learning algorithm, is equivalent to teacher forcing under the choice of a particular loss function. We then exploit this equivalence to borrow the bound on error accumulation incurred by behavior cloning and use it to quantify exposure bias and analyze error accumulation in language generation.
+
+Finally, we use this quantifiable definition of exposure bias to demonstrate that models trained using teacher forcing do suffer from an accumulation of errors. We also show, both analytically and empirically, why perplexity fails to capture this error accumulation, and how a lower exposure bias correlates with better generation quality.
+
+# 2 Language Generation Formulation
+
+Given a finite-sized vocabulary set $\mathcal{V}$ , language generation is posed as the problem of generating a variable-length sequence $w_0^n\in \mathcal{V}^n$ from a language model $p_{\theta}$ , either unconditionally or conditioned on a source $\mathbf{x}$ , using a decoding algorithm $\mathcal{F}$ .
+
+$$
+w _ {0} ^ {n} = \mathcal {F} (p _ {\theta}; \mathbf {x}) \tag {1}
+$$
+
+Language modeling is the problem of learning this parameterized model $p_{\theta}$ , which approximates an oracle model $o$ , such that decoding from the model $p_{\theta}$ mimics greedy sampling from the oracle $o$ .
+
+Maximum likelihood-based training factorizes the probability distribution model, $p_{\theta}(w_0^n)$ , into a linear chain, i.e.,
+
+$$
+p _ {\theta} \left(w _ {0} ^ {n}; \mathbf {x}\right) = \prod_ {i = 1} ^ {n} p _ {\theta} \left(w _ {i} \mid w _ {0} ^ {i - 1}; \mathbf {x}\right) p \left(w _ {0}\right), \tag {2}
+$$
+
+where $w_{i}$ is the token to be generated at step $i$ and $w_{0}^{i - 1}$ is the context at time $i$ , i.e., all the tokens seen from step 0 to step $i - 1$.²
+
+During maximum likelihood training, the language model is trained by minimizing the negative log-likelihood on the corpus $\mathcal{D}$ , i.e.,
+
+$$
+\theta^ {*} = \underset {\theta} {\operatorname {a r g m i n}} \frac {- 1}{| \mathcal {D} |} \sum_ {w _ {0} ^ {n} \in \mathcal {D}} \sum_ {i = 0} ^ {n} \log p _ {\theta} \left(w _ {i} \mid w _ {0} ^ {i - 1}\right), \tag {3}
+$$
+
+where $|\mathcal{D}|$ is the number of tokens in the corpus.
+
+Given a trained language model $p_{\theta}$ , the simplest strategy for generating a target sequence is to greedily sample from the model, i.e., at each step $i$ , pick the most probable token $w_{i} = \arg \max p_{\theta}(\cdot |w_{0}^{i - 1};\mathbf{x})$ as the prediction. For the next step $i + 1$ , we append $w_{i}$ to form the context $w_{0}^{i} = w_{0}^{i - 1}w_{i}$ and use it to predict the next token. This continues either until the maximum sequence length $(T)$ is reached or a special end-of-sequence token (EOS) is generated.
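+
+To make the loop explicit, here is a minimal sketch of greedy decoding (our illustration, not code from the paper); `next_token_logprobs` is a hypothetical callable returning $\log p_{\theta}(\cdot \mid w_{0}^{i-1})$ over the vocabulary for a given prefix of token ids:
+
+```python
+import numpy as np
+
+def greedy_decode(next_token_logprobs, prefix, eos_id, max_len=512):
+    """Greedily extend `prefix` one token at a time until EOS or `max_len`."""
+    tokens = list(prefix)
+    while len(tokens) < max_len:
+        logp = next_token_logprobs(tokens)   # log p_theta(. | w_0^{i-1})
+        next_id = int(np.argmax(logp))       # w_i = argmax_w p_theta(w | context)
+        tokens.append(next_id)               # new context w_0^i = w_0^{i-1} w_i
+        if next_id == eos_id:
+            break
+    return tokens
+```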
+
+# 3 An Imitation Learning Perspective of Language Generation
+
+In this section, we will present an imitation learning perspective of language generation. This framing will allow us to borrow theoretical machinery from imitation learning literature to formalize the exposure bias issue and quantify it in terms of accumulation of error due to procedural mismatch between MLE-based training and generation.
+
+We start by posing language generation as a sequential decision-making problem and language modeling as an instance of imitation learning. We use these parallels to show that behavior cloning, an imitation learning algorithm, is equivalent to teacher forcing under the choice of a particular loss function. We then exploit this equivalence to quantify the error accumulation due to exposure bias.
+
+Language Generation is a Sequential Decision-Making Problem: A sequential decision-making problem can be formalized as learning a policy $\pi (a_{t}|s_{t})$ over a space of actions $a_{t}\in \mathcal{A}$ and states $s_t\in S$ where the next state $s_{t + 1}$ is conditioned on the current state-action pair and is determined by the transition distribution $P(s_{t + 1}|s_t,a_t)$ . We can use this framework to pose language generation as an instance of a sequential decision-making problem with language model $p_{\theta}$ as the policy, contexts $w_0^{t - 1}\in \mathcal{V}^*$ as states, the next token prediction $w_{t}\in \mathcal{V}$ as actions, and concatenation function as the transition function.
+
+²As $w_{0}$ is usually a fixed SOS token, $p(w_0) = 1$. We will drop $p(w_0)$ from the subsequent equations for brevity.
+
+This perspective allows us to appreciate the fact that, during generation, predictions at previous steps affect the next predictions, and error over time can cascade resulting in incoherent sequences.
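+
+To make this mapping concrete, the following toy sketch (our illustration, not the paper's code) spells out the correspondence: states are token prefixes, actions are next tokens, and the transition function is concatenation; `next_token_logprobs` is again a hypothetical language model interface.
+
+```python
+from dataclasses import dataclass
+from typing import Callable, List
+
+State = List[int]   # w_0^{t-1}: the context, a prefix of token ids
+Action = int        # w_t: the next token
+
+def transition(state: State, action: Action) -> State:
+    """Deterministic transition: concatenate the chosen token onto the context."""
+    return state + [action]
+
+@dataclass
+class LMPolicy:
+    """Wraps a language model as a policy pi(a_t | s_t)."""
+    next_token_logprobs: Callable[[State], List[float]]  # hypothetical LM interface
+
+    def act(self, state: State) -> Action:
+        logp = self.next_token_logprobs(state)
+        return max(range(len(logp)), key=lambda a: logp[a])  # e.g. the greedy action
+```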
+
+Language Modeling is Imitation Learning: Imitation learning is a class of methods for solving a sequential decision-making problem while having access to the oracle policy $o$ or to data generated by the oracle, i.e., $\mathcal{D} = \{(s_t,a_t)|s_t\sim d_o^t,a_t\sim o(\cdot |s_t)\}$ . Here, $d_{o}^{t}$ is the oracle-induced state-visitation distribution at time $t$ . In imitation learning, an agent learns a model policy $\pi$ that reproduces the expert policy $o$ , but on the state-visitation distribution $d_{\pi}^{t}$ induced by the model policy $\pi$ .
+
+The sequential decision-making perspective of language generation allows us to pose language modeling as an instance of imitation learning—learning a model for a sequential decision-making problem with the help of an expert oracle (in RL-based methods) or using the data generated by the oracle (for MLE-based methods).
+
+Teacher Forcing is Behavior Cloning: The assumption of access to an oracle is unrealistic in many scenarios. Behavior cloning is an approach to solve an imitation learning problem using only the training data generated by an oracle. In this setup, the state-action pairs in the training data are assumed to be identically and independently distributed. This is equivalent to reducing a sequential decision-making problem to a supervised multi-class classification learning problem.
+
+Concretely, this learning problem can be seen as minimizing the expected per-step loss under the state distribution induced by the oracle:
+
+$$
+\begin{aligned}
+L^{BC}(\pi) &= \sum_{t = 1}^{T} \mathbb{E}_{s_t \sim d_o^t}\left[\, l(s_t, \pi; o) \,\right] \quad &(4)\\
+&\approx \frac{-1}{|\mathcal{D}|} \sum_{(s_t, a_t) \in \mathcal{D}} l(s_t, \pi; o), \quad &(5)
+\end{aligned}
+$$
+
+Here, $L^{BC}(\pi)$ is the behavior cloning loss and $l(s,\pi ;o)$ is the per-step loss.
+
+Similarly, in practical scenarios, language models are also trained on a finite training corpus, $\mathcal{D}$ , that is assumed to be generated by the oracle, i.e., $\mathcal{D} = \{(w_0^{t - 1},w_t)|w_0^{t - 1}\sim d_o^{t - 1},w_t\sim o(\cdot |w_0^{t - 1})\}$ .
+
+The maximum likelihood training loss from Equation 3 can be reformulated as learning the distribution over next tokens, conditioned on the training contexts generated by the oracle, $w_0^{t - 1}\sim d_o^{t - 1}$ :
+
+$$
+\begin{aligned}
+L^{TF}(p_\theta) &= \frac{-1}{|\mathcal{D}|} \sum_{(w_0^{i-1}, w_i) \in \mathcal{D}} \log p_\theta\left(w_i \mid w_0^{i-1}\right), \quad &(6)\\
+&\approx \sum_{t=1}^{T} \mathbb{E}_{w_0^{t-1} \sim d_o^t}\left[ -\log p_\theta\left(w_t \mid w_0^{t-1}\right) \right] \quad &(7)
+\end{aligned}
+$$
+
+The behavior cloning loss (Equation 4) is equivalent to the language modeling loss (Equation 7) with $l(p_{\theta}, w_0^{t-1}; o) = -\log p_{\theta}(w_t \mid w_0^{t-1})$ .
+
+For our analysis though, we define per-step loss for language modeling, $l(p_{\theta}, w_0^{t-1}; o)$ as:
+
+$$
+l \left(p _ {\theta}, w _ {0} ^ {t - 1}; o\right) = \underset {w _ {t} \sim o \left(\cdot \mid w _ {0} ^ {t - 1}\right)} {\mathbb {E}} \log \frac {o \left(w _ {t} \mid w _ {0} ^ {t - 1}\right)}{p _ {\theta} \left(w _ {t} \mid w _ {0} ^ {t - 1}\right)}, \tag {8}
+$$
+
+This definition ensures that the per-step loss for oracle is zero, i.e., $l(o, w_0^{t-1}; o) = 0$ .
+
+The per-step loss defined by Equation 8 ensures that the behavior cloning loss, $L^{BC}(p)$ , under our definition is equivalent to the teacher forcing loss, $L^{TF}(p)$ , up to a constant term. This equivalence ensures that the model learned by minimizing either of the two losses is identical.
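+
+A small sketch of the per-step loss in Equation 8, which is the KL divergence between the oracle's and the model's next-token distributions at a fixed context (the arrays below are hypothetical inputs, not tied to any particular model):
+
+```python
+import numpy as np
+
+def per_step_loss(oracle_probs, model_probs, eps=1e-12):
+    """KL(o || p_theta) at one context: E_{w_t ~ o}[log o(w_t|ctx) - log p_theta(w_t|ctx)]."""
+    o = np.asarray(oracle_probs, dtype=np.float64)
+    p = np.asarray(model_probs, dtype=np.float64)
+    return float(np.sum(o * (np.log(o + eps) - np.log(p + eps))))
+
+# The property l(o, w_0^{t-1}; o) = 0: the oracle scored against itself incurs no loss.
+uniform = np.ones(4) / 4
+assert abs(per_step_loss(uniform, uniform)) < 1e-9
+```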
+
+Language Generation is Policy Rollouts: During policy rollouts, an agent in state $s_t$ , executes the action $a_t$ , sampled from policy $\pi$ , and ends up in state $s_{t+1}$ . The agent's next state is dependent upon its own actions. This state evolution can be formulated as sampling from state-visitation distribution induced by the policy $\pi$ , i.e., $s_{t+1} \sim d_{\pi}^{t+1}$ .
+
+The performance of policy $\pi$ during rollouts can be measured using the loss (cost) of executing the policy $\pi$ , i.e.,
+
+$$
+L ^ {I} (\pi) = \sum_ {t = 1} ^ {T} \mathbb {E} _ {s _ {t} \sim d _ {\pi} ^ {t}} [ l (s _ {t}, \pi ; o) ] \tag {9}
+$$
+
+We can also formulate language generation in terms of policy rollouts from imitation learning. Mathematically, we can express generation as sampling contexts from model's context distribution, i.e., $w_0^{j-1} \sim d_{p_\theta, \mathcal{F}}^j$ , and generating the next token $w_j$ conditioned on $w_0^{j-1}$ , using the decoding algorithm $\mathcal{F}$ , i.e.,
+
+$$
+\left\{w _ {j} = \mathcal {F} \left(p _ {\theta}, w _ {0} ^ {j - 1}\right) \mid w _ {0} ^ {j - 1} \sim d _ {p _ {\theta}, \mathcal {F}} ^ {j} \right\} \tag {10}
+$$
+
+We can now define the inference-time loss, $L^{I}(p_{\theta})$ as the accumulated $T$ -step loss of model $p_{\theta}$ imitating oracle $o$ on the context distribution induced by the model:
+
+$$
+L^{I}(p_{\theta}) = \sum_{t = 1}^{T}\mathbb{E}_{\substack{w_{0}^{t - 1}\sim d_{p_{\theta},\mathcal{F}}^{t}\\ w_{t}\sim o(\cdot |w_{0}^{t - 1})}}\log \frac{o(w_{t}|w_{0}^{t - 1})}{p_{\theta}(w_{t}|w_{0}^{t - 1})}, \tag{11}
+$$
+
+where $d_{p_\theta, \mathcal{F}}^t(w_0^t) \coloneqq p_\theta(w_0^{t-1})$ , is the context distribution at step $t$ , induced due to use of model $p_\theta$ and the decoding algorithm $\mathcal{F}$ , from step 1 to $t-1$ .
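+
+As an illustration, the inference-time loss of Equation 11 can be estimated for a single prompt by rolling the model out with a decoding algorithm and scoring each step under the (approximate) oracle. In the sketch below, `oracle_probs`, `model_probs`, and `decode_step` are hypothetical callables standing in for the oracle, the model, and $\mathcal{F}$ ; the naming is ours, not the paper's.
+
+```python
+import numpy as np
+
+def inference_time_loss(oracle_probs, model_probs, decode_step, prompt, T):
+    """Accumulated loss of the model imitating the oracle on its own rollout contexts."""
+    prefix, total = list(prompt), 0.0
+    for _ in range(T):
+        o = np.asarray(oracle_probs(prefix), dtype=np.float64)
+        p = np.asarray(model_probs(prefix), dtype=np.float64)
+        # per-step loss of Equation 8, evaluated on a context drawn from d_{p_theta, F}
+        total += float(np.sum(o * (np.log(o + 1e-12) - np.log(p + 1e-12))))
+        prefix.append(decode_step(model_probs, prefix))  # w_j = F(p_theta, w_0^{j-1})
+    return total
+```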
+
+# 4 Exposure Bias and Error Accumulation
+
+Ranzato et al. (2016) defined exposure bias as a behavioral mismatch between maximum likelihood-based training and generation procedure. During maximum likelihood-based training, the next token distribution is conditioned on ground truth data whereas, during generation, it has to rely on the model's own previously generated tokens. They also postulated that this training and generation context distribution mismatch might result in an accumulation of errors during generation.
+
+Intuitively, when the model produces a token $w_{i}$ that makes the resulting context $w_{0}^{i}$ unfamiliar, it might not be able to continue the generation adequately and is likely to produce another token which will further make the context flawed. This phenomenon reinforces itself as the context drifts further from what the oracle would produce, leading to an accumulation of errors.
+
+In the imitation learning literature, the accumulation of errors while rolling out a policy trained using behavior cloning is analyzed in the terms of inference-time regret of the behavior cloning policy, $\pi_{BC}$ , with respect to the oracle policy, $o$ , (Ross and Bagnell, 2010; Ross et al., 2011) i.e.,
+
+$$
+\mathcal {R} \left(\pi_ {B C}\right) = L ^ {I} \left(\pi_ {B C}\right) - L ^ {I} (o) \tag {12}
+$$
+
+Let $\epsilon_{t}$ be the expected error of executing policy $\pi$ at step $t$ on the state-visitation distribution induced by the oracle $o$ , i.e.,
+
+$$
+\epsilon_ {t} = \mathbb {E} _ {s \sim d _ {o} ^ {t}} [ l (s, \pi ; o) ] \tag {13}
+$$
+
+Let $\epsilon$ be the average expected error of executing policy $\pi$ over $T$ steps, i.e., $\epsilon = 1 / T\sum_{t = 1}^{T}\epsilon_{t}$ . Assuming $l(s,\pi ;o)$ is an upper bound on the [0, 1] loss, we can bound the regret for a policy $\pi_{BC}$ as
+
+$$
+T \epsilon \leq \mathcal {R} \left(\pi_ {B C}\right) \leq T ^ {2} \epsilon . \tag {14}
+$$
+
+The lower-bound in Equation 14 assumes no accumulation of error, hence an expected error of $\epsilon$ at each step, whereas the upper bound assumes the worst-case scenario, resulting in linear growth in error at each step and overall quadratic accumulative growth w.r.t. maximum sequence length $T$ .
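+
+For intuition, with illustrative numbers (not taken from the paper), a maximum length of $T = 512$ and an average per-step error of $\epsilon = 0.01$ give
+
+$$
+T\epsilon = 512 \times 0.01 = 5.12, \qquad T^{2}\epsilon = 512^{2} \times 0.01 \approx 2621,
+$$
+
+so the worst-case regret exceeds the no-accumulation baseline by exactly a factor of $T$.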
+
+Relying on the imitation learning perspective of language generation presented in the previous section, we can now borrow this regret-based analysis from imitation learning literature to similarly bound the regret of a language generation model as
+
+$$
+T \epsilon \leq \mathcal {R} \left(p _ {\theta}, \mathcal {F}\right) \leq T ^ {2} \epsilon . \tag {15}
+$$
+
+where $p_{\theta}$ is the model being used for generation, $\mathcal{F}$ is the decoding method being used for generation, $\epsilon = 1 / T\sum_{t = 1}^{T}\epsilon_{t}$ and $\epsilon_{t}$ is defined as
+
+$$
+\epsilon_ {t} = \underset { \begin{array}{c} w _ {0} ^ {t - 1} \sim d _ {o} ^ {t} \\ w _ {t} \sim o (\cdot | w _ {0} ^ {t - 1}) \end{array} } {\mathbb {E}} \log \frac {o \left(w _ {t} \mid w _ {0} ^ {t - 1}\right)}{p _ {\theta} \left(w _ {t} \mid w _ {0} ^ {t - 1}\right)} \tag {16}
+$$
+
+We will now use these bounds on the regret to analyze and quantify the error accumulation due to exposure bias in language generation.
+
+# 5 Quantifying Error Accumulation due to Exposure Bias
+
+In our analysis, we use two metrics, $\mathrm{AccErr}_{\leq}(l)$ and $\% \mathrm{ExAccErr}_{\leq}(l)$ to measure the impact of error accumulation due to exposure bias.
+
+We define accumulated errors up to length $l$ , $\mathrm{AccErr}_{\leq}(l)$ , as
+
+$$
+\operatorname {A c c E r r} _ {\leq} (l) = \mathcal {R} _ {\leq l} \left(p _ {\theta}, \mathcal {F}\right) / \epsilon_ {\leq l} \tag {17}
+$$
+
+Here, $\mathcal{R}_{\leq l}(p_{\theta},\mathcal{F})$ is the regret due to the use of the language model $p_{\theta}$ and the decoding method $\mathcal{F}$ up to sequence length $l$ , and $\epsilon_{\leq l} = 1 / l\sum_{t = 1}^{l}\epsilon_{t}$ is the expected per-step error up to length $l$ .
+
+This metric captures the growth of error w.r.t. sequence length $l$ . If exposure bias does indeed lead to error accumulation, $\mathrm{AccErr}_{\leq}(l)$ should grow super-linearly w.r.t. $l$ .
+
+We define our second metric, $\% \mathrm{ExAccErr}_{\leq}(l)$ , as percentage of excess errors committed by the model that can be attributed to exposure bias, i.e.,
+
+$$
+\% \operatorname{ExAccErr}_{\leq}(l) = \frac{\mathcal{R}_{\leq l}\left(p_{\theta}, \mathcal{F}\right) - l\,\epsilon_{\leq l}}{l\,\epsilon_{\leq l}} \times 100
+$$
+
+Here, $l\epsilon_{\leq l}$ is the lower bound on the regret and is the minimum number of errors ( $\epsilon$ per step) a model would make if there was no accumulation of errors.
+
+$\% \mathrm{ExAccErr}_{\leq}(l)$ allows us to compare models, training algorithms, and decoding strategies on the extra error that might be caused/mitigated by their use. A model, training algorithm, or a decoding strategy that perfectly mitigates the exposure bias will result in zero excess accumulated errors.
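+
+A minimal sketch of computing both metrics, assuming two per-step error sequences have already been estimated (hypothetical arrays, not the paper's code): `err_on_oracle[t]` is the per-step error on oracle-induced contexts, and `err_on_model[t]` is the corresponding per-step loss on model-induced contexts, whose running sum equals $\mathcal{R}_{\leq l}(p_{\theta},\mathcal{F})$ because $L^{I}(o) = 0$ under the loss of Equation 8.
+
+```python
+import numpy as np
+
+def acc_err(err_on_oracle, err_on_model, l):
+    """AccErr_<=(l) of Equation 17: regret up to l over the average per-step error up to l."""
+    eps_le_l = float(np.mean(err_on_oracle[:l]))
+    regret_le_l = float(np.sum(err_on_model[:l]))
+    return regret_le_l / eps_le_l
+
+def pct_ex_acc_err(err_on_oracle, err_on_model, l):
+    """%ExAccErr_<=(l): excess errors over the no-accumulation baseline l * eps_<=l."""
+    eps_le_l = float(np.mean(err_on_oracle[:l]))
+    regret_le_l = float(np.sum(err_on_model[:l]))
+    return 100.0 * (regret_le_l - l * eps_le_l) / (l * eps_le_l)
+```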
+
+In the rest of the paper, we use these definitions to show: 1.) error accumulation in language generation is real, 2.) perplexity fails to capture this error accumulation, 3.) lower exposure bias correlates with a higher quality generation that is more coherent, uses diverse vocabulary, and is less repetitive.
+
+# 6 Study Setup: Open-ended Generation
+
+Text Completion Setup: Text completion is the standard experimental setup to measure the quality of generation in open-ended language generation (Holtzman et al., 2019; Welleck et al., 2019). It is also a generalization of numerous practical language generation applications such as story generation (Fan et al., 2018), contextual text completion (Radford et al., 2019), dialog modeling (Zhang et al., 2018), etc.
+
+Text completion models take a text passage or prefix $w_0^j \sim o$ as an input and generate a coherent continuation of the prefix, $w_{j+1}^n$ using the language model $p_\theta$ and the decoding algorithm $\mathcal{F}$ , i.e., $w_{j+1}^n = \mathcal{F}(p_\theta, w_0^j)$ . In this paper, we use this text-completion setup to analyze the error accumulation due to exposure bias and its correlation with language generation quality.
+
+Language Model and Dataset: We conduct our analysis using the GPT-2 language model (Radford et al., 2019). We use the GPT2-117M model as our evaluation language model and use the train split of Wikitext-103 (Merity et al., 2016) for prompts. We rely on a GPT-2 model fine-tuned on Wikitext-103 as our approximate oracle. We tokenize the Wikitext-103 dataset using GPT-2's tokenization scheme and chunk the train split into sequences of length 512. Of these, we use the first 50 tokens as prompts for our generation experiments and generate completions to a maximum length of 512 or up to the end-of-sequence token. We use a total of $20k$ prompts for our evaluation.
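+
+A rough sketch of this setup using the Hugging Face `transformers` and `datasets` libraries is shown below (public checkpoint and dataset names; the fine-tuned oracle checkpoint, the corpus slice, and the exact preprocessing are only approximations of the authors' pipeline):
+
+```python
+from datasets import load_dataset
+from transformers import GPT2LMHeadModel, GPT2TokenizerFast
+
+tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")   # GPT-2 (117M) tokenizer
+model = GPT2LMHeadModel.from_pretrained("gpt2")         # evaluation language model
+
+wiki = load_dataset("wikitext", "wikitext-103-v1", split="train")
+# Flatten a slice of the corpus, chunk it into length-512 token sequences,
+# and keep the first 50 tokens of each chunk as a generation prompt.
+ids = tokenizer("".join(wiki[:5000]["text"]))["input_ids"]
+chunks = [ids[i:i + 512] for i in range(0, len(ids) - 511, 512)]
+prompts = [chunk[:50] for chunk in chunks]
+```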
+
+# 7 Results
+
+# 7.1 Error Accumulation in Language Generation is Real!
+
+Figure 1a plots $\mathrm{AccErr}_{\leq}(l)$ w.r.t. sequence length $l$ . The support line (dotted, orange) $y = x$ marks linear growth. The plot shows that $\mathrm{AccErr}_{\leq}(l)$ grows near-quadratically w.r.t. sequence length, empirically validating the theory that exposure bias leads to an accumulation of errors. Figure 1b further strengthens this claim by demonstrating near-linear growth in excess errors w.r.t. sequence length.
+
+We hypothesize that these excess errors would manifest in the form of language degeneration, especially in the latter part of the sequence, and would cause issues such as hallucinations, limited vocabulary, and word- and phrase-level repetitions.
+
+# 7.2 Perplexity is Not Enough
+
+Perplexity is a standard measure used to evaluate the quality of a language model. It is often used as a proxy measure for the text generation quality of the language model. In this section, we argue perplexity paints an incomplete picture regarding a model's ability to generate high-quality coherent text. It only captures the average per-step error generalization gap (or lack of it) but fails to account for the error accumulation due to exposure bias. These accumulated errors, as seen in the previous section, can grow near-quadratically and can prove to be a major concern for any generation model that generates sequences longer than a few words.
+
+Perplexity can be seen as scaled exponentiated average per-step error, $\epsilon$ , computed over a held-out test set, $\mathcal{D}_h$ :
+
+$$
+\begin{aligned}
+\epsilon &= \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{\substack{w_0^{t-1} \sim d_o^t \\ w_t \sim o(\cdot \mid w_0^{t-1})}} \log \frac{o(w_t \mid w_0^{t-1})}{p_\theta(w_t \mid w_0^{t-1})} \quad &(18)\\
+&\approx \frac{-1}{|\mathcal{D}_h|} \sum_{(w_0^{i-1}, w_i) \in \mathcal{D}_h} \log p_\theta\left(w_i \mid w_0^{i-1}\right) + c \quad &(19)\\
+&= H\left(p_\theta; \mathcal{D}_h\right) + c. \quad &(20)
+\end{aligned}
+$$
+
+where $H(p_{\theta};\mathcal{D}_h)$ is the entropy rate (log perplexity) of the model $p_{\theta}$ on the held-out test set $\mathcal{D}_h$ .
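+
+In code, the relationship between held-out log-likelihoods, the entropy rate, and perplexity can be sketched as follows (`token_logprobs` is a hypothetical list of the model's per-token $\log p_{\theta}(w_i \mid w_0^{i-1})$ values on $\mathcal{D}_h$):
+
+```python
+import math
+
+def entropy_rate(token_logprobs):
+    """H(p_theta; D_h): average negative log-likelihood per held-out token."""
+    return -sum(token_logprobs) / len(token_logprobs)
+
+def perplexity(token_logprobs):
+    """Perplexity is the exponentiated entropy rate."""
+    return math.exp(entropy_rate(token_logprobs))
+```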
+
+As entropy rate is a linear function of average per-step error, we hypothesize that it will only be able to measure the per-step generalization gap of the model and will fail to capture the error accumulation caused by reducing a sequential decision-making problem to a supervised learning problem.
+
+
+(a) $\mathrm{AccErr}_{\leq}(l)$ vs. $l$
+
+(b) $\% \mathrm{ExAccErr}_{\leq}(l)$ vs. $l$
+
+Figure 1: Figure 1a plots the accumulated error up to length $l$ ($\mathrm{AccErr}_{\leq}(l)$) w.r.t. $l$. The graph shows the quadratic growth of accumulated errors w.r.t. sequence length ($l$) predicted by the theory. Figure 1b plots the percentage of excess errors due to error accumulation ($\% \mathrm{ExAccErr}_{\leq}(l)$) caused by exposure bias. It indicates that the extra errors due to exposure bias grow near-linearly with sequence length, and that decoding using greedy search results in over 70% more errors.
+
+ | Search | %ExAccErr (↓) | seq-rep-4 (↓) | rep (↓) | wrep (↓) | uniq (↑) |
| Greedy | 60.96% | 0.8990 | 0.4423 | 0.4136 | 7833 |
| Beam (k=5) | 69.72% | 0.8094 | 0.4064 | 0.3787 | 10966 |
| Sampling | | | | | |
| w/ Temp (temp=1) | 39.37% | 0.1883 | 0.2547 | 0.2301 | 23729 |
| w/ Temp (temp=1.2) | 24.75% | 0.1556 | 0.2271 | 0.2033 | 25225 |
| w/ top-k (k=100) | 35.37% | 0.1690 | 0.2409 | 0.2166 | 26251 |
| w/ top-p (p=0.94) | 48.71% | 0.2218 | 0.2743 | 0.2490 | 22582 |
| Human | - | 0.0274 | 0.4338 | - | 28739 |
+
+Table 1: Impact of error accumulation on generation quality. We observe that stochastic decoding methods not only lead to diverse language generation but also have lower exposure bias than the deterministic methods.
+
+In Figure 2, we plot the entropy rate, $H(p_{\theta}; \mathcal{D}_h)_{\leq l}$ , w.r.t. average per-step error, $\epsilon_{\leq l}$ and length-normalized regret up to length $l$ , $\mathcal{R}_{\leq l}(p_{\theta}, \mathcal{F}) / l$ . We observe a strong correlation between the entropy rate and average per-step error $(\rho = 0.9997)$ validating our theoretical observation that perplexity can capture the per-step generalization gap of language model $p_{\theta}$ . On the other hand, the length-normalized regret exhibits poor correlation with the entropy rate $(\rho = 0.4003)$ indicating perplexity's failure to capture the error accumulation due to exposure bias.
+
+A case in point of perplexity's inability to capture error accumulation is the degenerate behavior of GPT-2 (Radford et al., 2019) when generating moderately long sequences under greedy or beam search. GPT-2 has a low zero-shot perplexity on the held-out set of the Wikitext-103 dataset (perplexity: 37.50), yet it suffers from degeneration issues such as repetition, low vocabulary usage, and a lack of coherent generation. We hypothesize that the degenerate behavior of large pre-trained language models such as GPT-2 under greedy or beam search is the result of this accumulation of errors. An example of this behavior is presented in Table 2, where we observe GPT-2 generating repetitive and incoherent completions for a Wikitext-103 prompt under deterministic decoding schemes such as greedy and beam decoding.
+
+
+Figure 2: Analyzing (log) perplexity $(H_{\leq l})$ w.r.t. average per-step error $(\epsilon_{\leq l})$ and length-normalized exposure bias regret $(\mathcal{R}_{\leq l}(p_{\theta},\mathcal{F}) / l)$ . We observe that perplexity strongly correlates with average per-step error $(\rho = 0.9997)$ , but it has a weaker correlation with length-normalized regret $(\rho = 0.4003)$ .
+
+# 7.3 Error Accumulation impacts Generation Quality
+
+Finally, we examine the hypothesis that poor text generation capabilities of pre-trained large language models under greedy decoding might be due to the error accumulation caused by a procedural mismatch between generation and maximum likelihood training (Vijayakumar et al., 2016; Welleck et al., 2019; Holtzman et al., 2019).
+
+The regret-based definition of error accumulation allows us to analyze exposure bias along two axes of variation: the trained language model, $p_{\theta}$ , and the decoding algorithm, $\mathcal{F}$ . In this set of experiments, we explore the impact of various decoding schemes on error accumulation due to exposure bias and the quality of the completed text.
+
+Benchmarking various decoding strategies allows us to verify whether an accumulation of errors does indeed lead to degeneration, because the choice of decoding algorithm does not affect the average per-step error, $\epsilon$ , or the held-out test set perplexity $(H(p_{\theta};\mathcal{D}_h)_{\leq l})$ . This rules out the role of modeling and model training in language degeneration across different decoding algorithms. Hence, it is reasonable to causally link a decoding algorithm's improvement in language generation to its ability to reduce error accumulation.
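+
+As an illustration, the decoding variants compared below can be expressed as Hugging Face `generate` configurations (a plausible mapping, not the authors' exact generation code); `model`, `tokenizer`, and `prompts` are assumed to come from the setup sketch in Section 6.
+
+```python
+import torch
+
+prompt_ids = torch.tensor([prompts[0]])  # one length-50 prompt
+
+decoding_configs = {
+    "greedy":     dict(do_sample=False),
+    "beam_k5":    dict(do_sample=False, num_beams=5),
+    "temp_1.2":   dict(do_sample=True, temperature=1.2, top_k=0),
+    "top_k_100":  dict(do_sample=True, top_k=100),
+    "top_p_0.94": dict(do_sample=True, top_p=0.94, top_k=0),
+}
+
+completions = {
+    name: model.generate(prompt_ids, max_length=512,
+                         pad_token_id=tokenizer.eos_token_id, **cfg)
+    for name, cfg in decoding_configs.items()
+}
+```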
+
+For a quantitative analysis of the impact of various decoding algorithms on the quality of language generation, we measure completion quality using the same metrics as Welleck et al. (2019): 1.) rep/128 measures whether the predicted token at step $t$ occurs in the previous 128 steps, 2.) wrep/128 counts the prediction's repetition at step $t$ only if the predicted token is not the ground-truth token at that position, 3.) seq-rep-4 measures repetition at the 4-gram level, and 4.) uniq measures vocabulary diversity by counting the number of unique tokens generated by the model. A sketch of these metrics follows Table 2 below.
+
+Context: ... He made one attempt to break through the blockade on 23 December, but it proved impossible to do as the Union controlled every channel from Savannah, aided ...
+
+Greedy: ... Savannah, aided by the British, to the North. The blockade was broken on 23 December by the British and the Americans. The blockade was broken on 23 December by the British and the Americans. The blockade was broken on 23 December by the British and the Americans. The blockade was broken on 23 December by the British and the Americans.
+
+BeamSearch (k=5): ... Savannah, aided by the North Carolina Railroad. In the spring of 1788, in the midst of the blockade, he made a move to join the Confederacy, but it was too late. On 23 December 1788, he marched to the battle of Custer's Creek in the South Carolina River. In the following weeks, he returned to Savannah on 23 December, but it was too late. On 24 December 1788, he made another attempt to break through the blockade on 23 December, but it was too late. ...
+
+Sampling w/ Temp. (temp=1.2): ... Savannah, aided by US Navy ships, and took him by surprise. One of the most famous incidents involved General Grant's arrest at the Battle of Camp Amish on 16 August. He was being held by the Union Army in Atlanta and accused of taking down General Joseph E. Douglas, a Marine commander who was in command of Fort Hood, an Army outpost near Fort Laramie, Mississippi. ...
+
+Top-k Sampling (k=100): ... Savannah, aided by a little naval warplane bombardment which had crippled many parts of the ship. As an early American merchant vessel he was carried forward as a heavy merchant, while at some time it was necessary to take up residence in Savannah to be used as a regular transport vessel. In the meantime, on the 20th of January 1816, Jefferson died. He was buried in an unmarked grave in the chapel inside Savannah in the family home at Glenbrook, where the rest of the family resides. ...
+
+Top-p Sampling (p=0.94): ... Savannah, aided in the capture of his land and his supplies. He made two attempts to stop a blockade on 28 December, but failed to make it. There were, however, six people in Savannah who were convinced that Captain Walker was the cause of the blockade. He then made a trip to North Carolina where he gave up hope. ...
+
+Gold: ... He made one attempt to break through the blockade on 23 December, but it proved impossible to do as the Union controlled every channel from Savannah, aided by their occupation of Tybee Island at the mouth of the Savannah River. Bulloch reported to Mallory in late January 1862 that breaking out was hopeless so Mallory ordered him to turn the ship over to another officer and to return to Europe some other way. ...
+
+Table 2: Examples of completions using various decoding methods. We observe that the deterministic decoding schemes produce less diverse, incoherent, and more repetitive (highlighted in red) text.
+
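+A minimal sketch of the repetition and diversity metrics listed above, following the definitions as paraphrased in this paper (the evaluation scripts of Welleck et al. (2019) may differ in details such as tokenization and the ground-truth alignment that the wrep variant requires):
+
+```python
+def seq_rep_4(tokens):
+    """Fraction of duplicated 4-grams within one generated continuation."""
+    grams = [tuple(tokens[i:i + 4]) for i in range(len(tokens) - 3)]
+    return 0.0 if not grams else 1.0 - len(set(grams)) / len(grams)
+
+def rep_128(tokens):
+    """Fraction of predicted tokens that already occur in the previous 128 tokens."""
+    hits = sum(1 for t in range(len(tokens)) if tokens[t] in tokens[max(0, t - 128):t])
+    return hits / max(len(tokens), 1)
+
+def uniq(continuations):
+    """Number of unique token ids used across all generated continuations."""
+    return len({tok for tokens in continuations for tok in tokens})
+```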
+
+Table 1 shows that the use of stochastic sampling-based decoding algorithms results in diverse and more coherent language generation and a lower percentage of excess errors. Sampling with temperature (with temp=1.2) leads to the least amount of repetition (both at the token and the n-gram level), second highest vocabulary diversity, and the least amount of excess errors due to exposure bias. This also bears out from our qualitative analysis in Table 2 as sampling with temperature produces the most coherent text. Deterministic decoding schemes, in contrast, fare poorly in both reducing exposure bias and on language generation quality metrics, producing repetitive and incoherent text. These quantitative and qualitative experiments offer us evidence that reducing exposure bias does lead to more coherent text generation.
+
+We hypothesize that the reasonable amount of randomness introduced by stochastic sampling helps the model avoid sampling the most likely token at each time step, thus avoiding possible divergent contexts that might have resulted in a degenerate completion later in the sequence. We conjecture that this timely intervention keeps the generation context distribution from moving too far away from the training context distribution, helping the model avoid a compounding of errors. This is also borne out by qualitative analysis, as a reasonable amount of stochasticity does result in text that looks more coherent and oracle-like. A broader analysis of this behavior is beyond the scope of this work and is left for future work.
+
+# 8 Related Work
+
+Non-MLE Training Methods: Several approaches have been proposed to mitigate the exposure bias issue, including RL-based optimization objectives (Ranzato et al., 2016; Shen et al., 2016; Bahdanau et al., 2017; Chen et al., 2020), learning to search (Leblond et al., 2018), energy-based models (Deng et al., 2020), imitation learning (Du and Ji, 2019), generative adversarial networks (Yu et al., 2017), and knowledge distillation (Liu et al., 2019). Although these methods motivate their approaches by the goal of reducing exposure bias, they neither formalize exposure bias clearly nor provide any empirical evidence that they mitigate its effect. In this paper, we proposed a quantifiable definition of exposure bias by analyzing the issue from a principled imitation learning perspective. This definition can be used or adapted to evaluate novel training and modeling approaches on their ability to reduce exposure bias.
+
+Smarter Decoding Methods: Large language models have unusually low test perplexities but they falter at coherent and diverse language generation tasks especially while using deterministic decoding schemes. Several authors (Vijayakumar et al., 2016; Welleck et al., 2019; Holtzman et al., 2019) have hypothesized that training and inference mismatch due to MLE-based training is responsible for the degenerate behavior. They have proposed smarter decoding schemes to mitigate the side effects of exposure bias resulting in better generation quality. Despite this being an active area of research, this often-repeated hypothesis for degenerate generation behavior has not received serious treatment till now. In this paper, we take a step towards explaining this discrepancy and show that error accumulation due to exposure bias might be the reason for this degenerate behavior and explain why perplexity has a handicap in capturing this compounding of errors.
+
+Analyzing Exposure Bias: Schmidt (2019) and Wang and Sennrich (2020) link exposure bias to generalization gap due to distribution and domain shift respectively. Performance degradation under domain and distribution shift is a major issue with language generation, and direct evidence supporting this hypothesis will provide insights into building more robust language generation models. Unfortunately, neither of the papers formalize the notion of exposure bias or empirically link the generalization gap to exposure bias directly.
+
+Three recent papers, Xu et al. (2019); Zhang et al. (2019b); He et al. (2019), have tried to empirically evaluate the impact of exposure bias on language generation. The first two papers validate the existence of exposure bias whereas He et al. (2019) show language models have self-recovering ability negating the impact of exposure bias. All three analyses are based on the empirical definition of exposure bias which, in turn, is based on the informal formulation by Ranzato et al. (2016).
+
+In this paper, we provide a principled and theoretically grounded approach to analyze exposure bias in language generation and show that it is indeed a problem and that it might explain the degeneration issue with large language models on open-ended tasks under deterministic decoding.
+
+# 9 Discussion
+
+In this paper, we analyze language generation from an imitation learning perspective. We use this analysis to arrive at a theoretical bound on error accumulation due to exposure bias. This bound predicts super-linear growth in error accumulation during generation. In our experiments, we validate this bound and show that error accumulation due to exposure bias does indeed grow super-linearly.
+
+We then show, both analytically and empirically, why perplexity is not enough to capture this accumulation of errors, and hypothesize that this accumulation is responsible for degenerate language generation. Finally, we provide evidence for this hypothesis by evaluating the impact of various decoding schemes on error accumulation and generation quality. We show that techniques that improve generation quality do result in lower error accumulation, causally linking language generation quality to error accumulation due to exposure bias.
+
+Our analysis provides a principled and theoretically grounded way to understand exposure bias. We believe this analysis can pave the way for developing smarter training and decoding algorithms that address this error accumulation, resulting in more robust language generation models.
+
+# Acknowledgments
+
+We would like to thank the reviewers for their valuable feedback. This work is supported by funding from Samsung Electronics. The last author is supported by the Canada CIFAR AI Chair program. This research was enabled in part by support provided by Calcul Québec3, and Compute Canada4. We would also like to thank Khimya Khetarpal, Sachin Grover, Ankit Anand, Jayakumar Subramanian for feedback on current and previous drafts of this paper, and colleagues at Borealis AI for their valuable inputs and discussions during the first author's internship at Borealis AI.
+
+# References
+
+Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv:1409.0473 [cs, stat]. ArXiv: 1409.0473.
+Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1171-1179. Curran Associates, Inc.
+Yu Chen, Lingfei Wu, and Mohammed J. Zaki. 2020. Reinforcement learning based graph-to-sequence model for natural question generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
+Byung-Ju Choi, Jimin Hong, David Park, and Sang Wan Lee. 2020. F2-Softmax: Diversifying Neural Text Generation via Frequency Factorized Softmax. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9167-9182, Online. Association for Computational Linguistics.
+Yuntian Deng, Anton Bakhtin, Myle Ott, Arthur Szlam, and Marc'Aurelio Ranzato. 2020. Residual energy-based models for text generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
+Wanyu Du and Yangfeng Ji. 2019. An empirical comparison on imitation learning and reinforcement learning for paraphrase generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 6011-6017. Association for Computational Linguistics.
+Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical Neural Story Generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.
+Tianxing He, Jingzhao Zhang, Zhiming Zhou, and James R. Glass. 2019. Quantifying exposure bias for neural language generation. CoRR, abs/1905.10617.
+Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The Curious Case of Neural Text Degeneration. arXiv:1904.09751 [cs]. ArXiv: 1904.09751.
+
+Rémi Leblond, Jean-Baptiste Alayrac, Anton Osokin, and Simon Lacoste-Julien. 2018. SEARNN: training rnns with global-local losses. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.
+Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. arXiv:1910.13461 [cs, stat].
+Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A Diversity-Promoting Objective Function for Neural Conversation Models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119, San Diego, California. Association for Computational Linguistics.
+Rui Liu, Berrak Sisman, Jingdong Li, Feilong Bao, Guanglai Gao, and Haizhou Li. 2019. Teacher-student training for robust tacotron-based TTS. CoRR, abs/1911.02839.
+Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1:1116-1126.
+Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer Sentinel Mixture Models. arXiv:1609.07843 [cs]. ArXiv: 1609.07843.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 21(140):1-67.
+Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
+Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y.-Lan Boureau, and Jason Weston. 2020. Recipes for building an open-domain chatbot. arXiv:2004.13637 [cs].
+
+Stéphane Ross and Drew Bagnell. 2010. Efficient reductions for imitation learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2010, Chia Laguna Resort, Sardinia, Italy, May 13-15, 2010, volume 9 of JMLR Proceedings, pages 661-668.
+Stephane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627-635.
+Florian Schmidt. 2019. Generalization in Generation: A closer look at Exposure Bias. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 157-167, Hong Kong. Association for Computational Linguistics.
+Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. arXiv:1704.04368 [cs]. ArXiv: 1704.04368.
+Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. arXiv:1507.04808 [cs]. ArXiv:1507.04808.
+Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum Risk Training for Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1683-1692, Berlin, Germany. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. arXiv:1706.03762 [cs]. ArXiv: 1706.03762.
+Ashwin K. Vijayakumar, Michael Cogswell, Ramprasath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models.
+Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. CoRR, abs/2005.03642.
+Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural Text Generation with Unlikelihood Training. arXiv:1908.04319 [cs, stat]. ArXiv: 1908.04319.
+Ronald J. Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270-280.
+
+Yifan Xu, Kening Zhang, Haoyu Dong, Yuezhou Sun, Wenlong Zhao, and Zhuowen Tu. 2019. Rethinking exposure bias in language modeling. CoRR, abs/1910.11235.
+Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 2852-2858. AAAI Press.
+Amy Zhang, Adam Lerer, Sainbayar Sukhbaatar, Rob Fergus, and Arthur Szlam. 2018. Composable Planning with Attributes. arXiv:1803.00512 [cs]. ArXiv: 1803.00512.
+Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019a. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization.
+Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019b. Bridging the gap between training and inference for neural machine translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4334-4343. Association for Computational Linguistics.
+Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DIALOGPT: Large-Scale Generative Pre-training for Conversational Response Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278, Online. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/whyexposurebiasmattersanimitationlearningperspectiveoferroraccumulationinlanguagegeneration/images.zip b/whyexposurebiasmattersanimitationlearningperspectiveoferroraccumulationinlanguagegeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0e2c2acd7effadbea840ba65c859cae42dd46e2f
--- /dev/null
+++ b/whyexposurebiasmattersanimitationlearningperspectiveoferroraccumulationinlanguagegeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4677d69a07298991c731dde047a60d55e56c099611f452158f45d5f3f41d7ef5
+size 230823
diff --git a/whyexposurebiasmattersanimitationlearningperspectiveoferroraccumulationinlanguagegeneration/layout.json b/whyexposurebiasmattersanimitationlearningperspectiveoferroraccumulationinlanguagegeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f4e42caf5787718b2860b0b342dcc80e38346dd7
--- /dev/null
+++ b/whyexposurebiasmattersanimitationlearningperspectiveoferroraccumulationinlanguagegeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b8301bc7cd973602a8468789141fb0614a93032a530a422f5279de0283af599
+size 433411
diff --git a/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/aafa1b51-d4ed-415b-a19a-992d74a0d71c_content_list.json b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/aafa1b51-d4ed-415b-a19a-992d74a0d71c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c50d0a1d155fc56d5ec4358d45b52931a042baf8
--- /dev/null
+++ b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/aafa1b51-d4ed-415b-a19a-992d74a0d71c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd2b0b64c4e1cea4ea816269050cd061d93b0e9b6209209b84fb62b8e5357aba
+size 55250
diff --git a/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/aafa1b51-d4ed-415b-a19a-992d74a0d71c_model.json b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/aafa1b51-d4ed-415b-a19a-992d74a0d71c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bb803e277f5359ed47d481267a80d6959be908e5
--- /dev/null
+++ b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/aafa1b51-d4ed-415b-a19a-992d74a0d71c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b79fb00f0a8855338d20d535281b0e4e85cc9340e70cce99a0640f58f74e9f6
+size 66045
diff --git a/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/aafa1b51-d4ed-415b-a19a-992d74a0d71c_origin.pdf b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/aafa1b51-d4ed-415b-a19a-992d74a0d71c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..737c68550944e2d569eee0d3b4802fe909b1abf9
--- /dev/null
+++ b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/aafa1b51-d4ed-415b-a19a-992d74a0d71c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63ec56cf96ec1ee2b5e49ccebb7690604fa2f271c774b256c1fd57a354583f47
+size 522700
diff --git a/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/full.md b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ea216902f1a8ac314297fc9cee2b43bde95be00c
--- /dev/null
+++ b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/full.md
@@ -0,0 +1,214 @@
+# Word-level Perturbation Considering Word Length and Compositional Subwords
+
+Tatsuya Hiraoka†, Sho Takase†, Kei Uchiumi‡, Atsushi Keyaki‡, Naoaki Okazaki†
+
+† Tokyo Institute of Technology
+
+$\ddagger$ Denso IT Laboratory, Inc.
+
+{tatsuya.hiraoka, sho.takase}@nlp.c.titech.ac.jp
+
+{kuchiumi, akeyaki}@d-itlab.co.jp
+
+okazaki@c.titech.ac.jp
+
+# Abstract
+
+We present two simple modifications for word-level perturbation: Word Replacement considering Length (WR-L) and Compositional Word Replacement (CWR). In conventional word replacement, a word in an input is replaced with a word sampled from the entire vocabulary, regardless of the length and context of the target word. WR-L considers the length of a target word by sampling words from the Poisson distribution. CWR considers the compositional candidates by restricting the source of sampling to related words that appear in subword regularization. Experimental results showed that the combination of WR-L and CWR improved the performance of text classification and machine translation.
+
+# 1 Introduction
+
+Word-level perturbation is a well-known technique used in NLP (Zhang and Yang, 2018; Takase and Kiyono, 2021). For example, word replacement (WR) (Bengio et al., 2015; Zhang and LeCun, 2015) randomly replaces words in the input sequence with words sampled from a vocabulary. The conventional WR uses a uniform distribution for sampling. Although simple, it is as effective as more complex methods such as adversarial perturbations (Takase and Kiyono, 2021). However, the conventional WR frequently replaces original words with unrelated words. If the replacement probability (a hyperparameter) is set too large, the perturbed input sequence becomes drastically different from the original one, which significantly degrades performance. Thus, it is important to search for an appropriate hyperparameter.
+
+Subword regularization (SR) (Kudo, 2018; Hiraoka et al., 2019; Provilkov et al., 2020) is another effective method for word-level perturbation. With SR, a different tokenization sampled from a pretrained language model is used in each training epoch. Because this method only changes the tokenization, unrelated words are not introduced. However,
+
+
+Figure 1: Outline of replacing the word "da" in "up/da/tion" using our method, CWR-L.
+
+sampling a tokenization takes longer owing to the complex procedure for managing the various tokenization candidates. In addition, the improvement achieved by SR is sometimes unimpressive compared with WR, even though it requires considerably more time.
+
+In this study, we propose two approaches that compromise between WR and SR. Our method restricts the candidates in WR to related words in terms of (1) word length and (2) tokenization. The first approach weights the distribution for word sampling by the length of the target word. The second approach strictly restricts the vocabulary for word sampling to compositional subwords of the original word, inspired by SR. These restrictions prevent the replacement of words with unrelated words and thus result in a stable improvement in NLP tasks even if the hyperparameter is varied. In addition, the sampling speeds of our methods are faster than that of SR because they do not require an alternative tokenization sequence. We empirically demonstrate the advantages of the proposed method for text classification and machine translation tasks.
+
+# 2 Related Work
+
+This work discusses word-level perturbation techniques in NLP. One popular perturbation technique is word replacement (Bengio et al., 2015; Zhang and LeCun, 2015), which randomly chooses input words and replaces them with other words in a vocabulary. Word dropout (Gal and Ghahramani, 2016) and unknown token replacement (Zhang et al., 2020) are variations of word replacement, which replace the selected words with zero embeddings and unknown tokens, respectively.
+
+There are some techniques to prevent using unrelated words in word replacement. Zhang et al. (2015a) replaces randomly selected words with their synonyms. Kobayashi (2018) employs a language model to replace the chosen words. Our work focuses on the tokenization units to restrict vocabulary to prevent using unrelated words.
+
+Subword regularization is another means of word-level perturbation. Kudo (2018) employs a unigram language model to sample tokenization for machine translation. Provilkov et al. (2020) modifies byte pair encoding to perturb the input tokenization. Hiraoka et al. (2019, 2020, 2021) introduces a technique to update the tokenizer during the training.
+
+# 3 Proposed Method
+
+Before describing our method, we provide a brief overview of the base method: WR. Let $\boldsymbol{x} = x_{1},\ldots x_{i},\ldots x_{I}$ be a sequence of words whose length is $I$ . The WR method randomly replaces $x_{i}$ with $\tilde{x}_i$ with probability $a$ using the following equations:
+
+$$
+\tilde {x} _ {i} \sim Q _ {V} \tag {1}
+$$
+
+$$
+x_{i} = \begin{cases} \tilde{x}_{i} & \text{with probability } a \\ x_{i} & \text{with probability } 1 - a, \end{cases} \tag{2}
+$$
+
+where $Q_V$ is the uniform distribution over the entire vocabulary $V$, and $a$ is a hyperparameter. We refer to an $x_i$ selected for replacement with probability $a$ as the target word.
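+
+As a concrete illustration, the following minimal Python sketch implements Eqs. 1 and 2; the vocabulary, the probability $a$, and the function name are toy choices of ours rather than the authors' implementation.
+
+```python
+import random
+
+def word_replace(words, vocab, a, rng=random):
+    """Conventional WR: replace each word with a uniform sample from the
+    vocabulary with probability a (Eqs. 1 and 2)."""
+    perturbed = []
+    for x in words:
+        if rng.random() < a:
+            perturbed.append(rng.choice(vocab))  # x~ drawn from Q_V (uniform)
+        else:
+            perturbed.append(x)
+    return perturbed
+
+# toy usage
+vocab = ["_Love", "_the", "_updated", "_format", "char", "up"]
+print(word_replace(["_Love", "_the", "_updated", "_format"], vocab, a=0.3))
+```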
+
+# 3.1 WR Considering Length (WR-L)
+
+The conventional WR often samples words whose length is similar to the average length of words in the corpus regardless of the length of the target word because we use a uniform distribution as $Q_V$ . We address this problem with a distribution
+
+| Method | Perturbed Example |
+| --- | --- |
+| Vanilla | _Love / _the / _updated / _format |
+| SR | _Love / _the / _update / d / _form / at |
+| WD | _Love / _the / **[PAD]** / _format |
+| UTR | _Love / _the / **[UNK]** / _format |
+| LM | _Love / _the / **_the** / _format |
+| WR | _Love / _the / **char** / _format |
+| WR-L | _Love / _the / **_nothing** / _format |
+| CWR | _Love / _the / **up** / _format |
+| CWR-L | _Love / _the / **_update** / _format |
+
+Table 1: Perturbed examples for each method. Replaced words are in bold.
+
+weighted by the Poisson distribution, whose mean is the target word length, as follows:
+
+$$
+p(\tilde{x}_{i} \mid x_{i}) = \frac{\operatorname{Poisson}\left(L_{\tilde{x}_{i}}; \lambda = L_{x_{i}}\right)}{Z}, \tag{3}
+$$
+
+where $L_{x_i}$ indicates the number of characters that comprise $x_i$ , and $Z$ is a normalization term that makes the sum of the probabilities 1.
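+
+For illustration, Eq. 3 can be realized by weighting every candidate by the Poisson probability mass at its character length and normalizing. The sketch below (our own, with a toy vocabulary) is one straightforward way to do this; the paper's faster list-based implementation is described in Appendix C.
+
+```python
+import math
+import random
+
+def poisson_pmf(k, lam):
+    return math.exp(-lam) * lam ** k / math.factorial(k)
+
+def wr_l_sample(target, vocab, rng=random):
+    """WR-L: sample a replacement with probability proportional to
+    Poisson(len(candidate); lambda = len(target)), as in Eq. 3."""
+    lam = len(target)
+    weights = [poisson_pmf(len(w), lam) for w in vocab]   # numerator of Eq. 3
+    total = sum(weights)                                  # normalization term Z
+    return rng.choices(vocab, weights=[w / total for w in weights], k=1)[0]
+
+vocab = ["_the", "_nothing", "char", "_update", "up", "at"]
+print(wr_l_sample("_updated", vocab))
+```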
+
+# 3.2 Compositional Word Replacement (CWR)
+
+WR often samples words unrelated to the target word owing to the uniform distribution $Q_V$ . To address this problem, we propose CWR that restricts the source of sampling $V$ to $S_{x_i}$ , which consists of two subsets: Substrings and Overlapped Subwords. Substrings contain all the substrings of the target word, whereas Overlapped Subwords contain words that include the target word. Let us consider the target word "da" in "up/da/tion." Substrings are "d" and "a," and Overlapped Subwords are "updat," "at," and "ation," as shown in Figure 1.
+
+We pre-compute Overlapped Subwords for each target word by checking all tokenizations for each training sentence. During this extraction, we merge Overlapped Subwords for the same target word to save the memory footprint, even if the target word appears in different sentences. For example, when the target word "da" appears in "up/da/tion" and "pan/da," we merge "and" in "pan/da" with the set containing "updat," "at," and "ation" as Overlapped Subwords of "da." Algorithm 1 in Appendix overviews the construction of $S_{x_i}$ .
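+
+The following sketch shows one way the candidate sets could be pre-computed from raw sentences by enumerating vocabulary items as character spans and collecting partly overlapping ones, mirroring the description above; the function and variable names are ours and the complexity is left unoptimized.
+
+```python
+from collections import defaultdict
+
+def build_candidates(sentences, vocab):
+    """Collect, for every vocabulary item x, the set S[x] of vocabulary items
+    whose character span partly overlaps an occurrence of x (Overlapped
+    Subwords); substrings of x that are in the vocabulary are caught as well."""
+    S = defaultdict(set)
+    vocab = set(vocab)
+    for sent in sentences:                       # sent is a raw character string
+        spans = []
+        n = len(sent)
+        for i in range(n):
+            for j in range(i + 1, n + 1):
+                if sent[i:j] in vocab:
+                    spans.append((i, j, sent[i:j]))
+        for i, j, x in spans:
+            for k, l, other in spans:
+                if other != x and k < j and i < l:   # character spans overlap
+                    S[x].add(other)
+    return S
+
+S = build_candidates(["updation", "panda"],
+                     ["up", "da", "tion", "updat", "at", "ation", "and", "pan"])
+print(sorted(S["da"]))  # ['and', 'at', 'ation', 'updat']
+```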
+
+WR-L can be combined with CWR by weighting the uniform distribution over $S_{x_i}$ with the Poisson distribution introduced in Section 3.1.
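+
+Putting the pieces together, a CWR-L replacement can be drawn from the restricted candidate set with Poisson weights, as in the short sketch below (again a simplified illustration of ours rather than the authors' code).
+
+```python
+import math
+import random
+
+def cwr_l_sample(target, S, rng=random):
+    """CWR-L: sample from the restricted candidate set S[target], weighted by
+    Poisson(len(candidate); lambda = len(target))."""
+    candidates = sorted(S.get(target, []))
+    if not candidates:                      # no candidates: keep the original word
+        return target
+    lam = len(target)
+    pmf = lambda k: math.exp(-lam) * lam ** k / math.factorial(k)
+    weights = [pmf(len(w)) for w in candidates]
+    return rng.choices(candidates, weights=weights, k=1)[0]
+```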
+
+| Dataset | Vanilla | SR | WD | UTR | LM | WR | WR-L | CWR | CWR-L |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Twitter(En) | 75.51 | 77.52 | 76.27 | 76.35 | 76.53 | 77.14 | 77.64 | 76.11 | 77.79 |
+| + BERT | 82.03 | - | 82.30 | 82.25 | 82.10 | 82.07 | 82.08 | 82.19 | 82.33 |
+| Twitter(Ja) | 86.42 | 86.41 | 86.69 | 86.68 | 87.25 | 87.30 | 87.36 | 86.71 | 87.11 |
+| Weibo(Zh) | 93.10 | 93.18 | 93.53 | 93.65 | 93.21 | 93.44 | 93.41 | 93.24 | 93.70 |
+| Rating(En) | 65.21 | 65.7 | 66.77 | 65.38 | 66.72 | 67.50 | 67.56 | 65.42 | 67.01 |
+| + BERT | 71.30 | - | 71.68 | 71.47 | 71.54 | 71.83 | 71.65 | 71.84 | 72.02 |
+| Rating(Ja) | 52.46 | 52.46 | 53.01 | 52.62 | 53.21 | 53.33 | 53.39 | 52.76 | 53.34 |
+| Rating(Zh) | 48.71 | 49.04 | 48.96 | 48.85 | 49.63 | 49.60 | 49.83 | 49.13 | 49.71 |
+| Genre(En) | 67.69 | 67.81 | 72.42 | 72.47 | 72.27 | 71.55 | 72.19 | 67.83 | 72.76 |
+| + BERT | 77.64 | - | 79.09 | 79.23 | 78.89 | 79.07 | 78.85 | 79.04 | 79.43 |
+| Genre(Ja) | 50.42 | 50.03 | 52.07 | 51.92 | 52.17 | 51.82 | 51.85 | 50.64 | 52.32 |
+| Genre(Zh) | 47.83 | 47.85 | 48.89 | 48.92 | 49.10 | 48.60 | 49.83 | 47.73 | 49.06 |
+| Average w/o BERT | 65.26 | 65.56 | 66.51 | 66.32 | 66.68 | 66.70 | 67.01 | 65.51 | 66.98 |
+| Average w/ BERT | 68.19 | - | 69.31 | 69.15 | 69.39 | 69.44 | 69.64 | 68.55 | 69.72 |
+
+Table 2: Experimental results for text classification tasks averaged over five runs (F1). Bold and underline denote the highest scores and the scores that significantly surpass WR ($p < 0.05$, McNemar's test), respectively.
+
+# 4 Experiment
+
+We conducted experiments on text classification and machine translation. To confirm the effectiveness of our methods, we compared our method with regular training without word-level perturbation (Vanilla) and the following four word-level perturbation techniques in addition to WR:
+
+Subword regularization (SR) samples a tokenization in each training epoch with a pretrained unigram language model. We employed SentencePiece (Kudo, 2018) for SR.
+
+Word Dropout (WD) randomly replaces inputs with zero vectors (Gal and Ghahramani, 2016).
+
+Unknown Token Replacement (UTR) randomly replaces words with unknown tokens (Zhang et al., 2020), i.e., we use an unknown token as $\tilde{x}_i$ in Eq.2.
+
+Language Model (LM) randomly replaces words with words sampled from a language model.
+
+In addition to the proposed methods, WR-L and CWR, we denote the combination of these methods as CWR-L. Table 1 presents perturbed examples for each method. We controlled the above methods, except SR, with the hyperparameter $a$ in Eq. 2. For SR, we controlled the diversity of the sampled tokenization with a hyperparameter, which we refer to as $b$. For all datasets, we determined the perturbation hyperparameters on the validation splits with a grid search ranging from 0.1 to 0.9 in increments of 0.1. Figures 2 and 3 indicate the effects of these variables.
+
+# 4.1 Text Classification
+
+Setup: We employed nine datasets in three languages for text classification. Twitter(En), Twitter(Ja), and Weibo(Zh) are sentiment analysis datasets of short SNS texts in English, Japanese, and Chinese, respectively. Rating and Genre are datasets of rating prediction and genre prediction for e-commerce services: Amazon (He and McAuley, 2016) in English, Rakuten (Rakuten, Inc., 2014) in Japanese, and JD.com (Zhang et al., 2015b) in Chinese. Appendix A describes the preparation of the datasets in detail. We used SentencePiece (Kudo and Richardson, 2018) for tokenization with a vocabulary size of 16K for sentiment analysis and 32K for the others, after pre-tokenizing the Japanese corpus with MeCab (Kudo, 2006) and the Chinese corpus with Jieba (Junyi, 2013). We employed a BiLSTM-based text classifier (Zhou et al., 2016) and trained it on the training split. For the English datasets, we also employed BERT-base (Devlin et al., 2018), a well-known pretrained language model implemented by HuggingFace (Wolf et al., 2020), as the classifier (+BERT).
+
+Results: Table 2 presents the performance of each word-level perturbation method. The results indicate that the proposed perturbation method with the Poisson distribution, WR-L, outperformed the original WR on nine out of 12 datasets. In addition, the combination of our methods, CWR-L, improved the performance on several datasets, including the setting where we employed BERT. The average scores of CWR-L over all datasets were higher than those of the other methods, and the scores of WR-L were comparable to those of CWR-L.
+
+| Corpus | Pair | Vanilla | SR | WD | UTR | LM | WR | WR-L | CWR | CWR-L |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| IWSLT14 | DeEn | 33.92 | 34.75 | 34.81 | 34.84 | 34.46 | 34.68 | 34.91 | 34.73 | 34.90 |
+| | EnDe | 28.02 | 29.04 | 28.91 | 28.94 | 28.67 | 28.72 | 28.83 | 28.59 | 28.95 |
+| IWSLT15 | ViEn | 28.83 | 29.29 | 29.22 | 29.35 | 28.87 | 29.37 | 29.63 | 29.33 | 29.51 |
+| | EnVi | 30.39 | 31.55 | 31.32 | 31.42 | 31.52 | 31.04 | 31.29 | 31.57 | 31.69 |
+| | ZhEn | 20.27 | 21.19 | 20.86 | 20.95 | 18.65 | 20.86 | 21.26 | 21.36 | 21.56 |
+| | EnZh | 14.50 | 15.20 | 15.17 | 15.18 | 14.70 | 15.00 | 15.21 | 15.32 | 15.35 |
+| Average | | 25.99 | 26.84 | 26.72 | 26.78 | 26.15 | 26.61 | 26.86 | 26.82 | 26.99 |
+
+Table 3: Experimental results for the machine translation task averaged over three runs (SacreBLEU (Post, 2018)). Bold and underline denote the highest scores and the scores that significantly surpass WR ($p < 0.05$, bootstrap resampling (Koehn, 2004)), respectively.
+
+By contrast, the method that only considers tokenization, CWR, underperformed the other methods on several datasets. These results demonstrate that WR-L contributes to the performance improvement of text classification, and that additionally considering tokenization, as in CWR-L, helps improve performance further. Among the baseline methods, WR and LM ranked first in terms of the average score, whereas SR did not show a significant improvement on most datasets.
+
+# 4.2 Machine Translation
+
+Setup: For machine translation, we employed the Transformer (Vaswani et al., 2017) implemented in fairseq (Ott et al., 2019) with its IWSLT setting. We conducted experiments on the De-En, Vi-En, and Zh-En language pairs of the IWSLT corpora because previous studies reported that word-level perturbation is particularly effective in low-resource settings (Kudo, 2018). We tokenized each corpus using SentencePiece with a vocabulary size of 36K, and we pre-tokenized the Chinese corpus with Jieba. We trained the models for 50 epochs and chose the best model using the validation loss.
+
+Results: Table 3 shows the results of each perturbation method for machine translation. The scores of SR were higher than those of the other baseline methods. CWR achieved competitive scores against SR, even though it does not strictly sample tokenization. Moreover, WR-L surpassed SR, and CWR-L achieved the highest performance in five out of six language pairs. These results indicate that the perturbation considering tokenization (SR, CWR) is effective for machine translation, and the methods considering the sampled length (WR-L, CWR-L) have a greater effect on the performance.
+
+# 5 Discussion
+
+# 5.1 Performance against Hyperparameters
+
+In Section 4, we reported the performance with the hyperparameter that yielded the highest performance on the validation split for each method.
+
+
+Figure 2: Average performance on the test splits over the nine datasets, excluding the experiments with BERT.
+
+To confirm the sensitivity of each method to its hyperparameter, we report the average performance over the nine text classification datasets used in Section 4.1 for each hyperparameter value covered by the grid search. As shown in Figure 2, CWR-L outperformed the other perturbation methods at most values. Although WR and LM achieved the highest performance among the baselines, their performance curves were much more peaked. The peak performance of WR-L was higher than that of WR and competitive with LM, especially at the lower hyperparameter values that are often selected. These results indicate that LM, WR, and WR-L are sensitive to the hyperparameter. Although the scores of CWR are almost the same as the vanilla performance, CWR-L is a tractable perturbation approach because its performance does not depend strongly on the hyperparameter. This demonstrates that using the Poisson distribution for sampling is effective for a stable performance improvement.
+
+# 5.2 Perturbation Speed
+
+We aimed to develop a fast and effective perturbation method. In this subsection, we report the speed of the perturbation on the entire training dataset of the Amazon corpus used in Section 4.1, which contains 96,000 sentences (84.91 words per sentence).
+
+
+Figure 3: Average time to process 10K sentences in the training data of the Amazon corpus over 10 runs.
+
+Figure 3 shows the averaged processing time over 10 runs for each perturbation method. Our methods were slightly slower than WR and LM because they add a step that restricts the sampling candidates on top of WR. By contrast, our methods were much faster than SR. This result indicates that the proposed methods, especially CWR-L, are better alternatives from the perspectives of both processing speed and performance.
+
+# 6 Conclusion
+
+We propose a fast and effective alternative for word-level perturbation. The experimental results showed that the proposed method, CWR-L, improved the performance of text classification and machine translation, particularly with the sampling strategy using Poisson distribution. We also empirically showed that CWR-L is more robust to hyperparameters than other perturbation methods and is faster than SR.
+
+# Ethical Considerations
+
+Because word-level perturbation involves stochastic behaviour, the experimental results depend on random seeds. Ideally, a large number of trials would be required to compare the methods reliably. However, owing to limited computational resources, we averaged the results of five trials for text classification and three trials for machine translation.
+
+Word-level perturbation can be seen as a variation of data augmentation. Therefore, the effectiveness of word-level perturbation might be small when the training corpus is significantly large. However, this work does not discuss this point because preparing such a large training corpus is difficult.
+
+# Acknowledgments
+
+This paper is based on results obtained from a project, JPNP18002, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
+
+# References
+
+Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 1, pages 1171-1179.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. Advances in neural information processing systems, 29:1019-1027.
+Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, pages 507-517.
+Tatsuya Hiraoka, Hiroyuki Shindo, and Yuji Matsumoto. 2019. Stochastic tokenization with a language model for neural text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1620-1629.
+Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, and Naoaki Okazaki. 2020. Optimizing word segmentation for downstream task. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1341-1351, Online. Association for Computational Linguistics.
+Tatsuya Hiraoka, Sho Takase, Kei Uchiumi, Atsushi Keyaki, and Naoaki Okazaki. 2021. Joint optimization of tokenization and downstream model. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 244-255.
+Sun Junyi. 2013. jieba. https://github.com/fxsjy/jieba.
+Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452-457.
+Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 388-395.
+
+Taku Kudo. 2006. MeCab: Yet another part-of-speech and morphological analyzer. http://taku910.github.io/mecab/.
+Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66-75.
+Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71.
+Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested pitman-yor language modeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 100-108. Association for Computational Linguistics.
+Masaaki Nagata. 1996. Automatic extraction of new words from Japanese texts using generalized forward-backward search. In Conference on Empirical Methods in Natural Language Processing.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
+Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
+Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. BPE-dropout: Simple and effective subword regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1882-1892, Online. Association for Computational Linguistics.
+Rakuten, Inc. 2014. Rakuten dataset. Informatics Research Data Repository, National Institute of informatics. (dataset).
+Sho Takase and Shun Kiyono. 2021. Rethinking perturbations in encoder-decoders for fast training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5767-5780, Online. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30:5998-6008.
+
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+Dongxu Zhang and Zhichao Yang. 2018. Word embedding perturbation for sentence classification. arXiv preprint arXiv:1804.08166.
+Huao Zhang, Shigui Qiu, Xiangyu Duan, and Min Zhang. 2020. Token drop mechanism for neural machine translation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4298-4303, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. arXiv preprint arXiv:1502.01710.
+Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015a. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649-657.
+Yongfeng Zhang, Min Zhang, Yi Zhang, Guokun Lai, Yiqun Liu, Honghui Zhang, and Shaoping Ma. 2015b. Daily-aware personalized recommendation based on feature-level time series analysis. In Proceedings of the 24th international conference on world wide web, pages 1373-1383.
+Peng Zhou, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, and Bo Xu. 2016. Text classification improved by integrating bidirectional LSTM with two-dimensional max pooling. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3485-3495.
+
+
+Figure 4: Distribution of the length of replaced words on the Amazon dataset sampled with (a) WR and (b) WR-L. The figure shows that WR-L samples words whose length is similar to that of the target word.
+
+| Algorithm 1: Algorithm for Building Candidates |
+| --- |
+| 1: S ← Empty Dictionary of Sets |
+| 2: for Each Sentence in Training Data do |
+| 3: for Each Substring x ∈ V in Sentence do |
+| 4: for Each Substring x̂ ∈ V in Sentence do |
+| 5: if x̂ Partly Overlaps with x then |
+| 6: ADD x̂ to Sx |
+| 7: end if |
+| 8: end for |
+| 9: end for |
+| 10: end for |
+
+# A Dataset Preparation
+
+In Section 4.1, we used nine datasets for text classification. We exploited the default settings for Twitter(En) and Weibo(Zh), but we preprocessed the other datasets. Twitter(En) contains 100,000 tweets and Weibo(Zh) contains 671,052 samples.
+
+Twitter(Ja): We collected 352,554 tweets using the Twitter API and used the 162,184 tweets that had one sentiment label (positive: 10,319, negative: 16,035, or neutral: 135,830).
+
+Rating&Genre(En): From the published Amazon dataset, we sampled 5,000 reviews for each of the 24 product genres that contained sufficient reviews. We counted the number of words using whitespaces, and we only extracted reviews whose length was less than 200 words. The total number of reviews was 120,000. We created datasets for Rating(En) and Genre(En) from the same reviews.
+
+Rating&Genre(Ja): From the published Rakuten dataset, we sampled 5,000 reviews for each of the five rates and 21 genres that contained a sufficient number of reviews. We limited the maximum length of reviews to 100 characters, and the total number of reviews was 525,000. We created datasets for Rating(Ja) and Genre(Ja) from the same reviews.
+
+Rating&Genre(Zh): From the published JD.com dataset, we sampled 6,000 reviews for each of the five rates and 13 genres that contained a sufficient number of reviews. We limited the maximum length of reviews to 100 characters, and the total number of reviews was 390,000. We created datasets for Rating(Zh) and Genre(Zh) from the same reviews.
+
+We divided all the datasets in a ratio of 8:1:1 to obtain the training, validation, and test sets.
+
+# B Environment
+
+In all the experiments, we implemented the proposed method with PyTorch. We ran all the experiments on a machine with an NVIDIA Tesla V100 (16 GiB) GPU and Intel Xeon E5-2680 V4 processor (Broadwell-EP, 14 cores, 2.4 GHz).
+
+# C Implementation
+
+We employed the Poisson distribution to sample a replacement word by considering the word length, as expressed in Eq. 3. The sampling process using a non-uniform distribution takes a much longer time than sampling using a uniform distribution. Therefore, we avoided sampling using a nonuniform distribution via random sampling from a candidate list that reflects the Poisson distribution. We prepared a candidate list of a specified size $K$ that contains replacement candidates with a Poisson distribution ratio for each target word. For example, when the replacement candidates of a word "A" are "B" and "C" with the probabilities of 0.4 and 0.6, respectively, the candidate list is "[B, B, C, C, C]" ( $K = 5$ ). Sampling a word from this list can avoid the use of nonuniform distributions; thus, our method can be implemented as quickly as the proposed method without the Poisson distribution. In our implementation, the size of the list $K$ was 1,000 for all the experiments.
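+
+A sketch of this trick is shown below; the helper name is ours and the toy probabilities follow the example in the text. Uniform indexing into the expanded list then replaces weighted sampling.
+
+```python
+import random
+
+def build_candidate_list(candidates_with_probs, K=1000):
+    """Expand {candidate: probability} into a list of roughly K elements whose
+    counts are proportional to the probabilities, so that a uniform draw from
+    the list approximates the Poisson-weighted distribution."""
+    expanded = []
+    for word, prob in candidates_with_probs.items():
+        expanded.extend([word] * round(prob * K))
+    return expanded
+
+lst = build_candidate_list({"B": 0.4, "C": 0.6}, K=5)
+print(lst)                 # ['B', 'B', 'C', 'C', 'C']
+print(random.choice(lst))  # O(1) uniform draw instead of weighted sampling
+```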
+
+| Dataset | SR | WD | UTR | LM | WR | WR-L | CWR | CWR-L |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Text Classification | | | | | | | | |
+| Twitter(En) | 0.2 | 0.5 | 0.4 | 0.4 | 0.4 | 0.4 | 0.4 | 0.5 |
+| +BERT | - | 0.3 | 0.1 | 0.1 | 0.2 | 0.3 | 0.2 | 0.2 |
+| Twitter(Ja) | 0.8 | 0.5 | 0.4 | 0.3 | 0.4 | 0.4 | 0.4 | 0.4 |
+| Weibo(Zh) | 0.9 | 0.3 | 0.4 | 0.1 | 0.2 | 0.2 | 0.1 | 0.4 |
+| Rating(En) | 0.1 | 0.4 | 0.3 | 0.3 | 0.3 | 0.4 | 0.5 | 0.5 |
+| +BERT | - | 0.4 | 0.1 | 0.3 | 0.3 | 0.4 | 0.4 | 0.2 |
+| Genre(En) | 0.3 | 0.6 | 0.7 | 0.3 | 0.3 | 0.3 | 0.5 | 0.5 |
+| +BERT | - | 0.5 | 0.5 | 0.4 | 0.4 | 0.3 | 0.5 | 0.5 |
+| Rating(Ja) | 0.8 | 0.3 | 0.4 | 0.2 | 0.3 | 0.3 | 0.1 | 0.4 |
+| Genre(Ja) | 0.7 | 0.5 | 0.5 | 0.2 | 0.1 | 0.2 | 0.5 | 0.4 |
+| Rating(Zh) | 0.5 | 0.4 | 0.4 | 0.2 | 0.2 | 0.2 | 0.7 | 0.3 |
+| Genre(Zh) | 0.3 | 0.3 | 0.4 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |
+| Machine Translation | | | | | | | | |
+| DeEn | 0.5 | 0.2 | 0.1 | 0.1 | 0.1 | 0.1 | 0.2 | 0.4 |
+| EnDe | 0.5 | 0.2 | 0.2 | 0.1 | 0.1 | 0.2 | 0.1 | 0.1 |
+| ViEn | 0.5 | 0.2 | 0.2 | 0.1 | 0.2 | 0.3 | 0.2 | 0.5 |
+| EnVi | 0.5 | 0.3 | 0.2 | 0.1 | 0.2 | 0.2 | 0.2 | 0.4 |
+| ZhEn | 0.5 | 0.2 | 0.1 | 0.1 | 0.1 | 0.2 | 0.3 | 0.1 |
+| EnZh | 0.4 | 0.3 | 0.3 | 0.2 | 0.2 | 0.1 | 0.4 | 0.2 |
+
+Table 4: Hyperparameters selected on the validation split for each of the experiments reported in Tables 2 and 3.
\ No newline at end of file
diff --git a/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/images.zip b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1aacd0f33f585e78ce2a489535bb6ff8461ef4a9
--- /dev/null
+++ b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9aa9073f93d132661cd51da0899086729b0fe397f2beac43c2ecbe26c90cb599
+size 428485
diff --git a/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/layout.json b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b6227dc255bf2420d8093dae0471688b765614d6
--- /dev/null
+++ b/wordlevelperturbationconsideringwordlengthandcompositionalsubwords/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0c2102c924f69ef16af75eea822ff6e0d10520051a359415209abb0c98b6276
+size 237143
diff --git a/wordsegmentationbyseparationinferenceforeastasianlanguages/ce9c11ed-f6ee-40ec-9189-ea220f84820f_content_list.json b/wordsegmentationbyseparationinferenceforeastasianlanguages/ce9c11ed-f6ee-40ec-9189-ea220f84820f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0741622d6f33b3313a5ea93fac107b7515f29310
--- /dev/null
+++ b/wordsegmentationbyseparationinferenceforeastasianlanguages/ce9c11ed-f6ee-40ec-9189-ea220f84820f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d116cb12ee7dd16a2b21de52c06ee2a1f215037e594db0749a77e71e1278f75f
+size 74920
diff --git a/wordsegmentationbyseparationinferenceforeastasianlanguages/ce9c11ed-f6ee-40ec-9189-ea220f84820f_model.json b/wordsegmentationbyseparationinferenceforeastasianlanguages/ce9c11ed-f6ee-40ec-9189-ea220f84820f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1c147e0f6bf8960aff7db76e8de4720a68c5206e
--- /dev/null
+++ b/wordsegmentationbyseparationinferenceforeastasianlanguages/ce9c11ed-f6ee-40ec-9189-ea220f84820f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15775d74e4bc8ea230fe729eadaf8b7dd68ca5d4ab83c9a796c9c90dd78229c4
+size 93254
diff --git a/wordsegmentationbyseparationinferenceforeastasianlanguages/ce9c11ed-f6ee-40ec-9189-ea220f84820f_origin.pdf b/wordsegmentationbyseparationinferenceforeastasianlanguages/ce9c11ed-f6ee-40ec-9189-ea220f84820f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3721490b7fb7959e7b96f0e9df8025e275c394db
--- /dev/null
+++ b/wordsegmentationbyseparationinferenceforeastasianlanguages/ce9c11ed-f6ee-40ec-9189-ea220f84820f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd7ec96615d70fbe17714e1eb54c313afc186313ecc2a6517ef637daa10e71e7
+size 1503905
diff --git a/wordsegmentationbyseparationinferenceforeastasianlanguages/full.md b/wordsegmentationbyseparationinferenceforeastasianlanguages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..34540fc63be20610a0b30e37869baf21c518436d
--- /dev/null
+++ b/wordsegmentationbyseparationinferenceforeastasianlanguages/full.md
@@ -0,0 +1,291 @@
+# Word Segmentation by Separation Inference for East Asian Languages
+
+Yu Tong†, Jingzhi Guo†*, Jizhe Zhou‡, Ge Chen§, Guokai Zheng§
+
+$^{\dagger}$ Department of Computer Science, University of Macau, Macau, China
+
+$^{\ddagger}$ Department of Computer Science, Sichuan University, China
+
+$\S$ vivo AI Lab, Shenzhen, China
+
+$\ddagger \ddagger$ {yb87462, jzguo, yb87409}@umac.mo
+
+$\S$ {gechen.nlp, gkzheng.nlp}@gmail.com
+
+# Abstract
+
+Chinese Word Segmentation (CWS) intends to divide a raw sentence into words through sequence labeling. Thinking in reverse, CWS can also be viewed as a process of grouping a sequence of characters into a sequence of words. In such a way, CWS is reformed as a separation inference task in every adjacent character pair. Since every character is either connected or not connected to the others, the tagging schema is simplified as two tags "Connection" (C) or "NoConnection" (NC). Therefore, bigram is specially tailored for "C-NC" to model the separation state of every two consecutive characters. Our Separation Inference (SpIn) framework is evaluated on five public datasets, is demonstrated to work for machine learning and deep learning models, and outperforms state-of-the-art performance for CWS in all experiments. Performance boosts on Japanese Word Segmentation (JWS) and Korean Word Segmentation (KWS) further prove the framework is universal and effective for East Asian Languages.
+
+# 1 Introduction
+
+In Natural Language Processing (NLP), word segmentation is the commencement of Part-of-Speech (POS) tagging, semantic role labeling (SRL), and other similar studies. Particularly for Chinese, Japanese and Korean languages, the absence of explicit boundaries between characters makes the Word Segmentation (WS) task indispensable in NLP tasks. Dominant word segmentation methods considered WS as a sequence tagging task (Xue, 2003). Various tagging schemas such as "BMES" (Begin, Middle, End, Single), "BIES" (Begin, Inside, End, Single), "SEP-APP" (Separate, Append), "BI" (Begin, Inside), and "START-NONSTART" were employed to tackle the sequence labeling
+
+task. These tagging schemas are all character-based and can be summarized as four-tag schemas ("BMES", "BIES") and two-tag schemas ("SEP-APP", "BI", "START-NONSTART"). Despite their diversity, they all carry implicit position information. For four-tag schemas, the implicit information restricts the transitions between tags. Take "BMES" as an example; tag "B" cannot be followed by "B" or "S". These two schemas heavily rely on the precise prediction of the relative position of each character within a segment. However, the exact position information is not essential for the WS task: any unreasonable inner prediction of a character's relative position results in an incorrect segmentation, even when the boundary prediction is correct. There is no limitation on tag-to-tag transitions for the two-tag schemas, but the first character of a sentence must be predicted as "SEP", "B" or "START". The implicit positional constraint on the first tag of the sentence therefore still exists, and it is necessary to ensure the prediction accuracy of the first tag during inference. Consequently, a CRF is required to revise unreasonable tag-to-tag transitions and to learn the implicit restrictions, including the one on the first tag of a sentence. The CRF alleviates unreasonable tag predictions to some degree, but the simultaneous learning of the transition and emission matrices still makes tag inference intractable. Current works attempt to complicate the network (Chen et al., 2017; Tian et al., 2020) and introduce more information (Cai et al., 2017), such as rich context, linguistic, and external knowledge, to tackle the abovementioned problem. However, the intrinsic problem, namely the implicit positional restriction in the existing tagging schemas, is not well solved. In this paper, we propose "Connection (C)-No-Connection (NC)", which targets character-to-character connections, to deal with the WS task directly. "C-NC" is independent of the previous state, and there is no dependency between states.
+
+Moreover, there is no restriction for the first state as it is located between the first and the secondary characters. It can be either "C" or "NC". "C" or "NC" is a binary classification task. Therefore, CRF is not required and can then be substituted with a classification network. The tag-transition and implicit restriction burdens can be substantially alleviated through such "C-NC" states. Because "C-NC" describes the connection state between two adjacent characters, we employ bigram features to cooperate with the "C-NC". Compared with existing tagging schemas, which are character-based and the bigram features are considered as extra information, the bigram features in SpIn are the basic processing unit. Therefore, a brand-new Separation Inference (SpIn) framework is proposed and constructed on the bigram features and the classification layer. Sliding one-after-one along all the bigrams, words are yielded by allocating "C" and "NC" tags in the interval of characters. SpIn significantly reduces the inference complexity (inference layer CRF is degraded as the softmax network); dispels extra context information (merely bigram feature is in consideration); and gains competitive performance of CWS on the machine learning in contrast with the deep learning models. Besides its effectiveness on Chinese Word Segmentation, our extensive experiments also verify the universality by attaining state-of-the-art (SOTA) performance in Japanese and Korean Word Segmentation benchmark tests. Our contributions are summarized as follows:
+
+- SpIn provides a new tagging schema from a novel perspective and solves the intrinsic problems of the existing tagging schemas.
+- SpIn is a universal framework that gains state-of-the-art performance on the Word Segmentation task in East Asian Languages.
+- The SpIn framework is also suitable for machine learning models and has achieved competitive results.
+
+# 2 Related Work
+
+Researchers have explored the CWS task from various directions since the 1990s (Sproat et al., 1996). Widely applied methodologies considered it as the sequence tagging task based on various label schemas. CWS was first treated as a sequence
+
+tagging task in (Xue, 2003). The Maximum Entropy (Low et al., 2005) model and the CRF (Lafferty et al., 2001) were the most adopted sequence tagger. There are two main problems in the WS task: the ambiguities and the Out-of-Vocabulary (OOV) words. Researchers tried to leverage extra context features such as the bigram (Zhao et al., 2006; Chen et al., 2015; Pei et al., 2014; Yang et al., 2017; Zhang et al., 2013) and the word features (Morita et al., 2015; Zhang et al., 2016; Zhang and Clark, 2007) to tackle word ambiguities and improve the model's generalization capability. Moreover, language-specific knowledge such as dictionaries was employed (Sun and Xu, 2011) for better CWS. Extra punctuation marks from large manually segmented corpus were introduced to the learning model and proved effective for solving the unknown words (Li and Sun, 2009). Meanwhile, the external knowledge was explored through the semi-supervised models for better segmentation (Sun and Xu, 2011; Wang et al., 2011; Liu and Zhang, 2012; Zhang et al., 2013). Along with the development of pre-trained models like BERT (Devlin et al., 2018), ELMo (Peters et al., 2018), and GPT (Radford et al., 2018), striking improvements on CWS are observed by replacing the feature extraction layer with these powerful pretrained models. Except for the investigation of the effect of features, various tagging schemas were also discussed. Widely applied tagging schema in CWS contains "BMES" (Meng et al., 2019; Huang et al., 2020; Yang et al., 2019, 2017), "BIES" (Ma et al., 2018), "SEP-APP" (Zhang et al., 2016, 2018; Yan et al., 2020), "BI" (Lee and Kim, 2013), and "START-NONSTART" (Tseng et al., 2005; Peng et al., 2004). There is either the limitation of tag-to-tag transitions or the implicit constraint for the first tag for these tagging schemas. These inherent problems were not well solved. Hence, we propose the SpIn framework constructed on the "C-NC" tagging schema and its specially tailored bigram features. SpIn eliminates the implicit restriction of existing tagging schemas and boosts the performance of the WS task in East Asian languages.
+
+# 3 Proposed Method
+
+We propose adopting the bigram feature to adapt to the "C-NC" tagging schema to model the connection of adjacent characters. Distinguished from character-based models leveraging bigram feature as extra information, merely bigram is employed
+
+
+
+
+Figure 1: The architecture of SpIn applied to the machine learning model. The features are constructed from the bigram and symbol features by applying the feature templates.
+
+Figure 2: Comparison between the traditional two-tags tagging schema and "C-NC". The traditional two-tags schema (upper) is tagged on each character, whereas "C-NC" (bottom) is located in the interval between characters.
+
+and set up as the input unit. The adaptation of SpIn involves both machine learning and deep learning models. Figure 1 and Figure 5 summarize the SpIn framework architecture adapted to the machine learning and deep learning models, respectively.
+
+Before exploring the structure of SpIn, we first elaborate on the definition of the proposed "C-NC" and its distinction from the traditional two-tags tagging schema, which indicates whether the current character is a boundary or not. In the remainder of this section, we present the detailed structure of SpIn, including how to apply the framework to machine learning and deep learning models. For machine learning, we explain how to build features from bigrams by applying feature templates; for deep learning, we present how to build the bigram features on top of the feature extraction layer. In the last subsection, we describe the inference layer.
+
+# 3.1 Connection and No-Connection Tagging Schema
+
+Tags "Connection" and "No-Connection" are proposed to model whether two adjacent characters (bigram) are in the same segment or not. If two characters in the bigram are not in the same segment, the corresponding label is "NC"; otherwise, the tag is "C".
+
+Borrow "C-NC" to model traditional two-tags tagging schema indicating the current character as the beginning of a word or the continuation. The tagging procedure is illustrated in the upper section in Figure 2. By contrast, "C-NC" represents the connection state of two adjacent characters as illustrated in the lower section. Comparison between traditional two-tags and "C-NC" is summarized from three aspects:
+
+- Traditional two-tags tagging schemas are labeled on each character. However, the tag "C" or "NC" is located in the interval between two characters.
+- The total number of tags of "C-NC" is one less than the traditional two-tags tagging schema.
+- The implicit restriction of the first character in a sentence exists for the traditional tagging schema. In contrast, there is no limitation of the first state for the "C-NC".
+
+# 3.2 Feature Templates for Machine Learning
+
+Feature engineering directly results in the model performance for machine learning models. Therefore, we leverage the bigrams and symbol information to enrich features by applying feature templates. We define the feature templates below:
+
+| Type | Feature | Example | Description |
+| --- | --- | --- | --- |
+| bigram | current_bigram | 市长(Mayor) | the current bigram |
+| unigram | bigram_head | 市(City) | the head token of the current bigram |
+| unigram | bigram_tail | 长(Yangtze) | the tail token of the current bigram |
+| date,digit,letter | bigram_head.is_symbol | [0,0,0] | whether the head token is a symbol or not |
+| date,digit,letter | bigram_tail.is_symbol | [0,0,0] | whether the tail token is a symbol or not |
+| bigram | pre_bigram | 京市(Jing City) | the previous bigram of the current bigram |
+| date,digit,letter | pre_bigram.is_symbol | [0,0,0] | whether the previous bigram is a symbol or not |
+| bigram | pre_pre_bigram | 南京(Nanjing) | the previous bigram of the previous bigram |
+| date,digit,letter | pre_pre_bigram.is_symbol | [0,0,0] | whether it is a symbol or not |
+| bigram | next_bigram | 长江(Yangtze River) | the next bigram of the current bigram |
+| date,digit,letter | next_bigram.is_symbol | [0,0,0] | whether the next bigram is a symbol or not |
+| bigram | next_next_bigram | 江大(River Big) | the next bigram of the next bigram |
+| date,digit,letter | next_next_bigram.is_symbol | [0,0,0] | whether it is a symbol or not |
+
+Figure 3: The figure is the explanation of the element features.
+
+| Feature | Example | Description |
+| --- | --- | --- |
+| Feature(0) | 市长+市+长+[0,0,0]+[0,0,0] | represents the feature of the current bigram |
+| Feature(-1) | 京市+[0,0,0] | represents the feature of the previous bigram |
+| Feature(-2) | 南京+[0,0,0] | represents the feature of the previous bigram of the previous bigram |
+| Feature(+1) | 长江+[0,0,0] | represents the feature of the next bigram |
+| Feature(+2) | 江大+[0,0,0] | represents the feature of the next bigram of the next bigram |
+
+Figure 4: The figure is the explanation of generated features through applying feature templates.
+
+- Feature(0) = current_bigram + bigram_head + bigram_tail + bigram_head.is_symbol + bigram_tail.is_symbol
+- Feature(-1) = pre_bigram + pre_bigram.is_symbol
+- Feature(-2) = pre_pre_bigram + pre_pre_bigram.is_symbol
+- Feature(+1) = next_bigram + next_bigram.is_symbol
+- Feature(+2) = next_next_bigram + next_next_bigram.is_symbol
+
+Figure 3 explains the element feature. The symbol feature is a one-dimensional array. It indicates whether the character belongs to symbols or not. The symbols include the date, digit, or letter. Figure 4 illustrates the generated features through applying feature templates for the current bigram. The final features are the concatenation of Feature(0), Feature(-1), Feature(-2), Feature(+1) and Feature(+2).
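+
+As an illustration of how these templates could be instantiated in code, the sketch below builds the five feature groups for the bigram at position $i$; the symbol check is a simplified placeholder (the date test is omitted) and all names are ours.
+
+```python
+def is_symbol(s):
+    """Toy symbol indicator [date, digit, letter]; the date check is a placeholder."""
+    return [0, int(s.isdigit()), int(s.isalpha() and s.isascii())]
+
+def bigram_features(chars, i):
+    """Instantiate Feature(0), Feature(-1), Feature(-2), Feature(+1), and
+    Feature(+2) for the bigram (chars[i], chars[i+1]); out-of-range bigrams
+    become '<PAD>'."""
+    def bigram(j):
+        if 0 <= j and j + 1 < len(chars):
+            return chars[j] + chars[j + 1]
+        return "<PAD>"
+    cur, head, tail = bigram(i), chars[i], chars[i + 1]
+    return {
+        "Feature(0)": [cur, head, tail, is_symbol(head), is_symbol(tail)],
+        "Feature(-1)": [bigram(i - 1), is_symbol(bigram(i - 1))],
+        "Feature(-2)": [bigram(i - 2), is_symbol(bigram(i - 2))],
+        "Feature(+1)": [bigram(i + 1), is_symbol(bigram(i + 1))],
+        "Feature(+2)": [bigram(i + 2), is_symbol(bigram(i + 2))],
+    }
+
+print(bigram_features(list("南京市长江大桥"), 2))  # features for the bigram 市长
+```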
+
+# 3.3 Feature Extraction Layer
+
+As recent state-of-the-art results on CWS tasks are achieved by applying BERT (Devlin et al., 2018) as the feature extraction layer, we follow the same practice. Moreover, we customize the features by adding an additional symbol feature. Through symbol projection, each character is projected into a one-dimensional array such as $[0,0,1]$, where the positions represent [date, digit, letter]; this example indicates that the current character is a letter. A ReLU activation function follows, producing a symbol embedding of size 3, denoted as $S_{n}$. The character embedding generated from BERT is a 768-dimensional vector (denoted as $c_{n}$) and is resized to $(768 + 3)$ by concatenation with the symbol embedding. The customized character embedding is represented as $e_{n}$. Two adjacent character embeddings, each with its symbol embedding, are concatenated as bigram features. Hence, the corresponding bigram features (denoted as $b_{n}$) have the size $(768 + 3) \times 2$. Two fully connected layers follow the constructed bigram features, and the CRF layer (or a softmax layer) is employed as the inference layer.
+
+
+Figure 5: The architecture of SpIn applied to the deep learning model. Orange circles below the BERT are the unigram features for each character. Pink circles are the symbol features generated through symbols projection and a ReLU activation function. "+" is the concatenation operation. The unigram features concatenate with symbol features. Dark green circles are bigram features generated after concatenating every two light green circles.
+
+ployed as the inference layer. The architecture of SpIn that is applied to the deep learning model is shown in Figure 5.
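+
+The description above maps naturally onto a small PyTorch module. The sketch below is our own illustration rather than the authors' code; the hidden size of the two fully connected layers and the helper names are assumptions:
+
+```python
+import torch
+import torch.nn as nn
+
+class SpInFeatureLayer(nn.Module):
+    """Sketch of the feature extraction layer in Section 3.3 (illustrative, not the released code)."""
+
+    def __init__(self, bert, hidden=256, num_tags=2):
+        super().__init__()
+        self.bert = bert                              # a BERT encoder with 768-dimensional outputs
+        self.symbol_proj = nn.Linear(3, 3)            # projects the [date, digit, letter] indicator
+        self.fc1 = nn.Linear((768 + 3) * 2, hidden)   # two FC layers over the bigram features
+        self.fc2 = nn.Linear(hidden, num_tags)        # logits for the "C-NC" separation states
+
+    def forward(self, input_ids, attention_mask, symbol_feats):
+        # character embeddings c_n from BERT: (batch, seq_len, 768)
+        c = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
+        # symbol embeddings S_n: ReLU over the projected 3-dim indicator
+        s = torch.relu(self.symbol_proj(symbol_feats.float()))
+        e = torch.cat([c, s], dim=-1)                 # customized character embedding e_n, size 768 + 3
+        # bigram features b_n: concatenation of every two adjacent character embeddings
+        b = torch.cat([e[:, :-1, :], e[:, 1:, :]], dim=-1)
+        return self.fc2(torch.relu(self.fc1(b)))      # fed to the CRF (or softmax) inference layer
+
+# usage sketch:
+# from transformers import BertModel
+# layer = SpInFeatureLayer(BertModel.from_pretrained("bert-base-chinese"))
+```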
+
+# 3.4 Inference Layer
+
+Following previous work (Tseng et al., 2005; Peng et al., 2004), the CRF (Lafferty et al., 2001) layer is adopted as the inference layer of the machine learning model for a fair comparison. The CRF finds the optimal tag sequence $Y'$ for the input sequence $X$, where:
+
+$$
+Y ^ {\prime} = \underset {Y \in L ^ {n}} {\arg \max } P (Y | X) \tag {1}
+$$
+
+$$
+P(Y|X) = \frac{1}{Z(x)} \exp\left(\sum_{i,k} \lambda_{k} t_{k}(y_{i-1}, y_{i}, x, i) + \sum_{i,l} \mu_{l} s_{l}(y_{i}, x, i)\right) \tag{2}
+$$
+
+$L^n$ denotes all possible tag sequences, $Z$ is the normalization factor, $t_k$ and $s_l$ are the transition and state feature functions, and $\lambda_k$, $\mu_l$ are trainable parameters.
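+
+As a toy illustration of Equation (1), the decoder below enumerates every tag sequence and keeps the highest-scoring one, collapsing the weighted feature sums of Equation (2) into per-position state scores and a transition matrix; real implementations use Viterbi decoding, and the label names are only placeholders:
+
+```python
+import itertools
+
+def crf_argmax(emissions, transitions, labels=("C", "NC")):
+    """Brute-force arg max over tag sequences using unnormalized state and transition scores."""
+    n = len(emissions)
+    best_seq, best_score = None, float("-inf")
+    for seq in itertools.product(range(len(labels)), repeat=n):
+        score = sum(emissions[i][seq[i]] for i in range(n))                 # state terms
+        score += sum(transitions[seq[i - 1]][seq[i]] for i in range(1, n))  # transition terms
+        if score > best_score:
+            best_seq, best_score = seq, score
+    return [labels[t] for t in best_seq]
+
+# toy scores for a three-bigram sentence
+print(crf_argmax(emissions=[[1.2, -0.3], [0.1, 0.9], [0.4, 0.2]],
+                 transitions=[[0.5, -0.1], [-0.2, 0.3]]))
+```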
+
+# 4 Experiments
+
+Evaluation is first conducted on CWS to demonstrate the SOTA performance of SpIn. Contrast experiments involve both machine learning and deep learning models to further demonstrate the robustness of SpIn. An ablation study is conducted to investigate the effect of each component.
+
+# 4.1 Datasets
+
+Five Chinese word segmentation datasets are evaluated in the experiments, including Chinese Penn Treebank 6.0 (CTB6) (Xue et al., 2005) and CITYU, AS, PKU, MSR from SIGHAN 2005 bake-off task (Emerson, 2005). PKU, MSR, and CTB6 are simplified Chinese, and the other two AS and CITYU are traditional Chinese.
+
+# 4.2 Evaluation of Machine Learning Model
+
+# 4.2.1 Parameters & Evaluation Metrics
+
+We use L-BFGS as the optimization algorithm for the CRF layer. The L1 regularization coefficient is 0.598, the L2 regularization coefficient is 0.0323, and the maximum number of iterations is 150. Following widely accepted evaluation methodology, the F1 score is adopted as the evaluation metric.
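+
+The paper does not name the CRF toolkit; as a minimal sketch, the reported settings map onto sklearn-crfsuite as follows (the toy features and tags are illustrative, and we read the L1/L2 norms as regularization weights):
+
+```python
+import sklearn_crfsuite
+
+# toy data: one dict of template features per bigram, one separation tag per bigram
+X_train = [[{"Feature(0)": "南京+南+京", "Feature(+1)": "京市"},
+            {"Feature(0)": "京市+京+市", "Feature(-1)": "南京"}]]
+y_train = [["NC", "C"]]   # placeholder tags following the "C-NC" schema
+
+crf = sklearn_crfsuite.CRF(
+    algorithm="lbfgs",    # L-BFGS optimization
+    c1=0.598,             # L1 regularization weight
+    c2=0.0323,            # L2 regularization weight
+    max_iterations=150,
+)
+crf.fit(X_train, y_train)
+print(crf.predict(X_train))
+```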
+
+# 4.2.2 Experiment Results
+
+The evaluation results of SpIn adapted to the machine learning model are listed in Table 1. For a fair comparison, the baseline is selected from the paper in which the machine learning model is applied. Compared with the baseline which is the best result of Bakeoff2005 $^{2}$ , SpIn achieves a significant improvement up to $+1.3\%$ F1 score on the AS dataset. Likewise, SpIn performs better on all similar longitudinal comparisons conducted on the CITYU and MSR datasets.
+
| Model | CITYU | AS | PKU | MSR | CTB6 |
| Baseline | 94.3 | 95.2 | 95.0 | 96.4 | - |
| SpIn_PL | 95.5 | 96.5 | 94.6 | 96.5 | 96.0 |
| Δ | +1.2 | +1.3 | -0.4 | +0.1 | - |
+
+Table 1: SpIn of Machine Learning version (SpIn_PL) v.s. the best results of SIGHAN 2005 Bakeoff. The F1 score is employed as the metric.
+
| Schema | CITYU | AS | PKU | MSR | CTB6 |
| BMES | 94.4 | 94.7 | 91.3 | 95.8 | 95.2 |
| BIS | 95.2 | 95.6 | 91.8 | 96.2 | 95.7 |
| BI | 93.5 | 93.3 | 93.5 | 95.1 | 93.6 |
| C-NC | 95.5 | 96.5 | 94.6 | 96.5 | 96.0 |
+
+# 4.2.3 Ablation Study
+
+As detailed in Figure 1 and Figure 5, the structure of the SpIn contains four main components: the "C-NC" tagging schema, the bigram features, the symbol features, and the inference layer. Since the CRF layer is a common approach and widely used in the era of machine learning as a decoder to restrict unreasonable tag transition, we exclude it in this ablation section and concentrate on the efficacy of the other three components. Our investigation is mainly carried out through:
+
+- substituting "C-NC" with traditional tagging schemas;
+- substituting bigram with unigram features;
+- removing symbol features;
+
+Substitution of "C-NC" Contrast experiments of tagging schemas are illustrated in Table 2. Keep bigram features, substitute "C-NC" with traditional "BMES", "BIS" and "BI" (equivalent to "START-NONSTART" and "SEP-APP") tagging schemas. Experiment conditions are set still. For adapting these three character-based tagging schemas, the bigram feature is considered rich context information for the current character. Each character feature is substituted with the bigram feature, representing the concatenation of the current and the previous character feature with their corresponding symbol feature. For the first character in the sentence, we put a "PAD" token to join the first character and form its bigram. The corresponding tag of the original character is labeled on the substituted bigram. The experiment results in Table 2 illustrate that "C-NC" does promote performance on all five datasets compared with traditional tagging schemas.
+
+Substitution of Bigram Features Keeping the "C-NC" tagging schema, we conduct a contrast ex
+
+Table 2: "C-NC" v.s traditional tagging schemas. The F1 score is employed as the metric.
+
| Features | CITYU | AS | PKU | MSR | CTB6 |
| Unigram | 86.5 | 88.0 | 86.5 | 87.1 | 89.6 |
| Bigram | 95.5 | 96.5 | 94.6 | 96.5 | 96.0 |
+
+Table 3: unigram v.s. bigram features. The F1 score is employed as the metric.
+
| Features | CITYU | AS | PKU | MSR | CTB6 |
| W/O Symbols | 94.6 | 95.4 | 92.7 | 96.1 | 93.4 |
| Symbols | 95.5 | 96.5 | 94.6 | 96.5 | 96.0 |
+
+Table 4: with symbols v.s. without symbols. The F1 score is employed as the metric.
+
+periment to investigate the effect of features. Integrating "C-NC" with unigram features downgrades "C-NC" to "BI" or "START-NONSTART". The comparison between bigram and traditional unigram features is illustrated in Table 3. Although "C-NC" is employed, the traditional unigram features perform worse than SpIn. Therefore, the bigram features are essential and specially tailored for our proposed "C-NC".
+
+Substitution of Symbol Features Table 4 illustrates the effect of the symbol features. After employing the symbol features, the result is further improved, by up to a $+2.6\%$ F1 score on the CTB6 dataset. The symbol features promote the performance of SpIn on the CWS task. Hence, the symbol features are used by default in the following experiments.
+
+For the "C-NC" tagging schema, if unigram is adopted, it will be equivalent to "BI" or "START-NONSTART", and significant performance loss has been observed on all datasets. Similarly, the decline in F score has been observed after removing the symbol feature. In summary, the whole framework contributes to the performance boosts instead of any component.
+
+# 4.3 Evaluation of Deep Learning Model
+
+# 4.3.1 Parameters & Evaluation Metrics
+
+The sequence length is 128, the learning rate is $2e-5$, the batch size is 64, and the number of training epochs is 10. An early-stopping mechanism is introduced to avoid over-fitting. Adam is employed as the optimizer. All the parameters mentioned above remain unchanged in the following experiments. Besides the F1 score, the recall of out-of-vocabulary words (R_oov) is a critical metric to evaluate the generalization of a word segmentation model. Hence, R_oov is also employed to show that SpIn is robust and effective for East Asian languages. Besides F1 and R_oov, we report the standard deviation (SD) over five experiments to indicate model reliability.
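+
+For reference, these hyper-parameters correspond to a standard PyTorch/Adam setup; the optimizer call and the model stub below are assumptions, not the authors' training script:
+
+```python
+import torch
+import torch.nn as nn
+
+config = dict(max_seq_length=128, learning_rate=2e-5, batch_size=64, epochs=10)
+model = nn.Linear((768 + 3) * 2, 2)   # stand-in for the SpIn_DL network sketched in Section 3.3
+optimizer = torch.optim.Adam(model.parameters(), lr=config["learning_rate"])
+```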
+
| Model | CITYU F1 | CITYU R_oov | AS F1 | AS R_oov | PKU F1 | PKU R_oov | MSR F1 | MSR R_oov | CTB6 F1 | CTB6 R_oov |
| Chen et al., 2017 | 95.6 | 81.40 | 94.6 | 73.50 | 94.3 | 72.67 | 96.0 | 71.60 | 96.2 | 82.48 |
| Gong et al., 2019 | 96.2 | 73.58 | 95.2 | 77.33 | 96.2 | 69.88 | 97.8 | 64.20 | 97.3 | 83.89 |
| Huang et al., 2020 | 97.6 | 87.27 | 96.6 | 79.26 | 96.6 | 79.71 | 97.9 | 83.35 | 97.6 | 87.77 |
| Meng et al., 2019 | 97.9 | - | 96.7 | - | 96.7 | - | 98.3 | - | - | - |
| Tian et al., 2020 | 97.8 | 87.57 | 96.58 | 78.48 | 96.51 | 86.76 | 98.28 | 86.67 | 97.16 | 88.00 |
| Qiu et al., 2020 | 96.91 | 86.91 | 96.44 | 76.39 | 96.41 | 78.91 | 98.05 | 78.92 | 96.99 | 87.0 |
| Ke et al., 2021 | 98.20 | 90.66 | 97.01 | 80.89 | 96.92 | 80.90 | 98.50 | 83.03 | 97.89 | 89.21 |
| SpIn_DL | 98.6 (0.06) | 90.68 (0.02) | 97.5 (0.01) | 81.36 (0.05) | 98.0 (0.02) | 93.53 (0.10) | 98.7 (0.04) | 93.13 (0.02) | 98.6 (0.10) | 93.90 (0.06) |
+
+Table 5: SpIn of Deep Learning version (SpIn_DL) v.s. dominant deep neural methods on the CWS task. Values in the brackets are SD of five experiments.
+
| Schema | CITYU | AS | PKU | MSR | CTB6 |
| BMES | 97.7 | 96.8 | 96.3 | 97.7 | 97.2 |
| BIS | 98.1 | 97.1 | 96.8 | 98.1 | 97.5 |
| BI | 98.3 | 97.2 | 97.4 | 98.3 | 98.0 |
| C-NC | 98.6 | 97.5 | 98.0 | 98.7 | 98.6 |
+
+Table 6: "C-NC" v.s. traditional tagging schemas. Refer to Table 5 for baseline. The F1 score is employed as the metric.
+
| Features | CITYU | AS | PKU | MSR | CTB6 |
| Unigram | 98.3 | 97.3 | 97.7 | 98.4 | 98.3 |
| Bigram | 98.6 | 97.5 | 98.0 | 98.7 | 98.6 |
+
+# 4.3.2 Experiment Results
+
+The experiment results are reported in Table 5. SpIn brings an improvement of up to a $+1.08\%$ F1 score on the PKU dataset and at least a $+0.2\%$ F1 score on the MSR dataset. Moreover, the best OOV performance on all five datasets shows the effectiveness of SpIn on OOV words; a $+6.77\%$ improvement is achieved on the PKU dataset. The gains in OOV recall demonstrate the better generalization capability and robustness of SpIn.
+
+Similar to the above experiments with the machine learning model, we also conduct an ablation study to evaluate the effects of different factors on the deep learning model, as reported in Tables 6, 7, 8, and 9. The F1 score is employed as the metric in these four contrast experiments. The baselines refer to the previous work listed in rows 2 to 8 of Table 5.
+
+Bigram features are also applied as context features to adapt traditional tagging schemas. The bigram feature is generated by concatenating the current and the previous character feature with their corresponding symbol feature. Similarly, we add extra "PAD" for the first character to construct the first bigram feature. The corresponding tag of the original character is labeled on the bigram feature. The experiment results in Table 6 show that "C-NC" achieves the best performance. Therefore, in the situation of rich features, the "C-NC" tagging schema also works for deep learning models.
+
+Table 7: bigram v.s unigram features. Refer to Table 5 for baseline. The F1 score is employed as the metric.
+
| Features | CITYU | AS | PKU | MSR | CTB6 |
| W/O Symbols | 98.4 | 97.3 | 98.0 | 98.6 | 98.5 |
| Symbols | 98.6 | 97.5 | 98.0 | 98.7 | 98.6 |
+
+Table 8: with symbols v.s. without symbols. Refer to Table 5 for baseline. The F1 score is the metric.
+
| Inference layer | CITYU | AS | PKU | MSR | CTB6 |
| CRF | 98.5 | 97.5 | 98.0 | 98.6 | 98.6 |
| softmax | 98.6 | 97.4 | 98.0 | 98.7 | 98.5 |
+
+Table 9: softmax v.s. CRF as inference layer. Refer to Table 5 for baseline. The F1 score is the metric.
+
+We also adapt the unigram features to the "C-NC" tagging schema to follow the variable-controlling method, which makes "C-NC" equivalent to "BI". The contrast experiment between the bigram and the unigram features is conducted, and the results are shown in Table 7. In contrast with SpIn_ML, the bigram features achieve a less significant improvement in SpIn_DL because of the rich pre-trained feature representation. Nevertheless, boosts of $+0.3\%$ F1 score are still observed on the CITYU, PKU, MSR, and CTB6 datasets.
+
+Table 8 illustrates the effect of the symbol features on the deep neural model. In contrast with the results in Table 4, the symbol features bring only small improvements. Nevertheless, $+0.2\%$ F1 score improvements are gained on the CITYU and AS datasets. The reason for the smaller gains is that BERT already simplifies feature engineering with its rich representation.
+
+As SpIn eliminates the restrictions on tag-to-tag transitions and on the first tag in a sentence, a softmax layer can further substitute for the CRF. Table 9 illustrates that replacing the CRF with the softmax does not affect the performance; competitive results are achieved with lower network complexity.
+
+# 4.4 Comparison of SpIn_DL and SpIn_ML
+
+Table 11 compares SpIn_DL and SpIn_ML. The model size and response time are rounded to the nearest integer. The model size of SpIn_DL is four times that of SpIn_ML. For SpIn_DL, the model size depends
+
| Model | BCCWJ F1 | BCCWJ R_oov |
| Kitagawa and Komachi, 2018 | 98.42 | - |
| Higashiyama et al., 2019 | 98.93 | - |
| BMES+Unigram | 97.71 | 90.08 |
| BIS+Unigram | 98.17 | 91.73 |
| BI+Unigram | 98.39 | 92.51 |
| SpIn | 98.94 (0.08) | 93.01 (0.01) |
+
+Table 10: SpIn v.s. dominant methods on JWS. Values in the brackets are SD of five experiments.
+
| Model | Size | Time (CPU) | F1 score |
| SpIn_DL | 400M | 15000us/char | 97.5 |
| SpIn_ML | 100M | 30us/char | 96.5 |
+
+on the network structure. However, for SpIn_ML, the model size depends on the scale of the training data. We choose AS (the largest of the five datasets) for the comparative experiment; therefore, the maximum model size of SpIn_ML is near 100M. The inference is performed on an otherwise idle CPU machine. We randomly select 2,000 sentences from all datasets for testing, with sentence lengths limited to [10, 50]. We conduct 10 runs and report the average value. SpIn_ML is 500 times as fast as SpIn_DL, while the F1 score difference between SpIn_ML and SpIn_DL is only $1\%$.
+
+# 4.5 Qualitative Analysis
+
+Besides the academic studies, we also compare SpIn with the well-established commercial model LTP4.0 (Che et al., 2021). LTP4.0 leverages large training datasets, whereas in this qualitative analysis SpIn is trained only on the smaller CTB6 dataset. In Figure 6, the ground truth agrees with SpIn for both sentences. The main disagreements concern the words "precalcining kiln" (预分解窑) in the first sentence and "total failure" (全盘皆输) in the second. "Precalcining kiln" is a technical term that leads to an out-of-vocabulary problem. "全盘皆输" is an idiom that literally means "lose the whole chess game", and LTP4.0 splits it into separate words. These two cases reveal the generalization capacity of SpIn when handling such biased samples.
+
+# 5 Adaptation to Asian Languages
+
+Japanese Word Segmentation (JWS) and Korean Word Segmentation (KWS) are evaluated on SpIn_DL to further prove SpIn is universal.
+
+Table 11: SpIn_DL v.s. SpIn_ML.
+
+ | KAIST | GSD |
| F1 | R_oov | F1 | R_oov |
| BMES+Unigram | 87.62 | 78.34 | 87.12 | 78.27 |
| BIS+Unigram | 92.19 | 83.72 | 89.94 | 81.97 |
| BI+Unigram | 92.26 | 83.78 | 90.03 | 82.08 |
| SpIn | 92.37 (0.04) | 83.81 (0.08) | 91.19 (0.09) | 82.24 (0.12) |
+
+Table 12: SpIn v.s. dominant methods on KWS. Values in the brackets are SD of five experiments.
+
+Input1: 中国近年来还从国外引进了预分解窑生产线
+SpIn: [中国][近年来][还从][国外][引进][了][预分解窑][生产线]
+LTP: [中国][近年来][还从][国外][引进][了][预分][解][窑][生产线]
+
+Input2: 只要有百分之一的漏失,就可能全盘皆输
+SpIn: [只要][有][百分之一][的][漏失][,][就][可能][全盘皆输]
+LTP: [只要][有][百分之一][的][漏失][,][就][可能][全盘][皆][输]
+
+Figure 6: SpIn v.s. LTP4.0
+
+# 5.1 Datasets & Settings
+
+The widely used Balanced Corpus of Contemporary Written Japanese (BCCWJ) version 1.1 (Maekawa et al., 2014) is used to evaluate JWS. We follow the same dataset split as the Project Next NLP for BCCWJ. The UD_Korean-GSD $^{3}$ and KAIST $^{4}$ corpora are used to evaluate KWS. These two datasets, widely used in syntactic parsing tasks, are automatically converted from the structural trees in the Google UD Treebank (McDonald et al., 2013) and the KAIST Treebank (Choi et al., 1994). For the feature extraction layer, BERT-base-Chinese is replaced with multilingual BERT, which covers Japanese and Korean.
+
+# 5.2 Results of JWS and KWS
+
+As an LSTM (Long Short-Term Memory) neural network is employed in (Kitagawa and Komachi, 2018), we exclude the performance boost gained from BERT and conduct the contrast experiment between the traditional methods and SpIn. We employ unigram features and traditional tagging schemas in the comparative experiments. Table 10 demonstrates that SpIn also achieves SOTA results on JWS. In contrast with works leveraging word dictionaries and character-type information, SpIn achieves comparable results without any extra knowledge. Besides, compared with the traditional methods that also leverage BERT, a significant improvement of up to a $+0.55\%$ F1 score is obtained. Meanwhile, the best R_oov is observed. As no word segmentation work has been conducted on these two Korean datasets, we report results compared with the traditional methods in Table 12. Performance boosts are observed on both datasets, with up to a $+1.25\%$ F1 improvement on the GSD dataset.
+
+The R_oov boosts indicate that SpIn has good generalization ability and works effectively for Korean.
+
+# 6 Conclusion
+
+SpIn provides a novel viewpoint on the WS task by modeling the separation state of two consecutive characters. Our simple but effective framework is robust and universal, achieving state-of-the-art performance on word segmentation tasks in East Asian languages. Moreover, the significant boosts on OOV words demonstrate the robustness and generalization ability of SpIn.
+
+# References
+
+Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for Chinese. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 608-615, Vancouver, Canada. Association for Computational Linguistics.
+Wanxiang Che, Yunlong Feng, Libo Qin, and Ting Liu. 2021. N-LTP: An open-source neural language technology platform for Chinese. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 42-49, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015. Long short-term memory neural networks for Chinese word segmentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1197-1206, Lisbon, Portugal. Association for Computational Linguistics.
+Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-criteria learning for Chinese word segmentation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1193-1203, Vancouver, Canada. Association for Computational Linguistics.
+Key-Sun Choi, Young S Han, Young G Han, and Oh W Kwon. 1994. Kaist tree bank project for korean: Present and future development. In Proceedings of the International Workshop on Sharable Natural Language Resources, pages 7-14. CiteSeer.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
+Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.
+Jingjing Gong, Xinchi Chen, Tao Gui, and Xipeng Qiu. 2019. Switch-lstms for multi-criteria Chinese word segmentation. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6457-6464. AAAI Press.
+Shohei Higashiyama, Masao Utiyama, Eiichiro Sumita, Masao Ideuchi, Yoshiaki Oida, Yohei Sakamoto,
+
+and Isaac Okada. 2019. Incorporating word attention into character-based word segmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2699-2709, Minneapolis, Minnesota. Association for Computational Linguistics.
+Weipeng Huang, Xingyi Cheng, Kunlong Chen, Taifeng Wang, and Wei Chu. 2020. Towards fast and accurate neural Chinese word segmentation with multi-criteria learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2062-2072, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+Zhen Ke, Liang Shi, Songtao Sun, Erli Meng, Bin Wang, and Xipeng Qiu. 2021. Pre-training with meta learning for Chinese word segmentation. pages 5514-5523.
+Yoshiaki Kitagawa and Mamoru Komachi. 2018. Long short-term memory for Japanese word segmentation. In Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong. Association for Computational Linguistics.
+John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, page 282-289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
+Changki Lee and Hyunki Kim. 2013. Automatic Korean word spacing using pegasos algorithm. Inf. Process. Manage., 49(1):370-379.
+Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for chinese word segmentation. Comput. Linguist., 35(4):505-512.
+Yang Liu and Yue Zhang. 2012. Unsupervised domain adaptation for joint segmentation and POS-tagging. In Proceedings of COLING 2012: Posters, pages 745-754, Mumbai, India. The COLING 2012 Organizing Committee.
+Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to Chinese word segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.
+Ji Ma, Kuzman Ganchev, and David Weiss. 2018. State-of-the-art Chinese word segmentation with bilLSTMs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4902-4908, Brussels, Belgium. Association for Computational Linguistics.
+Kikuo Maekawa, Makoto Yamazaki, Toshinobu Ogiso, Takehiko Maruyama, Hideki Ogura, Wakako Kashino, Hanae Koiso, Masaya Yamaguchi, Makiro
+
+Tanaka, and Yasuharu Den. 2014. Balanced corpus of contemporary written Japanese language resources and evaluation. Language Resources and Evaluation, 48.
+Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, and Jungmee Lee. 2013. Universal Dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92-97, Sofia, Bulgaria. Association for Computational Linguistics.
+Yuxian Meng, Wei Wu, Fei Wang, Xiaoya Li, Ping Nie, Fan Yin, Muyu Li, Qinghong Han, Xiaofei Sun, and Jiwei Li. 2019. Glyce: Glyph-vectors for chinese character representations. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 2746-2757. Curran Associates, Inc.
+Hajime Morita, Daisuke Kawahara, and Sadao Kurohashi. 2015. Morphological analysis for unsegmented languages using recurrent neural network language model. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2292-2297, Lisbon, Portugal. Association for Computational Linguistics.
+Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Maxmargin tensor neural network for Chinese word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 293-303, Baltimore, Maryland. Association for Computational Linguistics.
+Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 562-568, Geneva, Switzerland. COLING.
+Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
+Xipeng Qiu, Hengzhi Pei, Hang Yan, and Xuanjing Huang. 2020. A concise model for multi-criteria Chinese word segmentation with transformer encoder. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2887-2897, Online. Association for Computational Linguistics.
+
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
+Richard W. Sproat, Chilin Shih, William Gale, and Nancy Chang. 1996. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics, 22(3):377-404.
+Weiwei Sun and Jia Xu. 2011. Enhancing Chinese word segmentation using unlabeled data. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 970-979, Edinburgh, Scotland, UK. Association for Computational Linguistics.
+Yuanhe Tian, Yan Song, Fei Xia, Tong Zhang, and Yonggang Wang. 2020. Improving Chinese word segmentation with wordhood memory networks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8274-8285, Online. Association for Computational Linguistics.
+Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.
+Yiou Wang, Jun'ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving Chinese word segmentation and POS tagging with semi-supervised methods using large auto-analyzed data. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 309-317, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.
+Naiwen Xue, Fei Xia, Fu-dong Chiou, and Marta Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Nat. Lang. Eng., 11(2):207-238.
+Nianwen Xue. 2003. Chinese word segmentation as character tagging. In International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number 1, February 2003: Special Issue on Word Formation and Chinese Language Processing, pages 29-48.
+Hang Yan, Xipeng Qiu, and Xuanjing Huang. 2020. A graph-based model for joint chinese word segmentation and dependency parsing. Transactions of the Association for Computational Linguistics, 8:78-92.
+Jie Yang, Yue Zhang, and Fei Dong. 2017. Neural word segmentation with rich pretraining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 839-849, Vancouver, Canada. Association for Computational Linguistics.
+Jie Yang, Yue Zhang, and Shuailong Liang. 2019. Subword encoding in lattice LSTM for Chinese word
+
+segmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2720-2725, Minneapolis, Minnesota. Association for Computational Linguistics.
+Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from unlabeled data with co-training for Chinese word segmentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 311-321, Seattle, Washington, USA. Association for Computational Linguistics.
+Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Transition-based neural word segmentation. pages 421-431.
+Meishan Zhang, Yue Zhang, and Guohong Fu. 2018. Transition-based neural word segmentation using word-level features. J. Artif. Int. Res., 63(1):923-953.
+Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 840-847, Prague, Czech Republic. Association for Computational Linguistics.
+Hai Zhao, Chang-Ning Huang, and Mu Li. 2006. An improved Chinese word segmentation system with conditional random field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 162-165, Sydney, Australia. Association for Computational Linguistics.
\ No newline at end of file
diff --git a/xfundabenchmarkdatasetformultilingualvisuallyrichformunderstanding/full.md b/xfundabenchmarkdatasetformultilingualvisuallyrichformunderstanding/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bbbcaa01db6cbe74e0c6d7312979bc5e2c5e9041
--- /dev/null
+++ b/xfundabenchmarkdatasetformultilingualvisuallyrichformunderstanding/full.md
@@ -0,0 +1,279 @@
+# XFUND: A Benchmark Dataset for Multilingual Visually Rich Form Understanding
+
+Yiheng Xu $^{1*}$ , Tengchao Lv $^{1}$ , Lei Cui $^{1}$ , Guoxin Wang $^{2}$ , Yijuan Lu $^{2}$ , Dinei Florencio $^{2}$ , Cha Zhang $^{2}$ , Furu Wei $^{1}$
+
+1Microsoft Research Asia 2Microsoft Azure AI
+
+{t-yihengxu,tengchaolv,lecu}@microsoft.com
+
+{guow,yijlu,dinei,chazhang,fuwei}@microsoft.com
+
+# Abstract
+
+Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, existing research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and pre-trained LayoutXLM models are publicly available at https://aka.ms/layoutxlm.
+
+# 1 Introduction
+
+Recently, multimodal pre-training for visually rich document understanding (VRDU) has achieved new SOTA performance on several public benchmarks (Xu et al., 2021, 2020), including form understanding (Jaume et al., 2019), receipt understanding (Park et al., 2019), complex layout understanding (Stanisławek et al., 2021), document image classification (Harley et al., 2015) and document VQA task (Mathew et al., 2021), due to the advantage that text, layout and image information is jointly learned end-to-end in a single framework. However, since most evaluation benchmarks focus
+
+on English VRDs, it is hard to explore the performance of a document understanding system on VRDs in other languages. Simply translating these documents automatically with machine translation services might help, but it is often not satisfactory due to the poor translation quality on document images (Afli and Way, 2016). Therefore, it is vital to explore the multilingual generalization ability of multimodal pre-training for VRDU tasks.
+
+Multilingual pre-trained models such as mBERT (Devlin et al., 2019), XLM (Conneau and Lample, 2019), XLM-RoBERTa (Conneau et al., 2020), mBART (Liu et al., 2020), and the recent InfoXLM (Chi et al., 2021) and mT5 (Xue et al., 2021) have pushed many SOTA results on cross-lingual natural language understanding tasks by pre-training the Transformer models on different languages. These models have successfully bridged the language barriers in a number of cross-lingual transfer benchmarks such as XNLI (Conneau et al., 2018) and XTREME (Hu et al., 2020). Although a large amount of multilingual text data has been used in these cross-lingual pre-trained models, text-only multilingual models cannot be easily used in the VRDU tasks because they are usually fragile in analyzing the documents due to the format/layout diversity of documents in different countries, and even different regions in the same country. Hence, to accurately understand these visually rich documents in different languages, it is crucial to pre-train the multilingual models in a multimodal framework. Meanwhile, it is vital to provide a human-labeled benchmark to further facilitate multilingual document understanding.
+
+To this end, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which contains 7 languages, including Chinese, Japanese, Spanish, French, Italian, German, Portuguese. In addition to the fully annotated data, we propose two subtasks
+
+with three different settings. The two subtasks are semantic entity recognition and relation extraction, and we introduce three different settings to explore the multilingual and complex-layout generalization ability: (1) Language-specific fine-tuning follows the typical paradigm of fine-tuning and testing on the same language. (2) Zero-shot transfer learning means that the model is trained on English data only and then evaluated on each target language. (3) Multitask fine-tuning requires the model to be trained on data from all languages and then evaluated on each target language. These different settings evaluate not only the multilingual representation for each language but also the cross-lingual generalization across tasks.
+
+Moreover, we also present a multimodal pre-trained model for multilingual VRDU tasks, aka LayoutXLM, which is a multilingual extension of the recent LayoutLMv2 model (Xu et al., 2021). To evaluate the multilingual generalization ability of this framework, we use the pre-training objectives of LayoutLMv2, including Masked Visual-Language Model (MVLM), Image-Text Matching (ITM), and Image-Text Alignment (ITA). In addition, we pre-train the model with the IIT-CDIP dataset (Lewis et al., 2006) as well as a great number of publicly available digital-born multilingual PDF files from the internet, which helps the LayoutXLM model to learn from real-world documents. In this way, the model obtains textual and visual signals from a variety of document templates/layouts/formats in different languages, thereby taking advantage of the local invariance property from the textual, visual, and linguistic perspectives. Experiment results show that the pre-trained LayoutXLM outperforms several SOTA cross-lingual pre-trained models (Conneau et al., 2020; Chi et al., 2021) on the XFUND benchmark dataset, which also demonstrates the potential of the multimodal pre-training strategy for multilingual document understanding.
+
+The contributions of this paper are summarized as follows:
+
+- We introduce XFUND, a multilingual form understanding benchmark dataset that includes human-labeled forms with key-value pairs in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese).
+- We propose LayoutXLM, a multimodal pretrained model for multilingual document un
+
+
+Figure 1: The illustration of corpus construction.
+
+derstanding, which is trained with large-scale real-world scanned/digital-born documents.
+
+- LayoutXLM has outperformed other SOTA multilingual baseline models on the XFUND dataset, which demonstrates the great potential for the multimodal pre-training for the multilingual VRDU task. The pre-trained LayoutXLM model and the XFUND dataset have been publicly available.
+
+# 2 XFUND
+
+As illustrated in Figure 1, we develop our XFUND dataset in four steps including §2.1 Template Collection, §2.2 Form Creation, §2.3 Key-value Annotation, and §2.4 Data Finalization and Statistics, spending around 1,500 hours of human labor in total. Further details on ethical considerations are presented in §A Ethical Consideration.
+
+# 2.1 Template Collection
+
+Forms are usually used to collect information in different business scenarios. To avoid the privacy and sensitive information issue with real-world documents, we collect the documents publicly available on the internet and remove the content within the documents while only keeping the templates to fill in synthetic information manually. We collect form templates in 7 languages from the internet.
+
+# 2.2 Form Creation
+
+With the collected form templates, the human annotators manually fill synthetic information into these templates following corresponding requirements. Each template is allowed to be used only once, which means each form is different from the others. Besides, since the FUNSD (Jaume et al., 2019) documents contain both digitally filled-out forms and handwritten forms, we also ask annotators to fill in the forms by typing or handwriting. The completed
+
+
+(a) Chinese
+
+
+(b) Italian
+
+
+(c) Spanish
+Figure 2: Three sampled forms from the XFUND benchmark dataset (Chinese, Italian, and Spanish), where red denotes the headers, green denotes the keys and blue denotes the values.
+
+forms are finally scanned into document images for further OCR processing and key-value labeling.
+
+# 2.3 Key-value Annotation
+
+Key-value pairs are also annotated by human annotators. Equipped with the synthetic forms, we use Microsoft Read API to generate OCR tokens with bounding boxes. With an in-house GUI annotation tool, annotators are shown the original document images and the bounding boxes visualization of all OCR tokens. The annotators are asked to group the discrete tokens into entities and assign pre-defined labels to the entities. Also, if two entities are related, they are linked together as a key-value pair.
+
+# 2.4 Data Finalization and Statistics
+
+We design testing scripts to filter and check the annotated files and ask specific annotators to perform ethical checking. Cases with detected issues are sent back to the data annotation pipeline for new valid labels.
+
+Finally, the XFUND benchmark includes 7 languages with 1,393 fully annotated forms, where sampled documents are shown in Figure 2. Each language includes 199 forms, where the training set includes 149 forms, and the test set includes 50 forms. Detailed information is shown in Table 1.
+
+| Lang | Split | Header | Question | Answer | Other | Total |
| ZH | training | 229 | 3,692 | 4,641 | 1,666 | 10,228 |
| ZH | testing | 58 | 1,253 | 1,732 | 586 | 3,629 |
| JA | training | 150 | 2,379 | 3,836 | 2,640 | 9,005 |
| JA | testing | 58 | 723 | 1,280 | 1,322 | 3,383 |
| ES | training | 253 | 3,013 | 4,254 | 3,929 | 11,449 |
| ES | testing | 90 | 909 | 1,218 | 1,196 | 3,413 |
| FR | training | 183 | 2,497 | 3,427 | 2,709 | 8,816 |
| FR | testing | 66 | 1,023 | 1,281 | 1,131 | 3,501 |
| IT | training | 166 | 3,762 | 4,932 | 3,355 | 12,215 |
| IT | testing | 65 | 1,230 | 1,599 | 1,135 | 4,029 |
| DE | training | 155 | 2,609 | 3,992 | 1,876 | 8,632 |
| DE | testing | 59 | 858 | 1,322 | 650 | 2,889 |
| PT | training | 185 | 3,510 | 5,428 | 2,531 | 11,654 |
| PT | testing | 59 | 1,288 | 1,940 | 882 | 4,169 |
+
+Table 1: Statistics of the XFUND dataset. Each number in the table indicates the number of entities in each category.
+
+# 2.5 Task Definition
+
+Key-value extraction is one of the most critical tasks in form understanding. Inspired by FUNSD (Jaume et al., 2019), we define this task with two sub-tasks, which are semantic entity recognition and relation extraction.
+
+Semantic Entity Recognition Given a visually rich document $\mathcal{D}$ , we acquire discrete token set $t = \{t_0, t_1, \ldots, t_n\}$ , where each token $t_i = (w, (x_0, y_0, x_1, y_1))$ consists of a word $w$ and its bounding box coordinates $(x_0, y_0, x_1, y_1)$ .
+
+
+Figure 3: Architecture of the LayoutXLM Model, where the semantic entity recognition and relation extraction tasks are also demonstrated.
+
+$\mathcal{C} = \{c_0, c_1, \dots, c_m\}$ is the set of semantic entity labels into which the tokens are classified. Semantic entity recognition is the task of extracting semantic entities and classifying them into the given entity types. In other words, we intend to find a function $F_{SER} : (\mathcal{D}, \mathcal{C}) \to \mathcal{E}$, where $\mathcal{E}$ is the predicted semantic entity set:
+
+$$
+\mathcal{E} = \{(\{t_{0}^{0}, \dots, t_{0}^{n_{0}}\}, c_{0}), \dots, (\{t_{k}^{0}, \dots, t_{k}^{n_{k}}\}, c_{k})\}
+$$
+
+Relation Extraction Equipped with the document $\mathcal{D}$ and the semantic entity label set $\mathcal{C}$ , relation extraction aims to predict the relation between any two predicted semantic entities. Defining $\mathcal{R} = \{r_0, r_1,.., r_m\}$ as the semantic relation labels, we intend to find a function $F_{RE}:(\mathcal{D},\mathcal{C},\mathcal{R},\mathcal{E})\to \mathcal{L}$ where $\mathcal{L}$ is the predicted semantic relation set:
+
+$$
+\mathcal{L} = \{(\mathit{head}_{0}, \mathit{tail}_{0}, r_{0}), \dots, (\mathit{head}_{k}, \mathit{tail}_{k}, r_{k})\}
+$$
+
+where $head_{i}$ and $tail_{i}$ are two semantic entities. In this work, we mainly focus on the key-value relation extraction.
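+
+To make these definitions concrete, a single annotated form can be thought of as the following record; the field names here are a hypothetical sketch rather than the released dataset's exact JSON schema, while the entity labels (question/answer) follow Table 1:
+
+```python
+# one document D with its entity set E and key-value relation set L (illustrative only)
+document = {
+    "tokens": [
+        {"text": "Name:", "box": [110, 80, 160, 100]},     # (x0, y0, x1, y1)
+        {"text": "Mario", "box": [200, 80, 250, 100]},
+    ],
+    "entities": [
+        {"id": 0, "token_ids": [0], "label": "question"},  # the key
+        {"id": 1, "token_ids": [1], "label": "answer"},    # its value
+    ],
+    "relations": [
+        {"head": 0, "tail": 1, "label": "key-value"},
+    ],
+}
+```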
+
+# 3 LayoutXLM
+
+In this section, we present a powerful baseline model LayoutXLM and introduce its model architecture, pre-training objectives, and pre-training dataset. We follow the LayoutLMv2 (Xu et al.,
+
+2021) architecture and transfer the model to large-scale multilingual document datasets.
+
+# 3.1 Model Architecture
+
+Similar to the LayoutLMv2 framework, we build the LayoutXLM model with a multimodal Transformer architecture. The framework is shown in Figure 3. The model accepts information from three different modalities, namely text, layout, and image, which are encoded respectively with the text embedding, layout embedding, and visual embedding layers. The text and image embeddings are concatenated and then added to the layout embedding to obtain the input embedding. The input embeddings are encoded by a multimodal Transformer with the spatial-aware self-attention mechanism. Finally, the output contextual representation can be utilized by the following task-specific layers. For brevity, we refer to (Xu et al., 2021) for further details on the architecture.
+
+# 3.2 Pre-training
+
+The pre-training objectives of LayoutLMv2 have shown effectiveness in modeling visually rich documents. Therefore, we naturally adapt this pre-training framework to multilingual document pre-training. Following the idea of cross-modal alignment, our pre-training framework for docu
+
+ment understanding contains three pre-training objectives, which are Multilingual Masked Visual-Language Modeling (text layout alignment), Text-Image Alignment (fine-grained text-image alignment), and Text-Image Matching (coarse-grained text-image alignment).
+
+Multilingual Masked Visual-Language Modeling The Masked Visual-Language Modeling (MVLM) is originally proposed in the vanilla LayoutLM and also used in LayoutLMv2, aiming to model the rich text in visually rich documents. In this pre-training objective, the model is required to predict the masked text token based on its remaining text context and whole layout clues. Similar to the LayoutLM/LayoutLMv2, we train the LayoutXLM with the Multilingual Masked Visual-Language Modeling objective (MMVLM).
+
+In LayoutLM/LayoutLMv2, an English word is treated as the basic unit, and its layout information is obtained by extracting the bounding box of each word with OCR tools, then subtokens of each word share the same layout information. However, for LayoutXLM, this strategy is not applicable because the definition of the linguistic unit is different from language to language. To prevent the language-specific pre-processing, we decide to obtain the character-level bounding boxes. After the tokenization using SentencePiece with a unigram language model, we calculate the bounding box of each token by merging the bounding boxes of all characters it contains. In this way, we can efficiently unify the multilingual multimodal inputs.
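+
+A minimal sketch of this merging step is shown below; taking the enclosing rectangle of the character boxes is our reading of "merging", since the paper does not spell out the rule:
+
+```python
+def token_bounding_box(char_boxes):
+    """Merge character-level boxes (x0, y0, x1, y1) into one box for a SentencePiece token."""
+    x0 = min(b[0] for b in char_boxes)
+    y0 = min(b[1] for b in char_boxes)
+    x1 = max(b[2] for b in char_boxes)
+    y1 = max(b[3] for b in char_boxes)
+    return [x0, y0, x1, y1]
+
+# two characters sitting side by side on the same line merge into one token box
+print(token_bounding_box([[10, 20, 30, 40], [30, 20, 50, 40]]))  # -> [10, 20, 50, 40]
+```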
+
+Text-Image Alignment The Text-Image Alignment (TIA) task is designed to help the model capture the fine-grained alignment relationship between text and image. We randomly select some text lines and then cover their corresponding image regions on the document image. The model needs to predict a binary label for each token based on whether it is covered or not.
+
+Text-Image Matching For Text-Image Matching (TIM), we aim to align the high-level semantic representation between text and image. To this end, we require the model to predict whether the text and image come from the same document page.
+
+# 3.3 Pre-training Data
+
+The LayoutXLM model is pre-trained with documents in 53 languages. In this section, we briefly
+
+describe the pipeline for preparing the large-scale multilingual document collection.
+
+Data Collection To collect a large-scale multilingual visually rich document collection, we download and process publicly available multilingual digital-born PDF documents following the principles and policies of Common Crawl. Using digital-born PDF documents can benefit the collecting and pre-processing steps. On the one hand, we do not have to identify scanned documents among the natural images. On the other hand, we can directly extract accurate text with corresponding layout information with off-the-shelf PDF parsers and save time for running expensive OCR tools.
+
+Pre-processing The pre-processing step is needed to clean the dataset since the raw multilingual PDFs are often noisy. We use an open-source PDF parser called PyMuPDF3 to extract text, layout, and document images from PDF documents. After PDF parsing, we discard the documents with less than 200 characters. We use the language detector from the FastText (Joulin et al., 2017) library and split data per language. Following CCNet (Wenzek et al., 2020), we classify the document as the language if the language score is higher than 0.5. Otherwise, unclear PDF files with a language score of less than 0.5 are discarded.
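+
+A rough sketch of this pre-processing pipeline is shown below. PyMuPDF and fastText are the tools named in the text, but the specific calls, the thresholds wired into a single function, and the omission of image extraction are our simplifications:
+
+```python
+import fitz       # PyMuPDF
+import fasttext   # language identification with the public lid.176.bin model
+
+lid_model = fasttext.load_model("lid.176.bin")
+
+def parse_and_filter(pdf_path, min_chars=200, min_score=0.5):
+    """Extract words with boxes from a digital-born PDF and keep it only if the language is clear."""
+    doc = fitz.open(pdf_path)
+    words = []
+    for page in doc:
+        # each entry: (x0, y0, x1, y1, word, block_no, line_no, word_no)
+        words.extend(page.get_text("words"))
+    text = " ".join(w[4] for w in words)
+    if len(text) < min_chars:
+        return None                       # discard documents with fewer than 200 characters
+    labels, scores = lid_model.predict(text)
+    if scores[0] < min_score:
+        return None                       # discard unclear PDFs with a language score below 0.5
+    return {"language": labels[0].replace("__label__", ""), "words": words}
+```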
+
+Data Sampling After splitting the data per language, we use the same sampling probability $p_l \propto (n_l / n)^{\alpha}$ as XLM (Conneau and Lample, 2019) to sample the batches from different languages, where $n_l$ is the document counts per language and $n$ denotes the total number. Following InfoXLM (Chi et al., 2021), we use $\alpha = 0.7$ for LayoutXLM to make a reasonable compromise between performance on high- and low-resource languages. Finally, we follow this distribution and sample a multilingual document dataset with 22 million visually rich documents. In addition, we also sample 8 million scanned English documents from the IIT-CDIP dataset so that we totally use 30 million documents to pre-train the LayoutXLM, where the model can benefit from the visual information of both scanned and digital-born document images.
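+
+The exponential sampling rule can be written out directly; the document counts in the example are made up and only illustrate how $\alpha = 0.7$ up-samples low-resource languages:
+
+```python
+def sampling_probabilities(doc_counts, alpha=0.7):
+    """p_l proportional to (n_l / n) ** alpha, re-normalized over languages."""
+    n = sum(doc_counts.values())
+    weights = {lang: (count / n) ** alpha for lang, count in doc_counts.items()}
+    z = sum(weights.values())
+    return {lang: w / z for lang, w in weights.items()}
+
+# illustrative counts, not the real corpus statistics
+print(sampling_probabilities({"en": 8_000_000, "zh": 1_000_000, "pt": 100_000}))
+```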
+
+# 4 Key-value Extraction with PLMs
+
+In this section, we present a simple yet efficient baseline framework based on pre-trained language
+
+models (PLMs) for our two sub-tasks. Equipped with this framework, we integrate two existing popular cross-lingual pre-trained language models, XLM-RoBERTa and InfoXLM, and our proposed LayoutXLM as the pre-trained language model backbones.
+
+In this framework, given a visually rich document $\mathcal{D}$, we pass the discrete token set $\mathbf{T} = \{t_0, t_1, \dots, t_n\}$ into these backbone models to obtain the contextual representation of each token, $\mathbf{H} = \{\mathbf{h}_0, \mathbf{h}_1, \dots, \mathbf{h}_n\}$. For different tasks, the representations are processed with different modules to predict the required labels.
+
+# 4.1 Semantic Entity Recognition
+
+For this task, we simply follow the typical sequence labeling paradigm with BIO labeling format and build task-specific feed-forward network layers $(\mathsf{FFN}^{SER})$ over the output of backbone models.
+
+$$
+\mathbf{h}_{i}^{SER} = \mathrm{FFN}^{SER}(\mathbf{h}_{i})
+$$
+
+# 4.2 Relation Extraction
+
+For the relation extraction task, we first incrementally construct the set of relation candidates by producing all possible pairs of given semantic entities. For each pair, the representation of the head entity $\mathbf{h}_i^{head}$ or tail entity $\mathbf{h}_j^{tail}$ is the concatenation of the first token vector in each entity and the entity type embedding $\mathbf{e}^{head} / \mathbf{e}^{tail}$ obtained with a specific type embedding layer. After respectively projected by two feed-forward network layers, the representations of head and tail are fed into a bi-affine classifier consisting of trainable weights $\mathbf{U}$ , $\mathbf{W}$ , and $\mathbf{b}$ .
+
+$$
+\mathbf{h}_{i}^{\mathit{head}} = \mathrm{FFN}^{\mathit{head}}\left(\left[\mathbf{h}_{i}; \mathbf{e}^{\mathit{head}}\right]\right)
+$$
+
+$$
+\mathbf{h}_{j}^{\mathit{tail}} = \mathrm{FFN}^{\mathit{tail}}\left(\left[\mathbf{h}_{j}; \mathbf{e}^{\mathit{tail}}\right]\right)
+$$
+
+$$
+\mathbf{h}_{i,j}^{\mathit{relation}} = \mathbf{h}_{i}^{\mathit{head}} \mathbf{U} \mathbf{h}_{j}^{\mathit{tail}} + \mathbf{W}\left(\mathbf{h}_{i}^{\mathit{head}} \circ \mathbf{h}_{j}^{\mathit{tail}}\right) + \mathbf{b}
+$$
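+
+A sketch of this relation head in PyTorch is given below; the dimensions, the initialization, and the reading of the $\circ$ operator as an element-wise product are our assumptions:
+
+```python
+import torch
+import torch.nn as nn
+
+class BiAffineRelationClassifier(nn.Module):
+    """Illustrative bi-affine relation head over pairs of semantic entities."""
+
+    def __init__(self, hidden_size, num_entity_types, num_relations):
+        super().__init__()
+        self.type_emb = nn.Embedding(num_entity_types, hidden_size)
+        self.ffn_head = nn.Linear(2 * hidden_size, hidden_size)
+        self.ffn_tail = nn.Linear(2 * hidden_size, hidden_size)
+        # trainable bi-affine parameters U and W (the bias b lives inside W)
+        self.U = nn.Parameter(torch.randn(hidden_size, num_relations, hidden_size) * 0.02)
+        self.W = nn.Linear(hidden_size, num_relations)
+
+    def forward(self, head_token_repr, head_type, tail_token_repr, tail_type):
+        # entity representation: first-token vector concatenated with its entity-type embedding
+        h_head = self.ffn_head(torch.cat([head_token_repr, self.type_emb(head_type)], dim=-1))
+        h_tail = self.ffn_tail(torch.cat([tail_token_repr, self.type_emb(tail_type)], dim=-1))
+        bilinear = torch.einsum("bi,irj,bj->br", h_head, self.U, h_tail)   # h_head U h_tail
+        return bilinear + self.W(h_head * h_tail)                          # + W(h_head o h_tail) + b
+
+# usage sketch: score one candidate pair over two relation labels (e.g. key-value vs. no relation)
+clf = BiAffineRelationClassifier(hidden_size=768, num_entity_types=4, num_relations=2)
+scores = clf(torch.randn(1, 768), torch.tensor([1]), torch.randn(1, 768), torch.tensor([2]))
+```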
+
+# 5 Experiments
+
+# 5.1 Settings
+
+Cross-lingual Evaluation Besides the experiments of typical language-specific fine-tuning, we also design two additional settings to demonstrate the ability to transfer knowledge among different languages, which are zero-shot transfer learning and multitask fine-tuning. Specifically, (1) language-specific fine-tuning refers to the typical fine-tuning paradigm of fine-tuning on language X and testing on language X. (2) Zero-shot transfer
+
+learning means the models are trained on English data only and then evaluated on each target language. (3) Multitask fine-tuning requires the model to train on data in all languages. We evaluate models in these three settings over two sub-tasks in XFUND: semantic entity recognition and relation extraction, and compare LayoutXLM to two cross-lingual language models: XLM-R and InfoXLM.
+
+Pre-training LayoutXLM Following the original LayoutLMv2 recipe, we train LayoutXLM models with two model sizes. For the LayoutXLMBASE model, we use a 12-layer Transformer encoder with 12 heads and set the hidden size to $d = 768$ . For the LayoutXLMLARGE model, we increase the layer number to 24 with 16 heads and hidden size to $d = 1,024$ . ResNeXt101-FPN is used as a visual backbone in both models. Finally, the number of parameters in these two models are approximately 345M and 625M. During the pre-training stage, we first initialize the Transformer encoder along with text embeddings from InfoXLM and initialize the visual embedding layer with a Mask-RCNN model trained on PubLayNet. The rest of the parameters are initialized randomly. Our models are trained with 64 Nvidia V100 GPUs with batch size of 1,024 for 150k training steps.
+
+Fine-tuning on XFUND For a fair comparison, we train all models with the basic hyper-parameter settings and slightly adapt them to make sure every optimization has well converged. For the semantic entity recognition task, we train for 1,000 steps with batch size of 16. For the relation extraction task, we train for 3,000 steps with batch size of 8. We use the linear decay with a learning rate of 5e-5 and warm-up ratio of 0.1.
+
+# 5.2 Results
+
+We evaluate the LayoutXLM model on language-specific fine-tuning tasks, and the results are shown in Table 2. Compared with the pre-trained models such as XLM-R and InfoXLM, the LayoutXLM LARGE model achieves the highest F1 scores in both SER and RE tasks. The significant improvement shows LayoutXLM's capability to transfer knowledge obtained from pre-training to downstream tasks, which further confirms the effectiveness of our multilingual pre-training framework.
+
+For the cross-lingual zero-shot transfer, we present the evaluation results in Table 3. Although the models are only fine-tuned on FUNSD dataset (in English), it can still transfer the knowledge to
+
+ | Model | FUNSD | ZH | JA | ES | FR | IT | DE | PT | Avg. |
| SER | XLM-RoBERTaBASE | 0.667 | 0.8774 | 0.7761 | 0.6105 | 0.6743 | 0.6687 | 0.6814 | 0.6818 | 0.7047 |
| InfoXLMBASE | 0.6852 | 0.8868 | 0.7865 | 0.6230 | 0.7015 | 0.6751 | 0.7063 | 0.7008 | 0.7207 |
| LayoutXLMBASE | 0.794 | 0.8924 | 0.7921 | 0.7550 | 0.7902 | 0.8082 | 0.8222 | 0.7903 | 0.8056 |
| XLM-RoBERTaLARGE | 0.7074 | 0.8925 | 0.7817 | 0.6515 | 0.7170 | 0.7139 | 0.711 | 0.7241 | 0.7374 |
| InfoXLMLARGE | 0.7325 | 0.8955 | 0.7904 | 0.6740 | 0.7140 | 0.7152 | 0.7338 | 0.7212 | 0.7471 |
| LayoutXLMLARGE | 0.8225 | 0.9161 | 0.8033 | 0.7830 | 0.8098 | 0.8275 | 0.8361 | 0.8273 | 0.8282 |
| RE | XLM-RoBERTaBASE | 0.2659 | 0.5105 | 0.5800 | 0.5295 | 0.4965 | 0.5305 | 0.5041 | 0.3982 | 0.4769 |
| InfoXLMBASE | 0.2920 | 0.5214 | 0.6000 | 0.5516 | 0.4913 | 0.5281 | 0.5262 | 0.4170 | 0.4910 |
| LayoutXLMBASE | 0.5483 | 0.7073 | 0.6963 | 0.6896 | 0.6353 | 0.6415 | 0.6551 | 0.5718 | 0.6432 |
| XLM-RoBERTaLARGE | 0.3473 | 0.6475 | 0.6798 | 0.6330 | 0.6080 | 0.6171 | 0.6189 | 0.5762 | 0.5910 |
| InfoXLMLARGE | 0.3679 | 0.6775 | 0.6604 | 0.6346 | 0.6096 | 0.6659 | 0.6057 | 0.5800 | 0.6002 |
| LayoutXLMLARGE | 0.6404 | 0.7888 | 0.7255 | 0.7666 | 0.7102 | 0.7691 | 0.6843 | 0.6796 | 0.7206 |
+
+Table 2: Language-specific fine-tuning accuracy (F1) on the XFUND dataset (fine-tuning on X, testing on X), where "SER" denotes the semantic entity recognition and "RE" denotes the relation extraction.
+
+| Task | Model | FUNSD | ZH | JA | ES | FR | IT | DE | PT | Avg. |
+| SER | XLM-RoBERTaBASE | 0.667 | 0.4144 | 0.3023 | 0.3055 | 0.371 | 0.2767 | 0.3286 | 0.3936 | 0.3824 |
+| SER | InfoXLMBASE | 0.6852 | 0.4408 | 0.3603 | 0.3102 | 0.4021 | 0.2880 | 0.3587 | 0.4502 | 0.4119 |
+| SER | LayoutXLMBASE | 0.794 | 0.6019 | 0.4715 | 0.4565 | 0.5757 | 0.4846 | 0.5252 | 0.539 | 0.5561 |
+| SER | XLM-RoBERTaLARGE | 0.7074 | 0.5205 | 0.3939 | 0.3627 | 0.4672 | 0.3398 | 0.418 | 0.4997 | 0.4637 |
+| SER | InfoXLMLARGE | 0.7325 | 0.5536 | 0.4132 | 0.3689 | 0.4909 | 0.3598 | 0.4363 | 0.5126 | 0.4835 |
+| SER | LayoutXLMLARGE | 0.8225 | 0.6896 | 0.519 | 0.4976 | 0.6135 | 0.5517 | 0.5905 | 0.6077 | 0.6115 |
+| RE | XLM-RoBERTaBASE | 0.2659 | 0.1601 | 0.2611 | 0.2440 | 0.2240 | 0.2374 | 0.2288 | 0.1996 | 0.2276 |
+| RE | InfoXLMBASE | 0.2920 | 0.2405 | 0.2851 | 0.2481 | 0.2454 | 0.2193 | 0.2027 | 0.2049 | 0.2423 |
+| RE | LayoutXLMBASE | 0.5483 | 0.4494 | 0.4408 | 0.4708 | 0.4416 | 0.4090 | 0.3820 | 0.3685 | 0.4388 |
+| RE | XLM-RoBERTaLARGE | 0.3473 | 0.2421 | 0.3037 | 0.2843 | 0.2897 | 0.2496 | 0.2617 | 0.2333 | 0.2765 |
+| RE | InfoXLMLARGE | 0.3679 | 0.3156 | 0.3364 | 0.3185 | 0.3189 | 0.2720 | 0.2953 | 0.2554 | 0.3100 |
+| RE | LayoutXLMLARGE | 0.6404 | 0.5531 | 0.5696 | 0.5780 | 0.5615 | 0.5184 | 0.4890 | 0.4795 | 0.5487 |
+
+Table 3: Zero-shot transfer accuracy (F1) on the XFUND dataset (fine-tuning on FUNSD, testing on X), where "SER" denotes semantic entity recognition and "RE" denotes relation extraction.
+
+they can still transfer the knowledge to different languages. In addition, the LayoutXLM model significantly outperforms the other, text-only models. This verifies that LayoutXLM can capture the layout invariance common across languages and transfer it to other languages.
+
+Finally, Table 4 shows the evaluation results for multitask fine-tuning. In this setting, the pre-trained LayoutXLM model is fine-tuned on all 8 languages simultaneously and evaluated on each specific language, in order to investigate whether multilingual fine-tuning brings additional improvements. We observe that multitask learning further improves the model performance compared to language-specific fine-tuning, which again confirms that document understanding can benefit from the layout invariance shared among different languages.
+
+# 6 Related Work
+
+Multimodal Pre-training Multimodal pre-training has become popular in recent years due to its successful applications in vision-language representation learning. Lu et al. (2019) proposed ViLBERT for learning task-agnostic joint representations of image content and natural language by extending the popular BERT architecture to a multimodal two-stream model. Su et al. (2020) proposed VL-BERT, which adopts the Transformer model as the backbone and extends it to take both visual and linguistic embedded features as input. Li et al. (2020a) proposed VisualBERT, which consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. Chen et al. (2020) introduced UNITER, which learns through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions) and can power heterogeneous downstream V+L tasks with joint multimodal embeddings. Li et al. (2020b) proposed a new learning method, Oscar (Object-Semantics Aligned Pre-training),
+
+| Task | Model | FUNSD | ZH | JA | ES | FR | IT | DE | PT | Avg. |
+| SER | XLM-RoBERTaBASE | 0.6633 | 0.883 | 0.7786 | 0.6223 | 0.7035 | 0.6814 | 0.7146 | 0.6726 | 0.7149 |
+| SER | InfoXLMBASE | 0.6538 | 0.8741 | 0.7855 | 0.5979 | 0.7057 | 0.6826 | 0.7055 | 0.6796 | 0.7106 |
+| SER | LayoutXLMBASE | 0.7924 | 0.8973 | 0.7964 | 0.7798 | 0.8173 | 0.821 | 0.8322 | 0.8241 | 0.8201 |
+| SER | XLM-RoBERTaLARGE | 0.7151 | 0.8967 | 0.7828 | 0.6615 | 0.7407 | 0.7165 | 0.7431 | 0.7449 | 0.7502 |
+| SER | InfoXLMLARGE | 0.7246 | 0.8919 | 0.7998 | 0.6702 | 0.7376 | 0.7180 | 0.7523 | 0.7332 | 0.7534 |
+| SER | LayoutXLMLARGE | 0.8068 | 0.9155 | 0.8216 | 0.8055 | 0.8384 | 0.8372 | 0.853 | 0.8650 | 0.8429 |
+| RE | XLM-RoBERTaBASE | 0.3638 | 0.6797 | 0.6829 | 0.6828 | 0.6727 | 0.6937 | 0.6887 | 0.6082 | 0.6341 |
+| RE | InfoXLMBASE | 0.3699 | 0.6493 | 0.6473 | 0.6828 | 0.6831 | 0.6690 | 0.6384 | 0.5763 | 0.6145 |
+| RE | LayoutXLMBASE | 0.6671 | 0.8241 | 0.8142 | 0.8104 | 0.8221 | 0.8310 | 0.7854 | 0.7044 | 0.7823 |
+| RE | XLM-RoBERTaLARGE | 0.4246 | 0.7316 | 0.7350 | 0.7513 | 0.7532 | 0.7520 | 0.7111 | 0.6582 | 0.6896 |
+| RE | InfoXLMLARGE | 0.4543 | 0.7311 | 0.7510 | 0.7644 | 0.7549 | 0.7504 | 0.7356 | 0.6875 | 0.7037 |
+| RE | LayoutXLMLARGE | 0.7683 | 0.9000 | 0.8621 | 0.8592 | 0.8669 | 0.8675 | 0.8263 | 0.8160 | 0.8458 |
+
+Table 4: Multitask fine-tuning accuracy (F1) on the XFUND dataset (fine-tuning on all 8 languages, testing on X), where "SER" denotes semantic entity recognition and "RE" denotes relation extraction.
+
+which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Inspired by these vision-language pre-trained models, we introduce vision-language pre-training into the document intelligence area, where the text, layout, and image information can be jointly learned to benefit VRDU tasks.
+
+Multilingual Pre-training Multilingual pre-trained models have pushed many SOTA results on cross-lingual natural language understanding tasks by pre-training Transformer models on many languages. These models have successfully bridged the language barriers in many cross-lingual transfer benchmarks such as XNLI (Conneau et al., 2018) and XTREME (Hu et al., 2020). Devlin et al. (2019) introduced the language representation model BERT and extended it to a multilingual version called mBERT, which is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create SOTA models for a wide range of tasks. Conneau and Lample (2019) proposed two methods to learn cross-lingual language models (XLMs): one unsupervised method that only relies on monolingual data, and one supervised method that leverages parallel data with a new cross-lingual language model objective. Conneau et al. (2020) proposed to train a Transformer-based masked language model on 100 languages, using more than two terabytes of filtered CommonCrawl data, which significantly outperforms mBERT on a variety of cross-lingual benchmarks. Recently, Chi et al. (2021) formulated cross-lingual language model pre-training as maximizing mutual information between multilingual multi-granularity texts. This unified view helps to better understand existing methods for learning cross-lingual representations, and the information-theoretic framework inspires a new pre-training task based on contrastive learning. Liu et al. (2020) presented mBART, a sequence-to-sequence denoising autoencoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. Xue et al. (2021) introduced mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. The pre-trained LayoutXLM model uses these multilingual textual models as its initialization, which benefits VRDU tasks in different languages worldwide.
+
+# 7 Conclusion
+
+In this paper, we introduce the multilingual form understanding benchmark XFUND, which includes key-value labeled forms in 7 languages. Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual visually rich document understanding. We make XFUND and LayoutXLM publicly available to advance document understanding research. For future research, we will further enlarge the multilingual training data to cover more languages as well as more document layouts and templates. In addition, as there are a great number of business documents with the same content but in different languages, we will also investigate how to leverage contrastive learning over parallel documents for multilingual pre-training.
+
+# References
+
+Haithem Afli and Andy Way. 2016. Integrating optical character recognition and machine translation of historical documents. In Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH), pages 109-116, Osaka, Japan. The COLING 2016 Organizing Committee.
+Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In Computer Vision - ECCV 2020, pages 104-120, Cham. Springer International Publishing.
+Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3576-3588, Online. Association for Computational Linguistics.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
+Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057-7067.
+Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Adam W Harley, Alex Ufkes, and Konstantinos G Derpanis. 2015. Evaluation of deep convolutional nets for document image classification and retrieval. In International Conference on Document Analysis and Recognition (ICDAR).
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 4411-4421. PMLR.
+Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. 2019. Funsd: A dataset for form understanding in noisy scanned documents. 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW).
+Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.
+D. Lewis, G. Agam, S. Argamon, O. Frieder, D. Grossman, and J. Heard. 2006. Building a test collection for complex document information processing. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06, page 665-666, New York, NY, USA. Association for Computing Machinery.
+Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2020a. What does BERT with vision look at? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5265-5275, Online. Association for Computational Linguistics.
+Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020b. Oscar: Object-semantics aligned pretraining for vision-language tasks. In Computer Vision - ECCV 2020, pages 121-137, Cham. Springer International Publishing.
+Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13-23.
+
+Minesh Mathew, Dimosthenis Karatzas, and C. V. Jawahar. 2021. Docvqa: A dataset for vqa on document images. In 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 2199-2208.
+
+Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. 2019. CORD: A consolidated receipt dataset for post-OCR parsing. In Workshop on Document Intelligence at NeurIPS 2019.
+
+Tomasz Stanisławek, Filip Graliński, Anna Wróblewska, Dawid Lipinski, Agnieszka Kaliska, Paulina Rosalska, Bartosz Topolski, and Przemysław Biecek. 2021. Kleister: Key information extraction datasets involving long documents with complex layouts. In Document Analysis and Recognition - ICDAR 2021, pages 564-579, Cham. Springer International Publishing.
+
+Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: pretraining of generic visual-linguistic representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+
+Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.
+
+Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. 2021. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2579-2591, Online. Association for Computational Linguistics.
+
+Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. 2020. LayoutLM: Pre-training of text and layout for document image understanding. In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 1192-1200. ACM.
+
+Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.
+
+# A Ethical Consideration
+
+The ethical implications of research are always an important consideration for us. While pursuing better model performance and high-quality datasets, we respect the intellectual property rights of data resources, protect the privacy and rights of data sources, and strive to avoid potential harm to vulnerable populations.
+
+When crawling the documents needed to build the XFUND dataset and LayoutXLM pre-training data, we strictly follow each site's robots exclusion standard $^{4}$ to ensure we are allowed to collect data. We also manually excluded websites with privacy concerns, keeping only those pages that we had permission to edit and republish according to the permission rules.
+
+For the data used to build XFUND, we first removed all content and kept only the template, thus removing the maximum amount of sensitive content. On this basis, annotators filled in the templates using synthetic data that does not involve sensitive personal information of annotators, thus ensuring the privacy and rights of annotators. Then, we manually reviewed the templates to prevent potential privacy violations and harm to vulnerable populations. Any data that does not meet the specifications will be completely deleted.
+
+# B LayoutXLM
+
+# B.1 Pre-training Data Samples
+
+We show pre-training samples for each language in Figure 4.
+
+# B.2 Pre-training Data Distribution
+
+Figure 5 shows the complete list of pre-training languages and the distribution of pre-training data across them.
+
+
+Figure 4: Real-world business documents with different layouts and languages for pre-training LayoutXLM
+
+
+Figure 5: Language distribution of the data for pre-training LayoutXLM. We also show the document counts per language for different sampling exponents $\alpha$ .
\ No newline at end of file
diff --git a/xfundabenchmarkdatasetformultilingualvisuallyrichformunderstanding/images.zip b/xfundabenchmarkdatasetformultilingualvisuallyrichformunderstanding/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a5f204d97d69f21eea9193a8d7f615ee0991bb46
--- /dev/null
+++ b/xfundabenchmarkdatasetformultilingualvisuallyrichformunderstanding/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c721dee56f68a5b9b6ef7bdbe1828db82ee23658ad152dc72321f98d5e795c35
+size 861627
diff --git a/xfundabenchmarkdatasetformultilingualvisuallyrichformunderstanding/layout.json b/xfundabenchmarkdatasetformultilingualvisuallyrichformunderstanding/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..dc25ef01cd42cb0d24997e3785af9313dd37d51b
--- /dev/null
+++ b/xfundabenchmarkdatasetformultilingualvisuallyrichformunderstanding/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:854cf137d9a20c59abd5c66b095c7fcd085b0476e888cc3693f9bbb9da032a84
+size 285429
diff --git a/xgqacrosslingualvisualquestionanswering/7251adde-d857-44d8-9e9f-e5809c4cf3e7_content_list.json b/xgqacrosslingualvisualquestionanswering/7251adde-d857-44d8-9e9f-e5809c4cf3e7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e16c9418af54afe7b2adce9166ef3f1f8be4103c
--- /dev/null
+++ b/xgqacrosslingualvisualquestionanswering/7251adde-d857-44d8-9e9f-e5809c4cf3e7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28398fa85ba96c0032223813446c804b5e1529937a2f04fcb7166a53395cb3fa
+size 103768
diff --git a/xgqacrosslingualvisualquestionanswering/7251adde-d857-44d8-9e9f-e5809c4cf3e7_model.json b/xgqacrosslingualvisualquestionanswering/7251adde-d857-44d8-9e9f-e5809c4cf3e7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2a800d30fd5707741efb23c3b8b37291d1315a72
--- /dev/null
+++ b/xgqacrosslingualvisualquestionanswering/7251adde-d857-44d8-9e9f-e5809c4cf3e7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a8ae43202b8ee191ab648fb4aa4eef387a168f1c651f83c71675e40fd7cab1c
+size 128014
diff --git a/xgqacrosslingualvisualquestionanswering/7251adde-d857-44d8-9e9f-e5809c4cf3e7_origin.pdf b/xgqacrosslingualvisualquestionanswering/7251adde-d857-44d8-9e9f-e5809c4cf3e7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7573940a208fc30906b6b397834951dcebcb92e7
--- /dev/null
+++ b/xgqacrosslingualvisualquestionanswering/7251adde-d857-44d8-9e9f-e5809c4cf3e7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7362c9896fb0920631bc40c0eb33ed45ab1c45a1e01ad1cddda291051fd1399f
+size 2410939
diff --git a/xgqacrosslingualvisualquestionanswering/full.md b/xgqacrosslingualvisualquestionanswering/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ef21227d804714bb8bc6de18827086f76fb9dd16
--- /dev/null
+++ b/xgqacrosslingualvisualquestionanswering/full.md
@@ -0,0 +1,356 @@
+# xGQA: Cross-Lingual Visual Question Answering
+
+# Jonas Pfeiffer1, Gregor Geigle1, Aishwarya Kamath2, Jan-Martin O. Steitz3, Stefan Roth3, Ivan Vulić4, Iryna Gurevych1
+
+1Ubiquitous Knowledge Processing Lab, Technical University of Darmstadt
+
+$^{2}$ Center for Data Science, New York University
+
+3Visual Inference Lab, Technical University of Darmstadt
+
+$^{4}$ Language Technology Lab, University of Cambridge
+
+# Abstract
+
+Recent advances in multimodal vision and language modeling have predominantly focused on the English language, mostly due to the lack of multilingual multimodal datasets to steer modeling efforts. In this work, we address this gap and provide xGQA, a new multilingual evaluation benchmark for the visual question answering task. We extend the established English GQA dataset (Hudson and Manning, 2019) to 7 typologically diverse languages, enabling us to detect and explore crucial challenges in cross-lingual visual question answering. We further propose new adapter-based approaches to adapt multimodal transformer-based models to become multilingual, and—vice versa—multilingual models to become multimodal. Our proposed methods outperform current state-of-the-art multilingual multimodal models (e.g., $\mathbf{M}^3\mathbb{P}$ ) in zero-shot cross-lingual settings, but the accuracy remains low across the board; a performance drop of around 38 accuracy points in target languages showcases the difficulty of zero-shot cross-lingual transfer for this task. Our results suggest that simple cross-lingual transfer of multimodal models yields latent multilingual multimodal misalignment, calling for more sophisticated methods for vision and multilingual language modeling. $^1$
+
+# 1 Introduction
+
+Transformer-based architectures (Vaswani et al., 2017) have become ubiquitous in NLP (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020, inter alia) and in computer vision (CV) (Carion et al., 2020; Dosovitskiy et al., 2021), offering unmatched task performance. Having a shared architecture for multiple modalities opened up possibilities for effective fusion of information, yielding impressive performance gains across various multimodal tasks such as image captioning, phrase
+
+
+Figure 1: Example taken from the xGQA dataset with the same question uttered in 8 languages.
+
+grounding, visual question answering, referring expression comprehension and image-text retrieval (Lu et al., 2019; Tan and Bansal, 2019; Li et al., 2020b; Zhang et al., 2021; Ni et al., 2021; Kamath et al., 2021; Miech et al., 2021; Frank et al., 2021; Bugliarello et al., 2021; Radford et al., 2021; Jia et al., 2021; Eichenberg et al., 2021; Singh et al., 2021; Fu et al., 2021; Yang et al., 2021; Yuan et al., 2021; Wang et al., 2021a; Li et al., 2021; Geigle et al., 2022, inter alia). Yet, progress in this area has been limited mostly to the English language, as the main multimodal datasets consist only of English text. Due to the scarcity of multilingual evaluation benchmarks, there has been limited development of models that tackle this joint problem.
+
+Aiming to address this gap, in this paper we propose xGQA, a multilingual evaluation benchmark for the visual question answering task, extending the monolingual English-only GQA dataset (Hudson and Manning, 2019). For xGQA we manually translate and adapt the balanced GQA test-dev set into 7 new languages from 7 language families, covering 5 distinct scripts; see Figure 1 and Table 1. In addition, we provide new fixed data splits to guide cross-lingual few-shot learning experiments, where only a small number of examples in the target language are utilized.
+
+As pretraining is (i) notoriously computationally expensive for high-resource languages and (ii) only limited amounts of multilingual multimodal resources are available, we also propose computationally efficient adapter-based (Houlsby et al., 2019) approaches as additional baselines for constructing multilingual multimodal models. In a nutshell, we extend multimodal models pretrained only on English text (Zhang et al., 2021) to become multilingual and—vice versa—multilingual models (Devlin et al., 2019) to become multimodal. To this end, we follow the approaches of Artetxe et al. (2020) and Pfeiffer et al. (2020b, 2021) and extend monolingual and multilingual models to new languages and scripts via learning new tokenizers and corresponding word-embedding matrices, as well as adapters for the target languages. To transfer the respective multilingual multimodal adapter-based models to the target task, we propose a novel modality-specific split architecture, which uses modality-dependent adapter weights (see Figure 2 for an illustration of the architecture).
+
+Our results clearly indicate that the proposed adapter-based architecture outperforms the recent state-of-the-art pretrained multilingual multimodal $\mathbf{M}^3\mathbb{P}$ model (Ni et al., 2021) in zero-shot crosslingual settings. However, the overall performance of zero-shot transfer remains low across the board, with an average drop of around 38 accuracy points across target languages. Using a small number of target language examples in a few-shot setup considerably improves performance for all approaches, but cross-lingual transfer performance still lags substantially behind source language performance. This demonstrates the inherent difficulty of the task, even though the corresponding questions are arguably simple as they are template based and only contain 8.5 words on average (see Figure 1).
+
+Contributions. 1) We propose the first evaluation benchmark for cross-lingual visual question answering, covering 7 diverse target languages; 2) we propose novel adapter-based approaches for the creation of multilingual multimodal models; 3) we systematically benchmark state-of-the-art and new multilingual multimodal models in zero-shot and few-shot learning setups, demonstrating the difficulty of the proposed task and serving as strong reference points for future work; 4) we provide a thorough analysis of the different approaches, highlighting the aspects and question types that lead to the most common model failures, again motivating future work in this domain.
+
+# 2 Background and Related Work
+
+Multilingual Language Models. Pretrained multilingual transformer-based LMs such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) adopt the same pretraining regime as their respective monolingual counterparts: BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019). They are pretrained with a self-supervised masked language modeling (MLM) objective on concatenated text corpora of more than 100 languages, where text is tokenized using WordPiece, SentencePiece, or BytePair encodings. These multilingual models have been shown to work surprisingly well for cross-lingual tasks, despite the fact that they do not rely on direct cross-lingual supervision (e.g., parallel data, translation dictionaries; Pires et al., 2019; Wu and Dredze, 2019; Artetxe et al., 2020; Hu et al., 2020; K et al., 2020; Rust et al., 2021).
+
+Vision and Language Models. Most transformer-based multimodal models (Lu et al., 2019; Tan and Bansal, 2019; Chen et al., 2020; Li et al., 2020a; Gan et al., 2020; Li et al., 2020b; Bugliarello et al., 2021; Ni et al., 2021, inter alia) jointly encode text tokens and image region features by preprocessing images using object detection models, such as Faster R-CNN (Ren et al., 2015), to extract features for regions of interest (RoI) (Anderson et al., 2018). The image region features are passed through an affine layer, which learns to project the region features to the joint embedding space of the multimodal transformer. The bounding box coordinates of the RoI act as positional embeddings for the visual features. As such, they undergo an affine transformation to the embedding space and are combined with their respective image region representation. The position-aware image region embeddings get passed into the transformer. The multi-head attention then attends over all text and image inputs at every layer, learning a joint representation of both modalities. On the other hand, Kamath et al. (2021) avoid using object detectors as a black-box for pre-extracting these region features and instead make it a central part of the multimodal transformer architecture. Training the object detector end-to-end with the multimodal transformer adds flexibility and better representation capacity.
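+
+As a rough illustration of this input pipeline, the projection of pre-extracted region features and their bounding boxes could be sketched as follows; the dimensions and module names are our own and the actual models differ in detail:
+
+```python
+import torch
+import torch.nn as nn
+
+class RegionEmbedder(nn.Module):
+    """Sketch: project pre-extracted RoI features and their bounding boxes
+    into the transformer embedding space, then combine them."""
+    def __init__(self, feat_dim=2048, box_dim=4, hidden_size=768):
+        super().__init__()
+        self.feat_proj = nn.Linear(feat_dim, hidden_size)  # affine layer for RoI features
+        self.box_proj = nn.Linear(box_dim, hidden_size)    # affine layer for box coordinates
+
+    def forward(self, roi_feats, boxes):
+        # roi_feats: (batch, num_regions, feat_dim); boxes: (batch, num_regions, 4)
+        return self.feat_proj(roi_feats) + self.box_proj(boxes)  # position-aware region embeddings
+```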
+
+Similar to MLM, multimodal transformer-based models are trained with self-supervised objectives such as masked feature regression, masked object detection, masked attribute detection, and contrastive losses such as cross-modality matching (Tan and Bansal, 2019). Typically, image captioning datasets are used for pretraining such as COCO (Lin et al., 2014), Flickr30k (Plummer et al., 2015), Conceptual Captions (CC) (Sharma et al., 2018), and SBU (Ordonez et al., 2011). Similar to unimodal language models, the [CLS] token is used as a contextual representation for classification tasks.
+
+Multilingual multimodal models have also been proposed recently: $\mathrm{M}^{3}\mathrm{P}$ (Ni et al., 2021) is trained on the Wikipedias of 50 different languages and the English multimodal CC dataset. In order to align tokens of languages other than English with image representations, $\mathrm{M}^{3}\mathrm{P}$ utilizes a code-switching mechanism, where words of the English CC examples are randomly replaced with words from corresponding bilingual dictionaries. In UC $^{2}$ , Zhou et al. (2021) augment English multimodal datasets with other languages via machine translation and propose masked region-to-token modeling and visual translation language modeling. $^{2}$
+
+Adapters (Rebuffi et al., 2017; Houlsby et al., 2019) have been introduced as a more efficient fine-tuning strategy for transfer learning in NLP and CV. Instead of fine-tuning all the weights of a pretrained model on the target task, small feed-forward layers are introduced at each layer of the pretrained model. During task fine-tuning, only the adapter weights are updated, while the pretrained parameters remain fixed/frozen. Adapters have been shown to be very training-efficient (Rücklé et al., 2021) and, among a growing number of applications, have been used to transfer between domains (Rücklé et al., 2020) and tasks (Poth et al., 2021), as well as in machine translation (Bapna and Firat, 2019; Philip et al., 2020; Le et al., 2021) and cross-lingual transfer (Pfeiffer et al., 2020b, 2021; Üstün et al., 2020; Ansell et al., 2021, inter alia) scenarios.
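+
+A minimal bottleneck adapter of the kind described above can be sketched as follows; the hidden and bottleneck sizes are illustrative only:
+
+```python
+import torch
+import torch.nn as nn
+
+class Adapter(nn.Module):
+    """Bottleneck adapter: down-project, non-linearity, up-project, residual connection.
+    Only these parameters are updated during fine-tuning; the host model stays frozen."""
+    def __init__(self, hidden_size=768, bottleneck=48):
+        super().__init__()
+        self.down = nn.Linear(hidden_size, bottleneck)
+        self.up = nn.Linear(bottleneck, hidden_size)
+
+    def forward(self, hidden_states):
+        return hidden_states + self.up(torch.relu(self.down(hidden_states)))
+```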
+
+Datasets. Pretraining and fine-tuning data for multilingual multimodal models is typically based on (multimodal information from) Wikipedia (WikiCaps, WIT, Schamoni et al., 2018; Srinivasan et al., 2021), or on available downstream task data. Multi30k (Elliott et al., 2016) is a multilingual image captioning dataset for retrieval-type questions, covering English, German, French, and Czech; GEM (Su et al., 2021) covers image and video retrieval tasks across 20 and 30 different languages, respectively; HowTo100M (Huang et al., 2021) is a multilingual and multimodal pretraining dataset for image and video retrieval; MultiSubs (Wang et al., 2021b) focuses on fill-in-the-blank tasks and lexical translation, covering English, Spanish, German, Portuguese, and French. Gao et al. (2015) and Shimizu et al. (2018) propose bilingual visual question answering datasets covering English and Chinese, and English and Japanese, respectively. In contemporary work, Liu et al. (2021) propose MaRVL, a binary multilingual question answering dataset similar to NLVR2 (Suhr et al., 2019), spanning 5 typologically diverse languages (Chinese, Tamil, Swahili, Indonesian, and Turkish).
+
+Previous datasets predominantly focus on (arguably simpler) retrieval-type tasks, only cover a small set of similar languages (e.g., Multi30k, MultiSubs), or only cover binary questions. In contrast, we propose the first multilingual visual question answering dataset, which covers a typologically more diverse set of languages.
+
+Most recently, IGLUE (Bugliarello et al., 2022)—a multilingual multimodal benchmark that integrates xGQA—was proposed: IGLUE brings together visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages.
+
+# 3 xGQA
+
+The original English GQA dataset (Hudson and Manning, 2019) was constructed by leveraging Visual Genome scene graphs (Krishna et al., 2017). An English question engine that utilizes content (i.e. information about objects, attributes, and relations provided) and structure (a linguistic grammar that couples hundreds of structural patterns and detailed lexical semantic resources) was used to generate over 22 million diverse questions, which are visually grounded in the image scene graphs. As the questions are automatically generated using templates, they do not necessarily reflect the wide spectrum of natural language, making any assumptions on the performance in the wild difficult.
+
+Each question is associated with additional metadata such as structural types: (1) verify for yes/no questions (e.g. "Do you see any cats?"), (2) query for all open questions (e.g. "Who is wearing jeans?"), (3) choose for questions that present two alternatives to choose from (e.g. "Is it red or blue?"), (4) logical, which involve logical inference (e.g. "Is the field soft and snowy?"), and (5) compare for comparison questions between two or more objects (e.g. "Are all the animals zebras?"). For further details regarding the metadata, we refer the reader to Hudson and Manning (2019).
+
+| Language | iso | Family | Script | Speakers |
+| English | en | IE:Germanic | Latin | 400M |
+| German | de | IE:Germanic | Latin | 95M |
+| Portuguese | pt | IE:Romance | Latin | 250M |
+| Russian | ru | IE:Slavic | Cyrillic | 150M |
+| Indonesian | id | Austronesian | Latin | 43M |
+| Bengali | bn | IE:Iranian | Bengali | 230M |
+| Korean | ko | Koreanic | Korean | 77M |
+| Chinese | zh | Sino-Tibetan | Chinese | 1.2B |
+
+Table 1: Languages covered by xGQA. IE stands for Indo-European.
+
+Dataset Design. The principal objective when devising xGQA was to create a genuinely typologically diverse multimodal and multilingual evaluation benchmark for visual question answering. We utilize the balanced $^3$ test-dev set of GQA, which consists of 12,578 questions about 398 images. $^4$ Due to the defined structural patterns, the formulation of the questions is simple, with an average length of 8.5 words. $^5$ The resulting xGQA dataset covers translations in 7 languages, each representing a distinct language family, and contains examples written in 5 different scripts (see Table 1).
+
+Few-Shot Data Splits. In order to conduct cross-lingual few-shot learning experiments, we provide new data splits of different sizes. We split on images and add all questions associated with an image to the respective set. The development and test sets consist of 50 and 300 images, respectively. The training splits consist of 1, 5, 10, 20, 25, and 48 images, see Table 2. We ensure that the distribution of structural types within each set is maintained. A sketch of such an image-level split is given after Table 2.
+
+| Set | Test | Dev | Train | | | | | |
+| #Img | 300 | 50 | 1 | 5 | 10 | 20 | 25 | 48 |
+| #Ques | 9666 | 1422 | 27 | 155 | 317 | 594 | 704 | 1490 |
+
+Table 2: Few-shot dataset sizes. The GQA test-dev set is split into new development and test sets, and training splits of different sizes. We maintain the distribution of structural types in each split.
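+
+The following sketch illustrates the image-level splitting idea under simplified assumptions: the `imageId` field, the random shuffling, and the omission of structural-type balancing are placeholders, and the released fixed splits should be used as-is.
+
+```python
+import random
+from collections import defaultdict
+
+def split_by_image(questions, train_sizes=(1, 5, 10, 20, 25, 48), dev=50, test=300, seed=0):
+    """Illustrative image-level split: every question follows its image into exactly one set.
+    (The released splits additionally balance structural question types; omitted here.)"""
+    by_image = defaultdict(list)
+    for q in questions:                      # q is a dict with at least an "imageId" key
+        by_image[q["imageId"]].append(q)
+    images = sorted(by_image)
+    random.Random(seed).shuffle(images)
+    splits = {"test": images[:test], "dev": images[test:test + dev]}
+    for n in train_sizes:
+        splits[f"train_{n}"] = images[test + dev : test + dev + n]
+    return {name: [q for img in imgs for q in by_image[img]] for name, imgs in splits.items()}
+```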
+
+xGQA is the first truly typologically diverse multilingual multimodal benchmark, unlocking new experimentation and analysis opportunities in cross-lingual zero-shot and few-shot scenarios. While the questions in xGQA are intuitive and easy for humans to solve, we later show that current state-of-the-art models still have difficulty with transfer.
+
+# 4 Baselines
+
+To analyze the performance and current gaps on xGQA, we first evaluate the recently proposed $\mathrm{M}^3\mathrm{P}$ model, which has been pretrained on multilingual and multimodal data. However, pretraining is computationally expensive and only limited amounts of multilingual multimodal resources are available. Therefore, we further propose new and more efficient approaches that (1) extend state-of-the-art multilingual language models to the multimodal domain and (2) provide multilingual capabilities to state-of-the-art multimodal models.
+
+Unless noted otherwise, we follow the predominant fine-tuning strategy for GQA; a prediction head is placed on top of the output of a pretrained transformer. All 1,853 possible answers of the GQA task are mapped to class labels. The question associated with an image, together with the position-aware region features, is passed as input to the transformer, and the model is supervised using a cross-entropy loss. $^6$
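+
+A minimal sketch of such a prediction head follows; the hidden size and module names are illustrative rather than taken from any specific implementation:
+
+```python
+import torch.nn as nn
+
+class GQAClassificationHead(nn.Module):
+    """Classify the transformer's [CLS] output over the GQA answer vocabulary (1,853 classes),
+    trained with a cross-entropy loss."""
+    def __init__(self, hidden_size=768, num_answers=1853):
+        super().__init__()
+        self.classifier = nn.Linear(hidden_size, num_answers)
+        self.loss_fn = nn.CrossEntropyLoss()
+
+    def forward(self, cls_output, labels=None):
+        logits = self.classifier(cls_output)
+        loss = self.loss_fn(logits, labels) if labels is not None else None
+        return logits, loss
+```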
+
+# 4.1 Multimodal $\rightarrow$ Multilingual
+
+$\mathbf{OSCAR+}^{Emb}$. To extend a monolingual transformer LM to a multilingual domain, Artetxe et al. (2020) fine-tune a new word-embedding layer in the target language. Inspired by this idea, we now describe how we extend the current state-of-the-art monolingual multimodal transformer model OSCAR+ (Zhang et al., 2021) to learn new embeddings for the target languages.
+
+Figure 2: Architecture of an adapter-based multilingual multimodal model. Text and image inputs share the weights of the multi-head attention (MHA) and feedforward (FFN) layers, as well as the language and multimodal align adapters. Each modality is passed through a modality specific task adapter, the outputs of which are concatenated.
+
+In the language-extension phase, we replace the embedding matrix of $\mathrm{OSCAR+}$ with a randomly initialized embedding matrix. $^{7}$ The transformer weights are frozen while only the newly introduced embeddings are fine-tuned on unlabeled text data of the target language with the MLM objective.
+
+In the target-task phase, the original OSCAR+ model is fine-tuned on the English training data of GQA, where the transformer layers are fine-tuned, but the embedding layer is frozen. During inference, the embedding layer is replaced with the target language's embedding layer.
+
+$\mathbf{OSCAR+}^{Ada}$. We extend this approach by adding adapters.
+
+In the language-extension phase, we follow Pfeiffer et al. (2021) in order to extend the model to the target languages. Similar to $\mathrm{OSCAR+}^{Emb}$, we train a new embedding layer. We further add language adapters at every transformer layer. Given that $\mathrm{OSCAR+}$ is trained on English text, we follow Pfeiffer et al. (2020b) when training English language adapter modules, without replacing the embedding matrix. The transformer weights are frozen while only the newly introduced embeddings and language adapter weights are fine-tuned on unlabeled text data of the target language.
+
+For the target-task phase, we propose a novel modality-split architecture (see Figure 2) inspired by the cross-lingual transfer method of Pfeiffer et al. (2020b). At each transformer layer, text and image representations are passed through the pretrained multi-head attention (MHA) and feed-forward (FFN) layers. Both image and text representations are also passed through the pretrained language adapters. Each modality is then passed through modality-specific text and image task adapters and next through a shared multimodal alignment adapter. We follow Pfeiffer et al. (2020b), freezing transformer, embedding and language adapter weights during training, thus fine-tuning only the task and multimodal aligner adapter weights, together with the prediction head. At inference time, the embedding layer and the language adapters are replaced with the target language weights.
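+
+One layer of this modality-split design can be sketched as follows. This is a simplified stand-in: the frozen backbone layer is represented by a generic transformer encoder layer, the Adapter bottleneck module from the sketch in §2 is repeated here, and all sizes are illustrative.
+
+```python
+import torch
+import torch.nn as nn
+
+class Adapter(nn.Module):
+    """Bottleneck adapter (down-project, ReLU, up-project, residual)."""
+    def __init__(self, hidden_size=768, bottleneck=48):
+        super().__init__()
+        self.down = nn.Linear(hidden_size, bottleneck)
+        self.up = nn.Linear(bottleneck, hidden_size)
+
+    def forward(self, x):
+        return x + self.up(torch.relu(self.down(x)))
+
+class ModalitySplitLayer(nn.Module):
+    """Sketch of one layer in the modality-split architecture of Figure 2."""
+    def __init__(self, hidden_size=768, num_heads=12):
+        super().__init__()
+        # Stand-in for one pretrained (frozen) transformer layer: MHA + FFN.
+        self.backbone_layer = nn.TransformerEncoderLayer(hidden_size, num_heads, batch_first=True)
+        self.language_adapter = Adapter(hidden_size)    # pretrained language adapter (frozen)
+        self.text_task_adapter = Adapter(hidden_size)   # trained
+        self.image_task_adapter = Adapter(hidden_size)  # trained
+        self.align_adapter = Adapter(hidden_size)       # shared multimodal alignment adapter (trained)
+
+    def forward(self, text_states, image_states):
+        n_text = text_states.size(1)
+        # Both modalities attend jointly through the shared pretrained layer and language adapter ...
+        hidden = self.backbone_layer(torch.cat([text_states, image_states], dim=1))
+        hidden = self.language_adapter(hidden)
+        # ... then pass through modality-specific task adapters and a shared alignment adapter.
+        text = self.text_task_adapter(hidden[:, :n_text])
+        image = self.image_task_adapter(hidden[:, n_text:])
+        out = self.align_adapter(torch.cat([text, image], dim=1))
+        return out[:, :n_text], out[:, n_text:]
+```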
+
+# 4.2 Multilingual $\rightarrow$ Multimodal
+
+mBERT $^{Ada}$ . For experiments where we extend a multilingual model to become multimodal, we utilize mBERT (Devlin et al., 2019).
+
+Given that mBERT is able to represent many different languages, it is not necessary to learn new embedding layers for the target languages in the language-extension phase. Instead, we utilize the mBERT-compatible language adapters available on AdapterHub.ml (Pfeiffer et al., 2020a).
+
+For the target-task phase, we follow OSCAR+ for the image representation layer, where image features are combined with their respective positional information and passed through an affine transformation layer. We experiment with the same adapter architecture from Figure 2, as described for $\mathrm{OSCAR+}^{Ada}$. We again freeze transformer, embedding and language adapter weights during training. However, in contrast to $\mathrm{OSCAR+}^{Ada}$, we randomly initialize and fine-tune the affine image transformation layer. We also fine-tune the task and multimodal aligner adapter weights, and the prediction head, all on the GQA task. At inference time, the embedding layer and the language adapters are replaced with the corresponding target language weights.
+
+# 5 Experimental Setup
+
+# 5.1 Language-Extension Phase
+
+For $\mathrm{OSCAR+}^{Emb}$ and $\mathrm{OSCAR+}^{Ada}$, we follow the general setups proposed by Pfeiffer et al. (2020b, 2021). We train a new word-piece tokenizer for each target language with a vocabulary size of $30\mathrm{k}$. We fine-tune the randomly initialized embedding layer, and (for $\mathrm{OSCAR+}^{Ada}$) adapter layers, for $100\mathrm{k}$ update steps with a batch size of 64 and a learning rate of $1\mathrm{e}-4$. For $\mathrm{mBERT}^{Ada}$, we utilize the language adapters from AdapterHub.ml.
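+
+One possible way to train such a per-language WordPiece tokenizer is with the HuggingFace tokenizers library; the corpus path and output file below are placeholders, and the actual training corpora are not specified here:
+
+```python
+from tokenizers import BertWordPieceTokenizer
+
+# Train a fresh 30k WordPiece vocabulary for one target language.
+# The corpus file is a placeholder for that language's unlabeled text.
+tokenizer = BertWordPieceTokenizer(lowercase=False)
+tokenizer.train(files=["data/target_language.txt"], vocab_size=30_000)
+tokenizer.save("target_language-wordpiece.json")  # serialized tokenizer for the language-extension phase
+```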
+
+# 5.2 Fine-tuning on GQA
+
+We follow the standard setup proposed by Li et al. (2020b), passing the representation of the [CLS] token through a prediction head. We fine-tune the respective models using a cross-entropy loss with labels being all possible answers in the GQA dataset. Following prior work (Li et al., 2020b), we use a batch size of 192 and train for 5 epochs on the unbalanced GQA training portion.
+
+$\mathbf{M}^3\mathbf{P}$ . We fine-tune all weights of the pretrained model with a learning rate of $3\mathrm{e} - 5$ .
+
+$\mathbf{OSCAR}^{+Emb}$ , $\mathbf{OSCAR}^{+Ada}$ , and $\mathbf{mBERT}^{Ada}$ . We use the pretrained weights and image region features provided by Zhang et al. (2021). However, we do not pass the object attribute labels as inputs to the model. The object attribute labels are in English and utilizing them in cross-lingual scenarios is non-trivial. $^{10}$ We leave this for future work.
+
+For the $\mathrm{OSCAR+}^{Emb}$ setting, we fine-tune the transformer weights and the prediction head and freeze the embedding layer, using a learning rate of $3\mathrm{e}-5$. For the $\mathrm{OSCAR+}^{Ada}$ and $\mathrm{mBERT}^{Ada}$ settings, we add adapter layers as described in §4.1 and illustrated in Figure 2. We freeze all pretrained weights, including embeddings, transformer layers, and language adapters, and only fine-tune the newly introduced adapters and the prediction head. For $\mathrm{mBERT}^{Ada}$, we also add and train the affine image transformation layer. We fine-tune the adapter-based models with a learning rate of $1\mathrm{e}-4$.
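+
+As an illustration of this selective training, the parameter-freezing logic could be sketched as follows; the name patterns are hypothetical and depend entirely on how the modules are registered in a given implementation:
+
+```python
+def set_trainable(model, trainable_keywords=("task_adapter", "align_adapter", "img_projection", "classifier")):
+    """Freeze all pretrained weights; train only the newly added modules.
+    The keyword patterns are illustrative placeholders, not names from any released code."""
+    for name, param in model.named_parameters():
+        param.requires_grad = any(key in name for key in trainable_keywords)
+```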
+
+# 5.3 Zero-Shot Cross-Lingual Transfer
+
+For zero-shot cross-lingual evaluation, we utilize the model fine-tuned on the GQA training data and evaluate on the multilingual xGQA test data. The model checkpoint that performed best on the English GQA validation data is selected for transfer.
+
+$\mathbf{M}^3\mathbf{P}$. As the model is pre-trained to cover, among other languages, the xGQA languages, no additional steps are required for cross-lingual transfer.
+
+$\mathbf{OSCAR} + ^{Emb}$ . We replace the English embedding layer with the target-language embedding layer.
+
+$\mathbf{OSCAR+}^{Ada}$. We replace the English embedding and language adapter layers with the embedding and adapter layers of the target language.
+
+$\mathbf{mBERT}^{Ada}$. We replace the language adapter layers with the adapter layers of the target language.
+
+# 5.4 Few-Shot Cross-Lingual Transfer
+
+For few-shot cross-lingual scenarios we follow Lauscher et al. (2020) and start from the same fine-tuned model as for zero-shot transfer (see §5.3). We then fine-tune the same parts of the model as when training on the English training data in §5.2, but on the small portions of multimodal data available in the target language. We train on the different data splits, consisting of 1, 5, 10, 20, 25, and 48 images (see Table 2). We experiment with training for different numbers of epochs (5, 10) using different learning rates (1e-5 and 5e-5 for $\mathrm{M}^3\mathrm{P}$ and $\mathrm{OSCAR+}^{Emb}$, and 5e-5 and 1e-4 for $\mathrm{OSCAR+}^{Ada}$ and $\mathrm{mBERT}^{Ada}$). We find that training for longer and with the larger learning rate performs best for all settings.
+
+# 6 Results and Discussion
+
+The main results are presented in Table 3 (zero-shot experiments) and in Table 4 (few-shot).
+
+# 6.1 Zero-Shot Cross-Lingual Transfer
+
+One of our core findings is that multimodal zero-shot cross-lingual transfer is extremely difficult; we witness an average drop in accuracy of more than 38 points on the target languages of the xGQA dataset compared to English GQA scores (e.g., compare the results with $\mathbf{M}^3\mathbf{P}$ ).
+
+While, as expected, $\mathrm{OSCAR+}$ achieves the best accuracy on the English test set, the massively multilingual models, $\mathbf{M}^{3}\mathbf{P}$ and mBERT, perform considerably better in cross-lingual transfer. $^{11}$
+
+| Model | en | de | pt | ru | id | bn | ko | zh | mean |
+| M3P | 58.43 ±1.4 | 23.93 ±3.2 | 24.37 ±4.0 | 20.37 ±3.4 | 22.57 ±6.1 | 15.83 ±3.6 | 16.90 ±3.8 | 18.60 ±1.0 | 20.37 |
+| $\mathrm{OSCAR+}^{Emb}$ | 62.23 ±0.3 | 17.35 ±1.0 | 19.25 ±0.4 | 10.52 ±4.0 | 18.26 ±0.4 | 14.93 ±2.0 | 17.10 ±1.8 | 16.41 ±3.2 | 16.26 |
+| $\mathrm{OSCAR+}^{Ada}$ | 60.30 ±0.4 | 18.91 ±0.8 | 27.02 ±2.3 | 17.50 ±1.2 | 18.77 ±0.3 | 15.42 ±2.0 | 15.28 ±2.7 | 14.96 ±2.1 | 18.27 |
+| $\mathrm{mBERT}^{Ada}$ | 56.25 ±0.5 | 29.76 ±2.3 | 30.37 ±1.8 | 24.42 ±1.1 | 19.15 ±2.8 | 15.12 ±1.9 | 19.09 ±0.9 | 24.86 ±1.8 | 23.25 |
+
+Table 3: Zero-shot transfer results when transferring from English GQA. Average accuracy and standard deviation are reported. Best results are highlighted in bold; mean scores are not averaged over the source language (English).
+
+
+Figure 3: Zero-shot accuracy across different languages and structural question types from xGQA, shown separately for (a) M3P, (b) $\mathrm{OSCAR+}^{Ada}$, and (c) $\mathrm{mBERT}^{Ada}$.
+
+Figure 4: Few-shot accuracy (with 48 images) across different languages and question types from xGQA, shown separately for (a) M3P, (b) $\mathrm{OSCAR+}^{Ada}$, and (c) $\mathrm{mBERT}^{Ada}$.
+
+This indicates that joint multilingual pretraining is important and that a simple multilingual adapter-based or embedding-based extension of monolingual models achieves inferior cross-lingual performance.
+
+While the pretraining method $\mathrm{M}^3\mathrm{P}$ achieves better accuracy on the English test set, the adapter-based multimodal extension of mBERT outperforms $\mathrm{M}^3\mathrm{P}$ in cross-lingual transfer. We hypothesize that, when fine-tuning all transformer weights on monolingual multimodal data, the cross-lingual alignment breaks within $\mathrm{M}^3\mathrm{P}$ . However, this does not happen in adapter-based settings, as the multilingual weights are frozen and thus remain intact.
+
+Analysis of Structural Question Types. Figure 3 depicts our analysis of the structural question types in zero-shot experiments. We observe large drops in accuracy especially for query and choose type questions. Query type questions are free-form and thus semantically the most difficult to answer, even in the source language (English). This explains the overall low accuracy across all approaches in zero-shot settings for this question type.
+
+This is in stark contrast with the choose-type questions, which the models perform very well on in the source language. However, we report a substantial accuracy drop in zero-shot cross-lingual transfer. This decrease is most likely due to the nature of the question formulation and the modelling implementation. Choose-type questions are formulated such that the answer to the question is a word or phrase which appears in the question, i.e. "Is it red or blue?" The label classes, and consequently the prediction head, are constructed as a set of all answers appearing in the dataset. This means that the model learns a distributed representation of each answer in its final layer. Consequently, in cross-lingual transfer, the model is required to automatically align the question's options "red" or "blue" (translated into their respective language) with the English latent representations in the model's prediction head. The very low results in this category indicate that this cross-lingual word alignment breaks in zero-shot scenarios.
+
+| Lang | Model | 0 | 1 | 5 | 10 | 20 | 25 | 48 |
+| de | M3P | 24.78 | 31.49 | 39.31 | 41.05 | 42.22 | 42.54 | 43.16 |
+| de | $\mathrm{OSCAR+}^{Emb}$ | 17.49 | 17.84 | 29.09 | 34.48 | 37.35 | 38.45 | 41.08 |
+| de | $\mathrm{OSCAR+}^{Ada}$ | 17.84 | 21.40 | 31.26 | 35.84 | 37.92 | 38.46 | 40.58 |
+| de | $\mathrm{mBERT}^{Ada}$ | 32.41 | 33.87 | 37.44 | 39.15 | 40.65 | 41.63 | 42.71 |
+| pt | M3P | 26.73 | 32.98 | 37.23 | 39.07 | 40.92 | 41.05 | 43.06 |
+| pt | $\mathrm{OSCAR+}^{Emb}$ | 19.36 | 22.55 | 32.42 | 36.37 | 39.01 | 40.15 | 43.27 |
+| pt | $\mathrm{OSCAR+}^{Ada}$ | 24.58 | 29.61 | 34.73 | 37.46 | 38.82 | 39.70 | 41.75 |
+| pt | $\mathrm{mBERT}^{Ada}$ | 31.45 | 33.27 | 37.31 | 38.88 | 40.51 | 41.03 | 42.62 |
+| ru | M3P | 24.29 | 32.32 | 36.71 | 38.53 | 39.94 | 40.13 | 41.85 |
+| ru | $\mathrm{OSCAR+}^{Emb}$ | 7.98 | 17.32 | 23.72 | 28.21 | 32.15 | 32.87 | 36.84 |
+| ru | $\mathrm{OSCAR+}^{Ada}$ | 16.38 | 19.74 | 27.42 | 30.17 | 33.22 | 34.21 | 37.28 |
+| ru | $\mathrm{mBERT}^{Ada}$ | 25.51 | 26.47 | 31.69 | 32.47 | 34.93 | 35.53 | 37.42 |
+| id | M3P | 18.74 | 31.37 | 37.24 | 38.65 | 41.07 | 42.00 | 43.12 |
+| id | $\mathrm{OSCAR+}^{Emb}$ | 17.89 | 21.09 | 29.76 | 33.59 | 36.69 | 37.31 | 40.51 |
+| id | $\mathrm{OSCAR+}^{Ada}$ | 18.52 | 23.94 | 31.45 | 34.60 | 37.26 | 37.97 | 40.60 |
+| id | $\mathrm{mBERT}^{Ada}$ | 19.77 | 31.99 | 34.49 | 36.26 | 39.15 | 39.81 | 40.88 |
+| bn | M3P | 17.59 | 17.33 | 26.94 | 31.09 | 34.58 | 35.27 | 37.96 |
+| bn | $\mathrm{OSCAR+}^{Emb}$ | 13.35 | 17.40 | 21.67 | 26.61 | 31.94 | 32.78 | 36.97 |
+| bn | $\mathrm{OSCAR+}^{Ada}$ | 13.96 | 15.60 | 22.35 | 27.20 | 31.25 | 31.81 | 35.45 |
+| bn | $\mathrm{mBERT}^{Ada}$ | 13.38 | 11.33 | 23.10 | 26.55 | 31.60 | 32.26 | 34.18 |
+| ko | M3P | 19.70 | 22.94 | 32.28 | 35.50 | 37.72 | 37.84 | 38.61 |
+| ko | $\mathrm{OSCAR+}^{Emb}$ | 15.11 | 16.43 | 19.99 | 24.78 | 29.48 | 30.43 | 35.59 |
+| ko | $\mathrm{OSCAR+}^{Ada}$ | 12.25 | 15.48 | 20.73 | 25.97 | 31.37 | 32.20 | 35.41 |
+| ko | $\mathrm{mBERT}^{Ada}$ | 19.92 | 17.71 | 27.83 | 31.27 | 34.44 | 35.03 | 36.51 |
+| zh | M3P | 19.66 | 27.76 | 36.15 | 38.21 | 40.48 | 40.53 | 42.55 |
+| zh | $\mathrm{OSCAR+}^{Emb}$ | 12.66 | 14.77 | 19.17 | 22.13 | 27.97 | 29.08 | 33.24 |
+| zh | $\mathrm{OSCAR+}^{Ada}$ | 13.20 | 15.12 | 19.67 | 22.74 | 26.81 | 28.19 | 31.69 |
+| zh | $\mathrm{mBERT}^{Ada}$ | 26.16 | 23.47 | 32.93 | 35.82 | 38.22 | 37.89 | 39.57 |
+
+Table 4: Average accuracy of few-shot results, utilizing different amounts of training data (column headers give the number of training images). The 0 column presents the best zero-shot results; these models are used as initialization for the subsequent few-shot experiments. Bold numbers indicate the best scores.
+
+Overall, zero-shot transfer with our proposed multimodal adapter-based extension of mBERT ($\mathrm{mBERT}^{Ada}$) achieves the best accuracy, with an increase of almost 3 points over $\mathbf{M}^3\mathbf{P}$ and almost 5 points over OSCAR+. However, the overall accuracy of all approaches remains low in comparison to the results in English. This indicates that zero-shot multimodal cross-lingual transfer is extremely difficult, most likely due to the misalignment between visual and cross-lingual internal representations. To investigate this conjecture further, we run similar tests in few-shot setups, which should potentially mitigate the misalignment issue observed in zero-shot setups.
+
+# 6.2 Few-Shot Cross-Lingual Transfer
+
+The main results of few-shot experiments are provided in Table 4, while the plot illustrating the impact of different amounts of training data is shown in Figure 5. One crucial finding is that, as expected, utilizing an increasing number of data instances in the target language consistently improves accuracy for all methods. This culminates in an improvement of up to 20 accuracy points when specializing the model with only 48 images in the target language. This indicates that a small number of target-language examples supports the models in partially repairing their internal cross-lingual multimodal alignment. Interestingly, we find that with as little as 5 images, and their corresponding questions, $\mathrm{M}^{3}\mathrm{P}$ begins to outperform $\mathrm{mBERT}^{Ada}$, the best performing zero-shot model.
+
+We again analyze the impact of few-shot learning on accuracy across different structural question types, with the results depicted in Figure 4. The overall accuracy increases across all types compared to zero-shot scenarios (cf. Figure 3). However, the most pronounced gains are reported for query and choose-type questions, on which the models performed the worst in zero-shot setups. This implies an improved alignment between latent multimodal and multilingual representations, achieved via fine-tuning the model on a small number of examples in the target language.
+
+# 6.3 Language Transfer
+
+We witness cross-lingual transfer capability patterns similar to those shown by previous work, where our models perform best on typologically close languages (Pires et al., 2019; Lauscher et al., 2020). Our models transfer best to German (de) and Portuguese (pt), both being part of the Indo-European (IE) language family and also sharing the same script (Latin) with the source language English (en). We see a small drop in accuracy for Russian (ru), Indonesian (id), and Chinese (zh), and a larger drop in accuracy for Bengali (bn) and Korean (ko). All of these languages are typologically different from the source language and in most cases do not share the same script. These differences highlight the importance of language diversity in cross-lingual transfer. Our benchmark thus enables experimentation and evaluation of multilingual multimodal models on a representative set of truly typologically diverse languages.
+
+Figure 5: Few-shot accuracy with different training dataset sizes of the different approaches. Scores are averaged over all languages.
+
+# 7 Contemporary Work
+
+With the recent rise in interest in multilingual vision and language learning, contemporary work has already further analyzed and extended the proposed xGQA dataset. We provide a brief description and pointers to this work in what follows.
+
+Further Analysis. Liu et al. (2022) provide an extensive analysis of multilingual and multimodal models trained on cross-lingual visual question answering, and propose several approaches to mitigate the multilingual misalignment problem discussed in §6.1. Their results suggest that standard approaches taken from text-only cross-lingual transfer scenarios (Pires et al., 2019; Hu et al., 2020) do not leverage the full multilingual capability of the pretrained models. Interestingly, they find that a deeper prediction head does not have any measurable impact on the model's performance in the source language, while at the same time it considerably improves zero-shot transfer results across all target languages.
+
+Translated Test Data. Bugliarello et al. (2022) propose the first benchmark for transfer learning across modalities, tasks, and languages, covering visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages. They extend the xGQA dataset by providing machine-translated test-set questions and evaluate state-of-the-art monolingual multimodal models in a translate-test setup. In this setting, they achieve slightly better results; however, performance still falls behind source-language performance. The translate-test data can be found at iglue-benchmark.github.io.
+
+# 8 Conclusion
+
+We have proposed xGQA, a first cross-lingual evaluation benchmark for the visual question answering task. xGQA extends the English GQA dataset with development and test data in 7 more typologically diverse languages, covering 5 different scripts. As additional baselines, we have further proposed new adapter-based methods to extend unimodal multilingual models to become multimodal and, vice versa, monolingual multimodal models to become multilingual. Our results have indicated that 1) efficient adapter-based methods slightly outperform the pretrained multilingual multimodal model $\mathbf{M}^{3}\mathbf{P}$ in zero-shot scenarios, but 2) the overall zero-shot cross-lingual transfer yields severe accuracy drops compared to the English performance for all models in the comparison. Further, accuracy can be partially recovered via few-shot learning, where small amounts of training data are available in the target language. However, large gaps remain, highlighting the inherent complexity of the cross-lingual task despite it being extremely intuitive and easy to solve for (bilingual) humans.
+
+We hope that our dataset and error analysis will motivate future work on this task and, more broadly, in the exciting emerging domain of multilingual multimodal representation learning.
+
+# Acknowledgments
+
+The Ubiquitous Knowledge Processing Lab acknowledges the financial support of the German Federal Ministry of Education and Research (BMBF) under the promotional reference 13N15897 (MISRIK), and the LOEWE initiative (Hesse, Germany) within the emergenCITY center. Jan-Martin O. Steitz is supported by the LOEWE initiative (Hesse, Germany) within the emergenCITY center. The work of Ivan Vulic is supported by a Huawei research donation and the ERC PoC Grant MultiConvAI: Enabling Multilingual Conversational AI (no. 957356). Stefan Roth is additionally supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 866008).
+
+We thank Leonardo F. R. Ribeiro, Ji-Ung Lee, and Chen Liu for insightful feedback and suggestions on a draft of this paper.
+
+# References
+
+P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6077-6086.
+
+Alan Ansell, Edoardo Maria Ponti, Jonas Pfeiffer, Sebastian Ruder, Goran Glavaš, Ivan Vulić, and Anna Korhonen. 2021. MAD-G: Multilingual adapter generation for efficient cross-lingual transfer. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4762–4781, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.
+Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538-1548, Hong Kong, China. Association for Computational Linguistics.
+Emanuele Bugliarello, Ryan Cotterell, Naoaki Okazaki, and Desmond Elliott. 2021. Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs. Transactions of the Association for Computational Linguistics, 9:978-994.
+Emanuele Bugliarello, Fangyu Liu, Jonas Pfeiffer, Siva Reddy, Desmond Elliott, Edoardo Maria Ponti, and Ivan Vulic. 2022. IGLUE: A benchmark for transfer learning across modalities, tasks, and languages. arXiv preprint.
+Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I, volume 12346 of Lecture Notes in Computer Science, pages 213-229. Springer.
+Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. UNITER: universal image-text representation learning. In Computer Vision - ECCV 2020-16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXX, volume 12375 of Lecture Notes in Computer Science, pages 104-120. Springer.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 8440-8451. Association for Computational Linguistics.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
+Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
+Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, and Anette Frank. 2021. MAGMA - multimodal augmentation of generative models through adapter-based finetuning. arXiv preprint.
+Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30K: Multilingual English-German image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70-74, Berlin, Germany. Association for Computational Linguistics.
+Stella Frank, Emanuele Bugliarello, and Desmond Elliott. 2021. Vision-and-language or vision-for-language? on cross-modal influence in multimodal transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 9847-9857. Association for Computational Linguistics.
+Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, and Zicheng Liu. 2021. VIOLET: End-to-end video-language transformers with masked visual-token modeling. arXiv preprint.
+Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale adversarial training for vision-and-language representation learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. 2015. Are you talking to a machine? dataset and methods for multilingual image question answering. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, NIPS'15, page 2296-2304, Cambridge, MA, USA. MIT Press.
+
+Gregor Geigle, Jonas Pfeiffer, Nils Reimers, Ivan Vulic, and Iryna Gurevych. 2022. Retrieve fast, rerank smart: Cooperative and joint approaches for improved cross-modal retrieval. Transactions of the Association for Computational Linguistics.
+Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4411-4421. PMLR.
+Po-Yao Huang, Mandela Patrick, Junjie Hu, Graham Neubig, Florian Metze, and Alexander Hauptmann. 2021. Multilingual multimodal pre-training for zero-shot cross-lingual transfer of vision-language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2443-2459, Online. Association for Computational Linguistics.
+Drew A. Hudson and Christopher D. Manning. 2019. GQA: A new dataset for real-world visual reasoning and compositional question answering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 6700-6709. Computer Vision Foundation / IEEE.
+Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 4904-4916. PMLR.
+Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual BERT: an empirical study. In Proceedings of the 8th International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia. Open-Review.net.
+Aishwarya Kamath, Mannat Singh, Yann LeCun, Ishan Misra, Gabriel Synnaeve, and Nicolas Carion. 2021. MDETR - modulated detection for end-to-end multimodal understanding. In 2021 IEEE International Conference on Computer Vision, ICCV 2021, Online, October 10-17, 2021.
+
+Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73.
+Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483-4499, Online. Association for Computational Linguistics.
+Hang Le, Juan Miguel Pino, Changhan Wang, Jiatao Gu, Didier Schwab, and Laurent Besacier. 2021. Lightweight adapter tuning for multilingual speech translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 817-824. Association for Computational Linguistics.
+Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. 2020a. Unicoder-vl: A universal encoder for vision and language by cross-modal pretraining. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 11336-11344. AAAI Press.
+Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. 2021. Grounded language-image pre-training. arXiv preprint.
+Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020b. Oscar: Object-semantics aligned pretraining for vision-language tasks. In Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXX, volume 12375 of Lecture Notes in Computer Science, pages 121-137. Springer.
+Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. In Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, volume 8693 of Lecture Notes in Computer Science, pages 740-755. Springer.
+Chen Liu, Jonas Pfeiffer, Anna Korhonen, Ivan Vulić, and Iryna Gurevych. 2022. Delving deeper into
+
+cross-lingual visual question answering. arXiv preprint.
+Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, and Desmond Elliott. 2021. Visually grounded reasoning across languages and cultures. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Online, November, 2021.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. arXiv preprint.
+Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 13-23.
+Antoine Miech, Jean-Baptiste Alayrac, Ivan Laptev, Josef Sivic, and Andrew Zisserman. 2021. Thinking fast and slow: Efficient text-to-visual retrieval with transformers. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 9826-9836. Computer Vision Foundation / IEEE.
+Minheng Ni, Haoyang Huang, Lin Su, Edward Cui, Taroon Bharti, Lijuan Wang, Dongdong Zhang, and Nan Duan. 2021. M3P: learning universal representations via multitask multilingual multimodal pretraining. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 3977-3986. Computer Vision Foundation / IEEE.
+Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. 2011. Im2text: Describing images using 1 million captioned photographs. In Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011. Proceedings of a meeting held 12-14 December 2011, Granada, Spain, pages 1143-1151.
+Jonas Pfeiffer, Andreas Rückle, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020a. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, Online. Association for Computational Linguistics.
+Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2020b. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP),
+
+pages 7654-7673, Online. Association for Computational Linguistics.
+Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2021. UNKs Everywhere: Adapting Multilingual Language Models to New Scripts. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 10186-10203. Association for Computational Linguistics.
+Jerin Philip, Alexandre Berard, Matthias Galle, and Laurent Besacier. 2020. Monolingual adapters for zero-shot neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4465-4470, Online. Association for Computational Linguistics.
+Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics.
+Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2641-2649.
+Clifton Poth, Jonas Pfeiffer, Andreas Rückle, and Iryna Gurevych. 2021. What to Pre-Train on? Efficient Intermediate Task Selection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Online, November, 2021.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8748-8763. PMLR.
+Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 506-516.
+Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems
+
+28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 91-99.
+Andreas Rückle, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2021. AdapterDrop: On the Efficiency of Adapters in Transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 7930-7946. Association for Computational Linguistics.
+Andreas Rücklé, Jonas Pfeiffer, and Iryna Gurevych. 2020. MultiCQA: Zero-shot transfer of self-supervised text matching models on a massive scale. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2471-2486, Online. Association for Computational Linguistics.
+Phillip Rust, Jonas Pfeiffer, Ivan Vulic, Sebastian Ruder, and Iryna Gurevych. 2021. How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, ACL 2021, Online, August 1-6, 2021. Association for Computational Linguistics.
+Shigehiko Schamoni, Julian Hitschler, and Stefan Riezler. 2018. A dataset and reranking method for multimodal MT of user-generated image captions. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas, AMTA 2018, Boston, MA, USA, March 17-21, 2018 - Volume 1: Research Papers, pages 140-153. Association for Machine Translation in the Americas.
+Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565, Melbourne, Australia. Association for Computational Linguistics.
+Nobuyuki Shimizu, Na Rong, and Takashi Miyazaki. 2018. Visual question answering dataset for bilingual image understanding: A study of cross-lingual transfer using attention maps. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 1918-1928. Association for Computational Linguistics.
+Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 2021. FLAVA: A foundational language and vision alignment model. arXiv preprint.
+Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, and Marc Najork. 2021. WIT:
+
+wikipedia-based image text dataset for multimodal multilingual machine learning. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 2443-2449. ACM.
+Lin Su, Nan Duan, Edward Cui, Lei Ji, Chenfei Wu, Huaishao Luo, Yongfei Liu, Ming Zhong, Taroon Bharti, and Arun Sacheti. 2021. GEM: A general evaluation benchmark for multimodal tasks. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, pages 2594-2603. Association for Computational Linguistics.
+Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in photographs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6418-6428, Florence, Italy. Association for Computational Linguistics.
+Hao Tan and Mohit Bansal. 2019. LXMERT: learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5099-5110. Association for Computational Linguistics.
+Ahmet Üstün, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language adaptation for truly Universal Dependency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2302-2315, Online. Association for Computational Linguistics.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
+Jianfeng Wang, Xiaowei Hu, Zhe Gan, Zhengyuan Yang, Xiyang Dai, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2021a. UFO: A unified transformer for vision-language representation learning. arXiv preprint.
+Josiah Wang, Pranava Madhyastha, Josiel Figueiredo, Chiraag Lala, and Lucia Specia. 2021b. Multisubs: A large-scale multimodal and multilingual dataset. arXiv preprint.
+Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing
+
+and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguistics.
+
+Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2021. Crossing the format boundary of text and boxes: Towards unified vision-language modeling. arXiv preprint.
+
+Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, and Pengchuan Zhang. 2021. Florence: A new foundation model for computer vision. arXiv preprint.
+
+Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. VinVL: Revisiting Visual Representations in Vision-Language Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5579-5588.
+
+Mingyang Zhou, Luowei Zhou, Shuohang Wang, Yu Cheng, Linjie Li, Zhou Yu, and Jingjing Liu. 2021. UC2: universal cross-lingual cross-modal vision-and-language pre-training. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 4155-4165. Computer Vision Foundation / IEEE.
+
+# A Appendix
+
+We experiment with different multimodal adapter architectures as illustrated in Figure 6. In initial experiments we find that splitting the modalities (settings 2-5) outperforms a joint adapter (setting 1). However, architectures with a joint "alignment" adapter (settings 4-5) outperform settings where we only use modality-specific adapters (settings 2-3). We more thoroughly investigate settings 4-5 and report scores in Table 5. Interestingly, we find that when only using the language adapter for the textual inputs, cross-lingual accuracy drops for both $\mathrm{OSCAR^{+}}$ and mBERT; the difference is more pronounced for $\mathrm{OSCAR^{+}}$. We speculate that this is due to a latent misalignment of the representation spaces, partly caused by the residual connection. Due to the better performance of setting 5 on average, we report scores of this architecture in the main paper (as illustrated in Figure 2).
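+
+As a rough illustration of this modality splitting, the PyTorch sketch below routes text and image positions through separate bottleneck adapters and then through a shared "alignment" adapter, loosely corresponding to settings 4-5. The class names, the `modality_mask` interface, and the reduction factor are illustrative assumptions, not the exact architectures depicted in Figure 6.
+
+```python
+import torch
+import torch.nn as nn
+
+
+class BottleneckAdapter(nn.Module):
+    """Standard bottleneck adapter: down-project, non-linearity, up-project, residual."""
+
+    def __init__(self, hidden_size: int, reduction: int = 16):
+        super().__init__()
+        self.down = nn.Linear(hidden_size, hidden_size // reduction)
+        self.up = nn.Linear(hidden_size // reduction, hidden_size)
+        self.act = nn.GELU()
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        return x + self.up(self.act(self.down(x)))
+
+
+class SplitModalityAdapterBlock(nn.Module):
+    """Modality-specific adapters followed by a shared "alignment" adapter.
+
+    `modality_mask` is True for text positions and False for image-region
+    positions of the joint input sequence (hypothetical interface).
+    """
+
+    def __init__(self, hidden_size: int = 768):
+        super().__init__()
+        self.text_adapter = BottleneckAdapter(hidden_size)
+        self.image_adapter = BottleneckAdapter(hidden_size)
+        self.alignment_adapter = BottleneckAdapter(hidden_size)
+
+    def forward(self, hidden: torch.Tensor, modality_mask: torch.Tensor) -> torch.Tensor:
+        # Both adapters run on the full sequence for simplicity; the mask
+        # selects which output is kept at each position.
+        routed = torch.where(
+            modality_mask.unsqueeze(-1),
+            self.text_adapter(hidden),
+            self.image_adapter(hidden),
+        )
+        return self.alignment_adapter(routed)
+```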
+
+| model | Setting | en | de | pt | ru | id | bn | ko | zh | mean |
+| \(\mathsf{OSCAR}^{+\mathsf{Ada}}\) | 4 | 60.21 | 18.60 | 25.48 | 8.22 | 17.79 | 10.47 | 9.97 | 12.54 | 14.72 |
+| \(\mathsf{OSCAR}^{+\mathsf{Ada}}\) | 5 | 60.30 | 18.91 | 27.02 | 17.50 | 18.77 | 15.42 | 15.28 | 14.96 | 18.27 |
+| \(\mathsf{mBERT}^{\mathsf{Ada}}\) | 4 | 57.83 | 27.86 | 28.88 | 22.87 | 20.86 | 14.74 | 18.30 | 24.39 | 22.56 |
+| \(\mathsf{mBERT}^{\mathsf{Ada}}\) | 5 | 56.25 | 29.76 | 30.37 | 24.42 | 19.15 | 15.12 | 19.09 | 24.86 | 23.25 |
+
+Table 5: Zero-shot transfer results on xGQA for the different adapter architecture settings (as illustrated in Figure 6) when transferring from English GQA. Average accuracy is reported. Best results for each language and model type are highlighted in bold; mean scores are not averaged over the source language (English).
+
+
+Figure 6: The different multimodal multilingual adapter architectures we experimented with: (a) Setting 1, (b) Setting 2, (c) Setting 3, (d) Setting 4, (e) Setting 5. The best-performing architecture was setting 5, for which we present results in the main paper.
\ No newline at end of file
diff --git a/yourfairnessmayvarypretrainedlanguagemodelfairnessintoxictextclassification/full.md b/yourfairnessmayvarypretrainedlanguagemodelfairnessintoxictextclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..03df6253f7b4c63871a118d8e344504dd5c7db8b
--- /dev/null
+++ b/yourfairnessmayvarypretrainedlanguagemodelfairnessintoxictextclassification/full.md
@@ -0,0 +1,443 @@
+# Your fairness may vary: Pretrained language model fairness in toxic text classification
+
+Ioana Baldini Dennis Wei Karthikeyan Natesan Ramamurthy Mikhail Yurochkin Moninder Singh
+
+IBM Research
+
+{ioana,dwei,knatesa,moninder}@us.ibm.com
+
+mikhail.yurochkin@ibm.com
+
+# Abstract
+
+The popularity of pretrained language models in natural language processing systems calls for a careful evaluation of such models in down-stream tasks, which have a higher potential for societal impact. The evaluation of such systems usually focuses on accuracy measures. Our findings in this paper call for attention to be paid to fairness measures as well. Through the analysis of more than a dozen pretrained language models of varying sizes on two toxic text classification tasks (English), we demonstrate that focusing on accuracy measures alone can lead to models with wide variation in fairness characteristics. Specifically, we observe that fairness can vary even more than accuracy with increasing training data size and different random initializations. At the same time, we find that little of the fairness variation is explained by model size, despite claims in the literature. To improve model fairness without retraining, we show that two post-processing methods developed for structured, tabular data can be successfully applied to a range of pretrained language models. Warning: This paper contains samples of offensive text.
+
+# 1 Introduction
+
+Pre-trained, bidirectional language models (LMs) (Devlin et al., 2019; Liu et al., 2019; Radford et al., 2019; Clark et al., 2020; He et al., 2021) have revolutionized natural language processing (NLP) research. LMs have provided a route to significant performance increases in several NLP tasks as demonstrated by NLP leaderboards (Rajpurkar et al., 2018; Wang et al., 2019a,b; AI2, 2021). More importantly, LMs have been applied to practical problems, leading to improved results for web search (Nayak, 2019) and have become an asset in
+
+fields such as medical evidence inference (Lehman et al., 2019; Subramanian et al., 2020) and chemistry (Schwaller et al., 2021). While the progress in NLP tasks due to LMs is clear, the reasons behind this success are not as well understood (Rogers et al., 2021; McCoy et al., 2019), and there are also important downsides. In particular, several studies have documented the bias of LMs (Bolukbasi et al., 2016; Hutchinson et al., 2020; Webster et al., 2020; Borkan et al., 2019; de Vassimon Manela et al., 2021) and others discuss potential societal harms (Blodgett et al., 2020; Bender et al., 2021) for individuals or groups. We use the term bias to refer to systematic disparity in representation or outcomes for individuals based on their membership in certain protected groups such as gender, race, and ethnicity.
+
+In this work, we focus on one important application of fine-tuned LMs, toxic text classification. Text toxicity predictors are already used in deployed systems (Perspective API, 2021) and they are a crucial component for content moderation since online harassment is on the rise (Vogels, 2021). In downstream applications such as toxic text classification, it is important to examine the behavior of LMs in terms of measures other than task-specific accuracy. This provides a more holistic understanding of model performance and appropriate uses of LMs for these tasks. As a first step toward this goal, we provide herein an empirical characterization of LMs for the task of toxic text classification using a combination of accuracy and bias measures, and study two post-processing methods for bias mitigation that have proved successful for structured, tabular data. For assessing bias, in this paper, we focus on group fairness, which we explain in Section 2 as it applies in general in machine learning, and discuss what it means in the context of NLP tasks in the same section. The implications of measuring group fairness for the toxicity classification task studied in this paper are
+
+described in Section 3.
+
+One aspect of LMs that is hard to ignore is the increase in their size, as measured by the number of parameters in their architectures. In general, larger LMs seem to perform better on NLP tasks as they have the capacity to capture more complex correlations present in the training data. Bender et al. (2021) claim that this same property may also lead to more pronounced biases in their predictions, as the large data that LMs are trained on is not curated. On the other hand, for image classification models that use large neural networks, Hooker et al. (2020) discuss how model pruning can lead to more biased predictions. In this work, we consider a wide variety of model architectures and sizes. We acknowledge that size is relative and what we consider large in this paper may not be considered as such in a different context.
+
+We address the following questions regarding the effect of various factors on model performance:
+
+1. Model size: How do the accuracy and group fairness of fine-tuned LM-based classifiers vary with their size?
+2. Random seeds: LMs that start from different random initializations can behave differently in classification. What is the effect of random seeds on the accuracy-fairness relationship?
+3. Data size: The size of fine-tuning data is also an important dimension alongside model size. What happens to accuracy and fairness when more/less data is used for fine-tuning?
+4. Bias mitigation via post-processing: Given the expense of training and fine-tuning large LMs, to what extent can we mitigate bias by only post-processing LM outputs?
+
+We study the accuracy-fairness relationship in more than a dozen fine-tuned LMs for two different datasets that deal with prediction of text toxicity. The key contributions of our analysis are:
+
+1. We empirically show that no blanket statement can be made regarding the fairness characteristics of fine-tuned LMs with respect to their size. It really depends on the combination of LM, task, and dataset.
+2. We find that optimizing for accuracy measures alone can lead to models with wide variation in fairness characteristics. Specifically:
+
+(a) While increasing data size for fine-tuning does not improve accuracy much beyond a point, the improvement in fairness is more significant and may continue after the improvement in accuracy has stopped for certain datasets and tasks. This suggests that choosing data sizes based on accuracy alone could lead to suboptimal performance with respect to fairness.
+
+(b) While accuracy measures are known to vary with different random initializations (Dodge et al., 2020), the variation in fairness measures can be even greater.
+
+3. We demonstrate that post-processing bias mitigation is an effective, computationally affordable solution to enhance fairness in fine-tuned LMs. In particular, one of the methods we experimented with allows for a large accuracy-fairness tradeoff space, leading to relative improvements of $50\%$ for fairness, as measured by equalized odds, while reducing accuracy only by $2\%$ (see Figure 8 religion group).
+
+Our observations strengthen the chorus of recent work addressing bias mitigation in NLP in calling for a careful empirical analysis of fairness with fine-tuned LMs in the context of their application. To allow group fairness analysis, annotations of group membership are preferred and sometimes required, and, thus, we urge the research community to include protected group annotations in datasets to enable extrinsic fairness evaluations that are as close as possible to the point of deployment.
+
+# 2 Background and related work
+
+# 2.1 Fairness in machine learning
+
+As machine learning models have become routinely deployed in practice, many studies noticed their tendency to perform unfairly in various contexts (Angwin et al., 2016, 2017; Buolamwini and Gebru, 2018; Park et al., 2021). To understand and measure model bias, researchers have proposed many definitions of algorithmic fairness. Broadly speaking, they fall into two categories: group fairness (Chouldechova and Roth, 2018) and individual fairness (Dwork et al., 2012). At a high level, group fairness requires similar average outcomes on different groups of individuals considered, for example comparable university acceptance rates across ethnicities. Individual fairness requires similar outcomes for similar individuals, e.g. two university applicants with similar credentials, but different ethnicity, gender, family background, etc.,
+
+should either be both accepted or both rejected. In this paper we consider group fairness, noting that both have their pros and cons (Chouldechova and Roth, 2018; Dwork et al., 2012).
+
+There are many definitions of group fairness and we refer to Verma and Rubin (2018) for a comprehensive overview and to Czarnowska et al. (2021) for a discussion of metrics in the context of measuring social biases in NLP. Statistical parity (SP) is one of the earlier definitions which requires the output of a model to be independent of the sensitive attribute, such as race or gender. In other words, the average outcome (e.g. prediction) across groups defined by the sensitive attribute needs to be similar. An alternative measure is equalized odds (EO) (Hardt et al., 2016), which requires the model output conditioned on the true label to be independent of the sensitive attribute. The violation of conditional independence for a given label (positive or negative) can be measured by the difference in accuracy across sensitive groups conditioned on that label. Taking the maximum or an average (average EO) of these label-specific differences quantifies the overall EO violation.
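+
+In symbols, for a binary predictor $\hat{Y}$, true label $Y$, and a binary sensitive attribute $A$ (notation assumed here purely for illustration), the statistical parity difference and the maximum-form equalized odds violation can be written as
+
+$$\Delta_{\mathrm{SP}} = \big|\Pr(\hat{Y}=1 \mid A=1) - \Pr(\hat{Y}=1 \mid A=0)\big|,$$
+
+$$\Delta_{\mathrm{EO}} = \max_{y \in \{0,1\}} \big|\Pr(\hat{Y}=1 \mid Y=y, A=1) - \Pr(\hat{Y}=1 \mid Y=y, A=0)\big|,$$
+
+where replacing the maximum over $y$ with a mean gives the average EO variant mentioned above.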
+
+Many methods for achieving group fairness have been proposed. These methods are typically categorized as follows: (a) modifying the training data (pre-processing), (b) incorporating fairness constraints while training the model (in-processing), and (c) transforming the model output to enhance fairness (post-processing). A summary and implementation of group bias mitigation approaches are discussed in Bellamy et al. (2019). In this study, we investigate the use of post-processing methods to enhance fairness in classification tasks. We chose post-processing approaches since they do not require modification of training data or model training procedures, and, hence, can be efficiently applied to all LMs we consider. In addition, post-processing approaches could minimize the environmental impact of re-training/fine-tuning LMs (Patterson et al., 2021; Strubell et al., 2019). We consider two post-processing approaches proposed by Wei et al. (2020) and Hardt et al. (2016), which have shown considerable success in mitigating bias for tabular data. Wei et al. (2020) optimize a score (predicted probability) transformation function to satisfy fairness constraints that are linear in conditional means of scores while minimizing a cross-entropy objective. Hardt et al. (2016) propose to solve a linear program to find probabilities with
+
+which to change the predicted output labels such that the equalized odds violation is minimized.
+
+# 2.2 Fairness in Natural Language Processing
+
+In NLP systems, bias is broadly understood in two categories, intrinsic and extrinsic. Intrinsic bias refers to bias inherent in the representations, e.g. word embeddings used in NLP (Bolukbasi et al., 2016). Extrinsic bias refers to bias in downstream tasks, such as disparity in false positive rates across groups defined by sensitive attributes in a specified application/task. The concepts of intrinsic and extrinsic bias also correlate well with the notions of representational and allocative harms. While allocative harms arise from disparities across different groups in terms of decisions that lead to allocation of benefits/harms, representational harms are those perpetuated by representation of individuals in the feature space (Crawford, 2017). Abbasi et al. (2019) discuss how harms from stereotypical representations manifest as allocative harms later in the ML pipeline. However, probably because of the complexity of LMs, measuring intrinsic bias in the representations created by LMs may not necessarily reflect the behavior of models built by fine-tuning LMs. Goldfarb-Tarrant et al. (2021) discuss how intrinsic measures of bias do not correlate with extrinsic, application-specific, bias measures. Since we are concerned with the application of LMs to the specific task of toxic text classification, we restrict our focus to group fairness measures, which fall under the category of extrinsic bias. Previous work on bias mitigation in NLP has been focused on pre- and in-processing methods (Sun et al., 2019; Ball-Burack et al., 2021) and to the best of our knowledge, we are the first to use post-processing methods with NLP tasks.
+
+# 3 Methodology
+
+We are interested in studying how group fairness varies across different fine-tuned LMs for binary classification. We choose to focus on text toxicity as the prediction task. Due to an increase in online harassment (Vogels, 2021) and the potential of both propagating harmful stereotypes of minority groups and/or inadvertently reducing their voices, the task of predicting toxicity in text has received increased attention in recent years (Kiritchenko et al., 2021). While we acknowledge that text toxicity presents different complex nuances (e.g., offensive text, harassment, hate speech), we focus on a binary task
+
+formulation. We adopt the definition of toxicity described in Borkan et al. (2019) as “anything that is rude, disrespectful, or unreasonable that would make someone want to leave a conversation”.
+
+# 3.1 Datasets
+
+We used two datasets that deal with toxic text classification: 1) Jigsaw, a large dataset released for the "Unintended Bias in Toxicity Classification" Kaggle competition (Jigsaw, 2019) that contains online comments on news articles, and 2) HateXplain, a dataset recently introduced with the intent of studying explanations for offensive and hate speech in Twitter and Twitter-like data (i.e., gab.com). Both datasets have fine-grained annotations for religion, race and gender. We used as sensitive groups the coarse-grained groups (e.g., mention of any religion, see Section 3.3) as opposed to the finer-grained annotations (e.g., Muslim). Details about the sizes of the datasets, the splits we used and text samples can be found in Appendix A.1.
+
+# 3.2 Language models, fine-tuning and computation infrastructure
+
+We consider more than a dozen LMs that cover a large spectrum of sizes. We selected the models to not only represent various sizes but also different styles of architecture and training. The models in our study are shown in Table 1 along with the number of parameters and the size of the PyTorch (Paszke et al., 2019) model on disk. If not specified, the version of the model used is base. For all our experiments, we used the Hugging Face implementation of Transformers (Wolf et al., 2020) and the corresponding implementations for all LMs in our study. In particular, we use the text sequence classifier without any modifications to increase reproducibility.
+
+We run model fine-tuning for 1-3 epochs and choose the best model based on the highest accuracy obtained on the dev split. When presenting experimental results, we focus primarily on balanced accuracy as the Jigsaw dataset is highly imbalanced and reporting only accuracy may be misleading. In general, higher accuracy leads to higher balanced accuracy, with the exception of two LMs, GPT2 and SqueezeBERT. For these two, the best balanced accuracy is less than 2 percentage points higher than the balanced accuracy resulting from choosing the highest overall accuracy across the various hyper-parameter runs. We experiment with two learning rates (2e-6 and 2e-5) and observe that the large models tend to prefer the smaller learning rate, degenerating at higher learning rates. For large LMs with Jigsaw we fine-tune for one epoch to keep the compute time under 24 hours. The model accuracies we obtained are in line with state-of-the-art results for these types of tasks. The large LMs are fine-tuned on A100 Nvidia GPUs, while the rest of the models are fine-tuned on V100 Nvidia GPUs. The experiments for HateXplain run from 10 minutes to under an hour, while the experiments for the large models with Jigsaw can take up to 24 hours.
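+
+For concreteness, a minimal sketch of this fine-tuning setup with the Hugging Face `Trainer` is given below; `train_ds` and `dev_ds` are hypothetical datasets with `text` and `label` columns, and hyper-parameters beyond those stated above (batch size, sequence length) are assumptions rather than the exact values used in our experiments.
+
+```python
+from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
+                          Trainer, TrainingArguments)
+
+model_name = "bert-base-uncased"
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
+
+
+def tokenize(batch):
+    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
+
+
+# `train_ds` / `dev_ds` are hypothetical Hugging Face datasets with "text" and "label" columns.
+train_ds = train_ds.map(tokenize, batched=True)
+dev_ds = dev_ds.map(tokenize, batched=True)
+
+args = TrainingArguments(
+    output_dir="toxicity-clf",
+    learning_rate=2e-5,              # larger LMs tend to prefer 2e-6
+    num_train_epochs=3,              # one epoch for large LMs on Jigsaw to bound compute
+    per_device_train_batch_size=32,  # assumed value
+    evaluation_strategy="epoch",
+    seed=42,                         # the random seed varied in Section 5.2
+)
+
+trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=dev_ds)
+trainer.train()
+```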
+
+# 3.3 Sensitive groups and fairness measures
+
+In all our measurements, we considered the following topics as sensitive: religion, race and gender. We categorize a text sample as belonging to a sensitive group if it mentions one of these topics (e.g., religion), and otherwise to the complementary group (no religion). Except in Section 5.5, we do not analyze finer-grained subgroups (e.g., Jewish), but consider larger groups (any reference to religion, such as Muslim, Jewish, atheist). There are several reasons that justify this choice. First, unlike tabular data where each sample corresponds to an individual belonging to one identity (e.g., either female or male), we do not have information on the demographics of the person producing the text. Our categorization is based on the content. In addition, for the datasets we used, most subgroups account for significantly less than $1\%$ of the data. Moreover, there is considerable overlap between subgroups. For example, in the test split for Jigsaw, $40\%$ of the text belonging to the male subgroup also belongs to the female subgroup. To summarize, we analyze the bias/fairness of toxic text prediction in the presence or absence of information that refers to religion, race or gender, respectively. The intent is to not have the performance of the predictor be influenced by these sensitive topics.
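+
+A small pandas sketch of this coarse-grained grouping is shown below; the column names, file path, and the 0.5 threshold on the fractional identity annotations are illustrative assumptions.
+
+```python
+import pandas as pd
+
+# Hypothetical subgroup columns; Jigsaw identity annotations are fractional
+# annotator agreements, thresholded here at 0.5.
+RELIGION_COLS = ["christian", "jewish", "muslim", "hindu", "buddhist", "atheist"]
+
+
+def add_coarse_group(df: pd.DataFrame, cols, group_name: str, threshold: float = 0.5) -> pd.DataFrame:
+    """Mark a comment as mentioning the coarse group if any subgroup column fires."""
+    df[group_name] = (df[cols].fillna(0) >= threshold).any(axis=1)
+    return df
+
+
+df = pd.read_csv("jigsaw_test.csv")  # hypothetical path
+df = add_coarse_group(df, RELIGION_COLS, "religion")
+```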
+
+Table 1: The size (number of parameters, size on disk) for the language models considered in this study.
+
+| Size Group | Language Model | # of parameters | Size on disk |
+| Small | ALBERT (Lan et al., 2020) | 12M | 45MB |
+| | MobileBERT (Sun et al., 2020) | 25.3M | 95MB |
+| | SqueezeBERT (Iandola et al., 2020) | 51M* | 196MB |
+| | DistilBERT (Sanh et al., 2020) | 66M | 256MB |
+| Regular | BERT (Devlin et al., 2019) | 110M | 418MB |
+| | ELECTRA (Clark et al., 2020) | 110M | 418MB |
+| | Funnel (small) (Dai et al., 2020) | 117M* | 444MB |
+| | RoBERTa (Liu et al., 2019) | 125M | 476MB |
+| | GPT2 (Radford et al., 2019) | 117M | 487MB |
+| | DeBERTa (He et al., 2021) | 140M | 532MB |
+| Large | ELECTRA-large | 335M | 1.3GB |
+| | BERT-large | 340M | 1.3GB |
+| | RoBERTa-large | 355M | 1.4GB |
+| | DeBERTa-large | 400M | 1.6GB |
+
+* Approximate number of parameters.
+
+We use equalized odds as the group fairness measure. Equalized odds is defined as the maximum of the absolute true positive rate difference and false positive rate difference, where these differences are between a sensitive group and its complementary group. In toxic text classification, a true positive means that a toxic text is correctly identified as such, while a false positive means that a benign piece of text is marked as toxic. In terms of harms, a false negative (toxic text that is missed) may cause individuals to feel threatened or disrespected, while a false positive may be seen as censoring, which is particularly problematic if it reduces the voices of minority protected groups from online conversations. By using the sensitive groups of religion/race/gender mentioned above, we aim to analyze and reduce the effect of the presence or absence of religion/race/gender terms on the false negative and false positive rates. By taking the maximum, we are emphasizing the larger discrepancy as opposed to other studies that take the average of the two rate differences (average equalized odds). Note that unlike statistical parity, equalized odds does allow the sensitive (e.g., mention of religion) and complementary (no religion) groups to have different toxicity (positive prediction) rates.
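+
+The sketch below computes this equalized odds measure from binary predictions and a boolean group indicator (for instance the coarse `religion` flag derived above); variable names are illustrative.
+
+```python
+import numpy as np
+
+
+def rate(y_true, y_pred, label, mask):
+    """P(prediction = 1 | true label = `label`) within the rows selected by `mask`."""
+    sel = mask & (y_true == label)
+    return y_pred[sel].mean() if sel.any() else np.nan
+
+
+def equalized_odds(y_true, y_pred, group):
+    """Max of |TPR difference| and |FPR difference| between a sensitive group
+    (group == True) and its complementary group, following the definition above."""
+    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
+    tpr_gap = abs(rate(y_true, y_pred, 1, group) - rate(y_true, y_pred, 1, ~group))
+    fpr_gap = abs(rate(y_true, y_pred, 0, group) - rate(y_true, y_pred, 0, ~group))
+    return max(tpr_gap, fpr_gap)
+
+
+# Example: EO for the coarse "religion" group on hypothetical test predictions.
+# eo = equalized_odds(df["label"], df["prediction"], df["religion"])
+```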
+
+# 4 Bias mitigation post-processing
+
+We investigated the use of post-processing methods to mitigate violations of equalized odds. By post-processing, we mean methods that operate only on the outputs of the fine-tuned LMs and do not modify the models themselves$^{2}$. The ability to avoid retraining models is a major advantage of post-processing due to the large computational cost of fine-tuning LMs. Post-processing also targets unfairness at a point closest to deployment and hence can have a direct impact on downstream operations that use the model predictions.
+
+Hardt, Price, Srebro (2016) (HPS): The first post-processing method that we consider is by Hardt et al. (2016) (abbreviated HPS, using the last names of the authors), who were the original proposers of the equalized odds criterion for fairness. We used the open-source implementation of their method from Bellamy et al. (2019), which post-processes binary predictions to satisfy EO while minimizing classification loss. While this method is effective in enforcing EO, one limitation is that it does not offer a trade-off between minimizing the deviation from EO and reducing the loss in accuracy.
+
+Fair Score Transformer (FST): We study the FST method of Wei et al. (2020), in part to provide the above-mentioned trade-off, and in part because it is a recent post-processing method shown to be competitive with several other methods (including in-processing). FST takes predicted probabilities (referred to as scores) as input and post-processes them to satisfy a fairness criterion. We choose generalized equalized odds (GEO), a score-based variant of EO, as the fairness criterion and then threshold the output score to produce a binary prediction. The application of FST required attention to three issues: 1) its ability to work with input scores that may not be calibrated probabilities; 2) the choice of fairness parameter $\epsilon$, which bounds the allowed GEO on the data used to fit FST; 3) the choice of binary classification threshold $t$. We consider a range of $\epsilon$ and $t$ values to explore the trade-off between EO and accuracy. Due to numerical instability of the FST implementation in the original paper (occasional non-convergence in reasonable time for the Jigsaw dataset), we obtained a closed-form solution for one step in the optimization that leads to a more efficient implementation, running in minutes for all models and all datasets considered. More details on this implementation and the tuning of the parameters can be found in Appendix A.3.
+
+Threshold post-processing (TPP): We also tested the effect of thresholding alone, without fairness-enhancing transformations. We refer to this as threshold post-processing (TPP). This simple method corresponds to FST without calibrating the LM outputs, choosing $\epsilon$ large enough so that FST yields an identity transformation, and thresholding at level $t$ .
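+
+A sketch of TPP as a sweep over candidate thresholds is given below, reusing the `equalized_odds` helper from the earlier sketch; the threshold grid is an arbitrary illustrative choice.
+
+```python
+import numpy as np
+from sklearn.metrics import balanced_accuracy_score
+
+
+def tpp_operating_points(scores, y_true, group, thresholds=np.linspace(0.05, 0.95, 19)):
+    """Threshold post-processing: sweep the decision threshold t and record
+    (t, balanced accuracy, EO) so an operating point can be picked on dev data."""
+    points = []
+    for t in thresholds:
+        y_pred = (np.asarray(scores) >= t).astype(int)
+        points.append((t,
+                       balanced_accuracy_score(y_true, y_pred),
+                       equalized_odds(y_true, y_pred, group)))  # helper from the sketch above
+    return points
+```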
+
+
+Figure 1: Balanced accuracy versus equalized odds for fine-tuned LMs on the Jigsaw and HateXplain datasets.
+
+# 5 The accuracy-fairness relationship in toxic text classification
+
+We report on the performance and fairness characteristics of several LMs while varying parameters such as random seeds and training data size. We also experiment with post-processing methods for group bias mitigation and show that it is possible to reduce some of the bias presented by these models.
+
+# 5.1 Characterization of language models of varied sizes
+
+The first set of experiments present how performance and fairness measures vary across models.
+
+In Figure 1 we show the performance as measured by balanced accuracy $^3$ and the group fairness as measured by equalized odds on the $x$ -axis (lower EO is better). The models are color-coded by their size - dark blue for small models, orange for regular size models and light blue for large models. The variation in balanced accuracy is not as wide as the variation in equalized odds. For the HateXplain dataset, the gap between balanced accuracy and fairness variability is more prominent. In terms of accuracy (not balanced), the models perform even closer as shown in the plots in Appendix A.2. For EO, the spread is significant, with gaps of 0.10 between the largest and smallest values for Jigsaw, and 0.15 for HateXplain. Depending on the dataset and sensitive group, some larger models seem to lead to lower EO; for example, ELECTRA-large achieves the best accuracy-EO results for religion as the sensitive group (Jigsaw). For race, SqueezeBERT, which is one of the small models in the study, achieves one of the best balanced accuracy-EO operating points for Jigsaw (considering it is half the size of RoBERTa which has better balanced accuracy but similar EO), hinting that size is not well correlated with the fairness of the model. Similarly, for HateXplain (religion), DistilBERT, again a small model, obtains the best balanced accuracy-EO operating point. In the next section, we analyze models trained using various random seeds and find a low correlation between EO and model size.
+
+These results strongly suggest that fairness measures should be included in the evaluation of LMs. In the next sections, we demonstrate that, if fairness is not carefully considered, we can end up with models with widely varying fairness characteristics depending on the training conditions.
+
+# 5.2 The influence of random seeds
+
+Fine-tuning LMs depends on a random seed used for mini-batch sampling and for initializing the weights in the last layers of the network responsible for the binary classification. It is well documented in the literature that this random seed may influence the accuracy of the resulting model (Dodge et al., 2020). In Figure 2 we show that while balanced accuracy is somewhat stable, fairness can vary widely by only changing the random seed. In fact, if we were to plot the accuracy instead of the
+
+balanced accuracy, all points would be virtually on a horizontal line for Jigsaw, as shown in Figure A.2. The variations for EO are larger. For Jigsaw, we observe a variation of up to 0.05 in equalized odds for some cases. For HateXplain, the variation is considerably larger, with several models presenting a spread of 0.15 or more for the sensitive group of religion. For example, for DeBERTa-L, depending on the random seed, one could get one of the best models with respect to performance-fairness trade-offs, or one of the worst (balanced accuracy varies within 0.79-0.80, while EO varies over 0.11-0.30). The results in our experiments align with the ones discussed in a recent study on underspecification in machine learning (D'Amour et al., 2020), where different random seeds lead to small variations in accuracy, but considerable variations in intrinsic bias as measured by gendered correlations.
+
+To further probe whether there is a correlation between fairness and model size, we used the results for multiple random seeds to compute Pearson's coefficient of correlation. These values are -0.357 for Jigsaw and -0.188 for HateXplain, with p-values of 5e-6 and 0.017, respectively. These results show a low correlation between fairness as measured by EO and model size.
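+
+The correlation check itself is a one-liner; the sketch below uses placeholder arrays for illustration only, not the measurements reported above (one entry per model/seed run).
+
+```python
+import numpy as np
+from scipy.stats import pearsonr
+
+# Placeholder values for illustration: parameter counts (millions) and EO per run.
+model_sizes = np.array([14, 66, 110, 110, 125, 340, 355, 400])
+eo_values = np.array([0.12, 0.18, 0.15, 0.22, 0.10, 0.17, 0.13, 0.20])
+
+r, p_value = pearsonr(model_sizes, eo_values)
+print(f"Pearson r = {r:.3f}, p = {p_value:.3g}")
+```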
+
+# 5.3 Low data regime
+
+In general, it is well known that more training data improves model accuracy. We experiment with fine-tuning the models on a fraction of the training dataset, while keeping the test set the same. When subsampling the smaller datasets from the original dataset, we ensure that the larger subsets include the smaller ones, to simulate situations in which more data is collected and used for training. The results are shown for one small/regular/large model in Figure 3. Each data point in the graph is the average of eleven runs, each performed with a different random seed. In very few cases, the random seed led to a degenerate model and we did not include these runs in the averaged results. Overall, there were up to five degenerate runs for each dataset (across all 14 models in this study, not only the ones presented in the figure).
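+
+The nested subsampling just described can be sketched as follows; the fractions and sizes are illustrative and this is not the exact sampling code used for the experiments.
+
+```python
+import numpy as np
+
+def nested_subsets(n_samples, fractions, seed=0):
+    """Index sets such that every smaller subset is contained in each
+    larger one, simulating incremental data collection."""
+    rng = np.random.default_rng(seed)
+    order = rng.permutation(n_samples)  # one fixed shuffle shared by all sizes
+    return {f: order[: int(f * n_samples)] for f in sorted(fractions)}
+
+subsets = nested_subsets(n_samples=100_000, fractions=[0.1, 0.25, 0.5, 1.0])
+assert set(subsets[0.1]) <= set(subsets[0.5])  # nesting holds by construction
+```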
+
+We observe that in the case of Jigsaw, equalized odds generally keeps improving even when the accuracy plateaus, suggesting that, from a fairness point of view, it may be beneficial to collect more data for fine-tuning. This does not seem to be the case for the HateXplain dataset, where the accuracy does not plateau and the fairness measure oscillates. A reason could be that HateXplain is much smaller in size than Jigsaw and hence Jigsaw's training is more stable. Similar trends are observed for the rest of the models in our study.
+
+Figure 2: Balanced accuracy versus equalized odds for fine-tuned LMs when varying the random seed used in fine-tuning.
+
+# 5.4 Bias mitigation through post-processing
+
+In this section, we experiment with applying post-processing methods for group bias mitigation. We first discuss the results of parameter tuning for Fair Score Transformer (FST) (Wei et al., 2020). More details can be found in Appendix A.3. The FST method has one tunable parameter, $\epsilon$. Using the transformed scores from FST, we also investigate tuning the threshold used in the binary classifier, instead of using the default value of 0.5, as explained in Section 4. Figure 4 depicts the data points obtained by varying $\epsilon$ and the classification threshold$^4$.
+
+ | Religion | Christian | Jewish | Muslim | Race | White | Black | Gender | Female | Male | LGBT |
| Baseline | 0.18 | 0.10 | 0.06 | 0.20 | 0.10 | 0.12 | 0.13 | 0.10 | 0.12 | 0.13 | 0.15 |
| FST | 0.08 | 0.03 | 0.06 | 0.11 | 0.09 | 0.11 | 0.11 | 0.05 | 0.07 | 0.07 | 0.15 |
+
+Table 2: BERT (Jigsaw): Equalized odds before and after applying FST for all sensitive groups and their subgroups.
+
+[Figure 3; panel labels: Jigsaw, DistilBERT, HateXplain, ELECTRA-large]
+
+Figure 3: Accuracy, balanced accuracy and equalized odds (religion) for fine-tuned LMs when varying the fine-tuning data size and the random seeds. Error bars denote $\pm 1$ SE (standard error) of the mean.
+
+Note that we plot EO decreasingly on the x-axis, so overall better operating points are closer to the top right corner. When choosing an operating point, the points on the black Pareto frontier are the most interesting: highest balanced accuracy and lowest equalized odds. For reference, we also show the baseline points without bias mitigation for the dev and test sets. All data points are plotted for fine-tuned BERT. Similar trends are observed for the rest of the models considered in this study and for the HateXplain dataset.
+
+We also experimented with calibrating the scores using logistic regression before post-processing. In Figure 5, we plot the Pareto frontiers of bias mitigation when applying FST, with and without calibration, along with the threshold post-processing (TPP) method. We also show the result of HPS, which yields a single operating point, as well as
+
+
+Figure 4: FST tuning for BERT: Balanced accuracy versus equalized odds on the Jigsaw dataset when varying fairness parameter $\epsilon$ and classification threshold $t$ for the FST method for group bias mitigation (religion).
+
+
+Figure 5: BERT: Balanced accuracy versus equalized odds on the Jigsaw dataset when applying the FST and HPS methods for group bias mitigation and threshold post-processing (TPP) alone (religion).
+
+the baselines without bias mitigation. In general, on the Jigsaw dataset, FST reduces EO with varying degrees of success depending on the model/group (see Appendix A.4 for additional plots), offering an interesting set of points with different accuracy-EO trade-offs. For reference, we show the corresponding point for the test set (orange x) for the operating point in dev that achieves an equalized odds of at most 0.05 (orange square). In certain cases, FST manages to lower the equalized odds with minimal or no decrease in accuracy, as seen for religion in Figure 5. Note that all points in the plots, except for the x points, are plotted using the dev split.
+
+In comparison, HPS seems particularly effective in lowering the equalized odds and thus improving the fairness of the model, with some penalty on
+the accuracy. For Jigsaw, applying only TPP (i.e., tuning the threshold used in the binary classification) also offers some interesting operating points. TPP has a small search space compared to FST, and sometimes its Pareto frontier is reduced to a single point, as is the case in Figure 5. In general, FST has superior Pareto frontiers compared to TPP alone. In addition, as we discuss in Appendix A.4, TPP proved ineffective for the HateXplain dataset. Last, using score calibration before feeding the scores to FST does not seem to offer significant improvements. Similar trends can be observed for the rest of the models.
+
+Overall, we find the post-processing methods for bias mitigation worth considering. They are straightforward to apply, run on the order of seconds or minutes on the CPU of a regular laptop, and offer interesting operating points. In contrast, pre-processing or in-processing techniques for bias elimination would incur significant computational cost. Obtaining the Pareto frontiers is essentially instantaneous since the FST search space is small. For more results and discussion of bias mitigation, we refer the reader to Appendix A.4.
+
+# 5.5 Sensitive groups and subgroups
+
+In our analysis so far, we looked at sensitive groups that refer to religion, race and gender. In this section, we use the Jigsaw dataset to zoom in and analyze the equalized odds for a sensitive group and its constituent subgroups. We select all subgroups that have at least 100 samples in the test split. We continue to apply FST only at the larger group level (e.g., religion) and examine its effect on subgroups. In Table 2, we show the EO measure for BERT before and after applying FST for all sensitive groups and subgroups. FST consistently manages to lower EO for individual subgroups, without overly favoring one subgroup over another. A few instances, mostly the smallest subgroups, show no change. Note that subgroups can overlap: they do not represent the identities of individuals but are derived from the text, which may mention multiple subgroups. One notable example is that the male and female subgroups have similar EO, both at baseline and after FST. This justifies fitting FST at the level of the larger sensitive groups, since the discussion of gender overall appears problematic, as opposed to one gender in particular.
+
+# 6 Limitations
+
+In our study, we covered a series of different models that varied in network architecture, size (number of parameters), training procedures, and pretraining data. As we did not vary one element at a time (e.g., architecture) while keeping the rest constant (e.g., pretraining data, size, training procedure), it is hard to draw insights on how each individual element affects the fairness of the resulting prediction outcomes. We would like to emphasize that identifying toxic text is not an easy task, not even for humans. As such, we expect the datasets to be noisy and to contain samples that are not annotated correctly. Upon manual inspection, we identified some samples whose labels we did not agree with. Motivated by this observation, we started looking into the quality of datasets used in toxic text prediction (Arhin et al., 2021). As a consequence, while we expect the trends shown in this paper to hold, the actual absolute numbers may vary with datasets/tasks. More observations and limitations can be found in Section 8.
+
+# 7 Conclusions
+
+In this work, we addressed the following research questions for language models: how do model size, training data size, and random seeds affect the relationship between performance and fairness (as measured by equalized odds)? Can post-processing methods for bias mitigation lead to better operating points for both accuracy and fairness? We find these questions important in the context of the ethics of using language models in text toxicity prediction, in particular, and in NLP research, in general. We presented a comprehensive study of language models and their performance/fairness relationship. We chose several models to cover different sizes and different architectures. While we did not consider some of the largest recent models available, we believe we have experimented with a wide variety of models that are well covered in the literature. We hope that this study can drive the following point across: we cannot make a blanket statement on the fairness of language models with respect to their size or architecture, while training factors such as data size and random seeds can make a large difference. This makes it all the more important for researchers and practitioners to make fairness an integral part of the performance evaluation of language models.
+
+# 8 Ethics Statement
+
+This research used a considerable amount of computational resources, and this is our main ethical concern with conducting this work. We tried to keep the number and size of the models we experimented with limited, to reduce the carbon footprint of the experiments. We hope the results we show in this paper are worth the computational resources used.
+
+In this study, we looked at coarse-grained groups defined by the text content mentioning religion/race/gender, which may obfuscate the behavior of the models with respect to finer-grained groups, such as females and males. Similarly, we did not consider intersectionality.
+
+Bias mitigation can lead to undesirable outcomes. For example, one aspect we did not look into is what happens with other groups when the mitigation is applied only for one of the groups. In addition, we focused only on group fairness and do not provide any insights into individual fairness. We also recognize that abstract metrics have limitations and the societal impacts resulting from bias mitigation are not well understood (Olteanu et al., 2017). These issues are universal to bias mitigation techniques and not particular to our use case.
+
+Last, but not least, the datasets we used are English only. We acknowledge the importance of performing similar studies on multi-lingual datasets.
+
+# References
+
+Mohsen Abbasi, Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. 2019. Fairness in representation: quantifying stereotyping as a representational harm. In Proceedings of the 2019 SIAM International Conference on Data Mining.
+Allen Institute for AI (AI2). 2021. Leaderboards.
+Julia Angwin, Jeff Larson, Lauren Kirchner, and Surya Mattu. 2017. Minority Neighborhoods Pay Higher Car Insurance Premiums Than White Areas With the Same Risk. https://www.propublica.org/article/minority-neighborhoods-higher-car-insurance-premiums-white-areas-same-risk.
+Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
+Kofi Arhin, Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, and Moninder Singh. 2021. Ground-truth, whose truth? Examining the difficulties with annotating toxic text datasets. In Data-Centric AI Workshop co-located with NeurIPS 2021.
+Pranjal Awasthi, Matthaus Kleindessner, and Jamie Morgenstern. 2020. Equalized odds postprocessing under imperfect group information. In The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020.
+Ari Ball-Burack, Michelle Seng Ah Lee, Jennifer Cobbe, and Jatinder Singh. 2021. Differential tweetment: Mitigating racial dialect bias in harmful tweet detection. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
+Rachel KE Bellamy, Kuntal Dey, Michael Hind, Samuel C Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, et al. 2019. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development.
+Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
+Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
+Aja Bogdanoff. 2017. Saying goodbye to Civil Comments. [Online; accessed 21-July-2021].
+Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems.
+Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced metrics for measuring unintended bias with real data for text classification. In Companion of The 2019 World Wide Web Conference, WWW.
+Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency.
+Alexandra Chouldechova and Aaron Roth. 2018. The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810.
+Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, and Massimiliano Pontil. 2019. Leveraging labeled and unlabeled data for consistent fair binary classification. Advances in Neural Information Processing Systems, 32.
+
+Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Kate Crawford. 2017. The trouble with bias. https://www.youtube.com/watch?v=fMym_BKWQzk.
+Paula Czarnowska, Yogarshi Vyas, and Kashif Shah. 2021. Quantifying social biases in NLP: A generalization and empirical comparison of extrinsic fairness metrics. Transactions of the Association for Computational Linguistics.
+Zihang Dai, Guokun Lai, Yiming Yang, and Quoc Le. 2020. Funnel-Transformer: Filtering out sequential redundancy for efficient language processing. In Annual Conference on Neural Information Processing Systems 2020.
+Alexander D'Amour, Katherine A. Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yi-An Ma, Cory Y. McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vlademyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, and D. Sculley. 2020. Underspecification presents challenges for credibility in modern machine learning. CoRR, abs/2011.03395.
+Daniel de Vassimon Manela, David Errington, Thomas Fisher, Boris van Breugel, and Pasquale Minervini. 2021. Stereotype and skew: Quantifying gender bias in pre-trained and fine-tuned language models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.
+Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah A. Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. CoRR, abs/2002.06305.
+Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference.
+
+Benjamin Fish, Jeremy Kun, and Ádám D Lelkes. 2016. A confidence-based approach for balancing fairness and accuracy. In Proceedings of the 2016 SIAM International Conference on Data Mining. SIAM.
+Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sanchez, Mugdha Pandya, and Adam Lopez. 2021. Intrinsic bias metrics do not correlate with application bias. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
+Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017.
+Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. Advances in neural information processing systems.
+Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: decoding-enhanced BERT with disentangled attention. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
+Sara Hooker, Nyalleng Moorosi, Gregory Clark, S. Bengio, and Emily L. Denton. 2020. Characterising bias in compressed models. ArXiv.
+Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen Denuyl. 2020. Social biases in NLP models as barriers for persons with disabilities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
+Forrest Iandola, Albert Shaw, Ravi Krishna, and Kurt Keutzer. 2020. SqueezeBERT: What can computer vision teach NLP about efficient neural networks? In Proceedings of SustainNLP: Workshop on Simple and Efficient Natural Language Processing.
+Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, and Silvia Chiappa. 2020. Wasserstein fair classification. In Uncertainty in Artificial Intelligence.
+Kaggle Jigsaw. 2019. Jigsaw Unintended Bias in Toxicity Classification. [Online; accessed 21-July-2021].
+Faisal Kamiran, Asim Karim, and Xiangliang Zhang. 2012. Decision theory for discrimination-aware classification. In 2012 IEEE 12th International Conference on Data Mining. IEEE.
+Michael P Kim, Amirata Ghorbani, and James Zou. 2019. Multiaccuracy: Black-box post-processing for fairness in classification. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society.
+
+Svetlana Kiritchenko, Isar Nejadgholi, and Kathleen C. Fraser. 2021. Confronting abusive language online: A survey from the ethical and human rights perspective. Journal of Artificial Intelligence Research.
+Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+Eric Lehman, Jay DeYoung, Regina Barzilay, and Byron C Wallace. 2019. Inferring which medical treatments work from reports of clinical trials. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL).
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.
+Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. HateXplain: A benchmark dataset for explainable hate speech detection. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021.
+Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Association for Computational Linguistics.
+Pandu Nayak. 2019. Understanding searches better than ever before.
+Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for english tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos.
+Alexandra Olteanu, Kartik Talamadupula, and Kush R. Varshney. 2017. The limits of abstract evaluation metrics: The case of hate speech detection. In Proceedings of the 2017 ACM on Web Science Conference, WebSci 2017, Troy, NY, USA, June 25 - 28, 2017.
+Yoonyoung Park, Jianying Hu, Moninder Singh, Issa Sylla, Irene Dankwa-Mullan, Eileen Koski, and Amar K. Das. 2021. Comparison of Methods to Reduce Bias From Clinical Prediction Models of Postpartum Depression. JAMA Network Open.
+Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32.
+David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. 2021. Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350.
+Perspective API. 2021. Using Machine Learning to Reduce Toxicity Online. [Online; accessed 21-July-2021].
+Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. 2017. On fairness and calibration. arXiv preprint arXiv:1709.02012.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners.
+Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Melbourne, Australia. Association for Computational Linguistics.
+Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2021. A primer in bertology: What we know about how bert works. Transactions of the Association for Computational Linguistics.
+Victor Sanh, Lysandre Debut, Julien Chaumont, and Thomas Wolf. 2020. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.
+Philippe Schwaller, Daniel Probst, Alain C. Vaucher, Vishnu H. Nair, David Kreutzer, Teodoro Laino, and Jean-Louis Reymond. 2021. Mapping the space of chemical reactions using attention-based neural networks. Nature Machine Intelligence.
+Irene Solaiman and Christy Dennison. 2021. Process for adapting language models to society (PALMS) with values-targeted datasets. In Annual Conference on Neural Information Processing Systems.
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL.
+Shivashankar Subramanian, Ioana Baldini, Sushma Ravichandran, Dmitriy Katz-Rogozhnikov, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Varshney Kush R, Annmarie Wang, Pradeep Mangalath, and Laura Kleiman. 2020. A natural language processing system for extracting evidence of drug repurposing from scientific publications. Proceedings of the AAAI Conference on Innovative Applications of Artificial Intelligence.
+
+Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Literature review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
+
+Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a compact task-agnostic BERT for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020.
+
+Sahil Verma and Julia Rubin. 2018. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness, New York, NY, USA. Association for Computing Machinery.
+
+Emily A. Vogels. 2021. The State of Online Harassment. [Online; accessed 21-July-2021].
+
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv preprint 1905.00537.
+
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of ICLR.
+
+Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, and Slav Petrov. 2020. Measuring and reducing gendered correlations in pre-trained models. CoRR, abs/2010.06032.
+
+Dennis Wei, Karthikeyan Natesan Ramamurthy, and Flavio P. Calmon. 2021. Optimized score transformation for consistent fair classification. Journal of Machine Learning Research.
+
+Dennis Wei, Karthikeyan Natesan Ramamurthy, and Flavio du Pin Calmon. 2020. Optimized score transformation for fair classification. In The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020.
+
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics.
+
+Blake Woodworth, Suriya Gunasekar, Mesrob I Ohannessian, and Nathan Srebro. 2017. Learning non-discriminatory predictors. In Conference on Learning Theory, pages 1920-1953. PMLR.
+
+Forest Yang, Mouhamadou Cisse, and Oluwasanmi O Koyejo. 2020. Fairness with overlapping groups; a probabilistic perspective. Advances in Neural Information Processing Systems, 33.
+
+# A Appendix
+
+In this appendix, we discuss the datasets we used in our experiments, include additional experimental results and provide more details on post-processing methods for bias mitigation. We conclude with remarks on the reproducibility of this study.
+
+# A.1 Datasets
+
+# A.1.1 Jigsaw Unintended Bias in Toxicity Classification
+
+In 2019, Jigsaw released a large dataset as part of the "Unintended Bias in Toxicity Classification" Kaggle competition (Jigsaw, 2019). The dataset is a collection of roughly two million samples of text from online discussions (Bogdanoff, 2017). The samples are rated for toxicity and annotated with attributes for sensitive groups. Table 3 shows the groups we considered in our analysis and the available fine-grained group annotations. Note that we considered the coarser groups; a sample text belongs to a sensitive (coarse) group if any (fine-grained) annotation for the sample text exists. We used the original training dataset split in an 80/20 ratio for training and development (dev) tuning, respectively. For reporting test results, we used the private test split released on Kaggle. Statistics for the dataset splits are shown in Table 5. Each sample in the dataset (see Table 4 for a few examples) has a toxicity score, and we consider anything higher than 0.5 to be toxic.
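+
+The sketch below illustrates this preprocessing with pandas; the column names follow the Kaggle release of the dataset but should be treated as assumptions, and the snippet is illustrative rather than the exact pipeline behind the reported numbers.
+
+```python
+import pandas as pd
+
+RELIGION_COLS = ["atheist", "buddhist", "christian", "hindu",
+                 "jewish", "muslim", "other_religion"]
+
+df = pd.read_csv("train.csv")                    # Kaggle Jigsaw training file
+df["toxic"] = (df["target"] > 0.5).astype(int)   # binarize the toxicity score
+# A sample belongs to the coarse group if any fine-grained annotation fires.
+df["religion_group"] = (df[RELIGION_COLS].fillna(0) > 0).any(axis=1)
+
+# 80/20 split of the original training data into train and dev.
+train = df.sample(frac=0.8, random_state=42)
+dev = df.drop(train.index)
+```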
+
+For the Jigsaw dataset, a combination of automation and crowdsourcing was used to ensure that identity (i.e., sensitive group) labels are a reasonable approximation of true identity-related content (see Jigsaw FAQ). Not all the dataset was labeled for identity terms. While these labels are imperfect, we do not believe that the degree of imperfection invalidates our study. We note that the problem of protected attribute labels being imperfect is well-accepted and studied (Awasthi et al., 2020).
+
+Noisy and incomplete sensitive group labels are another reason why we chose equalized odds as the fairness measure. EO is a valid fairness measure
+even when there is overlap between the protected groups (e.g., the group labeled "non-religion" still has samples mentioning religion). To see this, recall that EO requires that the prediction conditioned on the true label be independent of the protected attribute and its violation can be measured by the difference $|\mathbb{E}[\hat{Y}|Y = 1,A = 1] - \mathbb{E}[\hat{Y}|Y = 1]|$ (similarly for $Y = 0$ ). The first term in the difference is measured on a subset of comments $(A = 1)$ that contain identity information. This is a good estimate if a sufficient number of samples were annotated, regardless of the potentially missing identity annotations on the remaining samples. The second term does not depend on annotations at all. Thus, the estimate of EO is not affected by the lack of annotations on some of the comments.
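+
+The estimate discussed above can be written as a short helper (illustrative only):
+
+```python
+import numpy as np
+
+def eo_violation(y_true, y_pred, group, label=1):
+    """|E[Yhat | Y=label, A=1] - E[Yhat | Y=label]|.
+
+    `group` marks samples annotated as mentioning the sensitive group;
+    the second term ignores the (possibly incomplete) annotations.
+    """
+    in_group = np.mean(y_pred[(y_true == label) & group])
+    overall = np.mean(y_pred[y_true == label])
+    return abs(in_group - overall)
+```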
+
+Table 3: The sensitive groups for Jigsaw dataset with their corresponding fine-grained annotations.
+
+| Group | Fine-grained annotation |
| religion | atheist, buddhist, christian, hindu, jewish, other religion |
| race | white, asian, black, latino, other race or ethnicity |
| gender and sexual orientation* | bisexual, female, male, heterosexual, homosexual gay or lesbian, transgender, other gender, other sexual orientation |
+
+*Throughout the paper, we use "gender" for short.
+
+# A.1.2 HateXplain: Toxic text in Twitter and Twitter-like text
+
+HateXplain (Mathew et al., 2021) was recently introduced with the intent of studying explanations in offensive and hate speech in Twitter and Twitter-like data (i.e., gab.com). For the purposes of our study, we collapse the annotations for offensive and hate speech into one class of toxic text. Similar to the Jigsaw dataset, HateXplain samples have fine-grained annotations for sensitive groups. We use as groups the coarse-level annotations, as we did for the Jigsaw dataset. The groups that we consider are presented in Table 6 and a few examples from the dataset are shown in Table 7. Note that the text in each sample is represented in the dataset as a list of tokens; in the table, we concatenate them with spaces, and this is also how we use them as inputs to the classifiers. We used the splits as provided in the dataset; dataset statistics are shown in Table 8.
+
+# A.2 The influence of random seeds on accuracy and equalized odds
+
+In this section, we present graphs similar to the ones in Section 5.2, using accuracy as a measure of performance instead of balanced accuracy. These plots make it obvious how close in performance all models are and emphasize the gap in the fairness measure observed across different random seeds for each fine-tuned model. The results are shown in Figure 6. Note that all Jigsaw models reach an accuracy of approximately $95\%$, with a gap of approximately 0.05 for equalized odds. HateXplain models exhibit a higher variance in accuracy ($4 - 5\%$) across all models, with an even larger gap of 0.15 for equalized odds for most models. Note that each LM has a modest variation in accuracy that spans approximately $1\%$.
+
+For HateXplain, we also experimented with BERTweet (Nguyen et al., 2020), a BERT-base sized model following the RoBERTa pretraining procedure that is further trained on Twitter data, using the checkpoint available in the Hugging Face model hub. In our experiments, BERTweet presented the largest variation for accuracy (results not shown), achieving both the best and the worst accuracy across all models (across the 11 random seeds we used), spanning a spread of $4.5\%$ . The EO measure for BERTweet exhibited a variation of 0.12 for religion. We acknowledge that a more thorough analysis is required to better understand the effects of in-domain pretraining (in this case on tweets) for both accuracy and fairness. For example, recent work showed that model behavior can be adjusted to a set of "target values" if the model is trained on a small, well-behaved dataset (Solaiman and Dennison, 2021).
+
+# A.3 Fair Score Transformer (FST)
+
+In this section, we expand on our discussion of the application of FST in this work.
+
+The generalized equalized odds (GEO) criterion targeted by FST is computed as the maximum of the between-group absolute differences in average scores for positively-labeled and negatively-labeled instances (Wei et al., 2020). It is analogous to EO, except that the predicted probability is used in place of the predicted label.
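+
+A minimal sketch of this GEO computation on predicted scores is shown below (see Wei et al. (2020) for the formal definition; helper names are illustrative):
+
+```python
+import numpy as np
+
+def geo_difference(scores, y_true, groups):
+    """Maximum between-group gap in mean predicted score, computed
+    separately over positively- and negatively-labeled instances.
+    `groups` is a list of boolean membership masks."""
+    worst = 0.0
+    for label in (0, 1):
+        means = [np.mean(scores[(y_true == label) & g]) for g in groups]
+        worst = max(worst, max(means) - min(means))
+    return worst
+```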
+
+
+Table 4: Jigsaw dataset samples.
+
+| Comment text | Toxicity | Group |
| The Atwood fable is Donald, is it? My impression of this noise (over Atwood) is that it's a gimmick by Atwood and her publisher to cash in on the Donald effect. As if we needed slaves in bonnets to remind us that Donald is a jerk (and where was Atwood's novel when Monica was being pawed over?). A word to defenders of women: don't spend your political capital on stupid analogies. | Toxic | Gender |
| I got a question for you, dear, and it is a fair question: We all know what is happening in Syria; where are all the women's marches over the slaughter in that country?. And, why has Trudeau been silent, like his pal Barry Obama, on taking effective military action against Syria? All you lefties are the same: you have no side vision. | Normal | Gender |
+
+
+
+
+Figure 6: Accuracy versus equalized odds for fine-tuned LMs when varying the random seed used in fine-tuning.
+
+Table 5: Jigsaw dataset statistics: sample counts per dataset split and sensitive group.
+
+| Split | Total | Religion | Race | Gender |
| Train | 1443899 | 50748 | 31022 | 70703 |
| Dev | 360975 | 12769 | 7999 | 17869 |
| Test | 97320 | 3316 | 1911 | 4367 |
+
+Regarding issue 1) mentioned in Section 4 (calibration of input scores), we found that the distributions of softmax outputs of the tested LMs are bimodal and highly concentrated near values of 0 and 1 (as commonly observed with deep neural networks). Such skewed distributions appear to violate FST's expectation of probabilities as input and are typically not encountered on the tabular datasets on which FST was previously tested. Thus we experimented with calibrating the LM outputs.
+
+Table 6: The sensitive groups for HateXplain dataset with their corresponding fine-grained annotations.
+
+| Group | Fine-grained annotation |
| race | African, Arab, Asian, Caucasian, Hispanic |
| religion | Islam, Buddhism, Jewish, Hindu, Christian |
| gender | Men, Women |
+
+We considered both logistic regression of the class label on the logit outputs of the LMs (a generalization of temperature scaling (Guo et al., 2017)), as well as linear regression on the logit outputs followed by clipping of the resulting values to the interval [0, 1]. In general, logistic regression proved somewhat beneficial for the Jigsaw dataset and we included it in our results.
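+
+A minimal sketch of the two calibration variants using scikit-learn is shown below, with synthetic stand-in data; it is illustrative rather than the exact calibration code used for the results above.
+
+```python
+import numpy as np
+from sklearn.linear_model import LinearRegression, LogisticRegression
+
+rng = np.random.default_rng(0)
+logits = rng.normal(scale=4.0, size=2000)                  # synthetic stand-in logits
+labels = (logits + rng.normal(size=2000) > 0).astype(int)  # synthetic stand-in labels
+X = logits.reshape(-1, 1)
+
+# 1) Logistic regression on the logit: a generalization of temperature scaling.
+platt = LogisticRegression().fit(X, labels)
+calibrated = platt.predict_proba(X)[:, 1]
+
+# 2) Linear regression on the logit, clipped back to valid probabilities.
+linear = LinearRegression().fit(X, labels)
+calibrated_linear = np.clip(linear.predict(X), 0.0, 1.0)
+```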
+
+
+Table 7: HateXplain dataset samples.
+
+| Text | Toxicity | Group |
| why is anna acting like she black | Normal | Race |
| like brazil many are descendants of west africa social constructs labels classifications have ruined a many people who falsely self identify as white or just do not even know abt their native land all b c of them bum ass caucazoids aka caucasians | Toxic | Race |
+
+Figure 7: FST tuning for BERT (panels: religion, race, gender): Balanced accuracy versus equalized odds on the Jigsaw dataset when varying fairness parameter $\epsilon$ and binary classification threshold $t$ after applying the FST method for group bias mitigation.
+
+Table 8: HateXplain dataset statistics: sample counts per dataset split and sensitive group.
+
+| Split | Total | Religion | Race | Gender |
| Train | 15383 | 3924 | 5418 | 3102 |
| Dev | 1922 | 481 | 672 | 396 |
| Test | 1924 | 468 | 685 | 375 |
+
+Regarding issue 2) (choice of fairness parameter), we found, as noted by Wei et al. (2020), that while the parameter $\epsilon$ controls the deviation from GEO (i.e., the "GEO difference"), this is not always correlated with the EO difference, which is a function of the output after thresholding. Regarding 3) (classification threshold), we found that varying the threshold $t$ can significantly affect equalized odds as well as accuracy and balanced accuracy, and can sometimes even produce a reasonable trade-off between them. For this reason, we included a version of post-processing that only tunes the threshold (see "Threshold post-processing (TPP)" in Section 4). To our knowledge, this effect of the prediction threshold on fairness has not been explored in previous work.
+
+As a result of our observations regarding 2) and 3), we used the following procedure to select a set of $(\epsilon, t)$ pairs to map out a trade-off between fairness and accuracy. The training set used to fine-tune the LMs is never seen by FST. The development dataset ("dev") is used to both tune the FST parameters and evaluate the resulting transformation. As such, the dev dataset was further split into a dev-train set and a dev-eval set. Given an $\epsilon$ value, FST was fit on the dev-train set to ensure a GEO difference of at most $\epsilon$. Then on the dev-eval set, given $\epsilon$ and $t$, scores were transformed by FST with parameter $\epsilon$, thresholded at level $t$ to produce a binary label, and finally evaluated for both fairness and accuracy. Each $(\epsilon, t)$ pair thus yields one point in the equalized odds-accuracy plane, as seen in Figure 7. We selected $(\epsilon, t)$ pairs that are Pareto-efficient on the dev-eval set, to ensure the best fairness-accuracy trade-off.
+
+This is the first time FST is used with unstructured text data and with large datasets on the order of millions of samples. First, we implemented FST following the proposed implementation in Wei et al. (2020). This first implementation suffered from numerical instabilities that led to either slow running times (on the order of hours) or even situations where the method did not converge. We managed to improve upon the computational cost of FST, which was instrumental in scaling to the large Jigsaw dataset and allowing rapid experimentation. Specifically, in the dual ADMM algorithm of Wei et al. (2020), the first step (eq. (14) therein) consists of $n$ parallel optimizations, each involving a single variable. We observed that these optimizations can be done in closed form by solving a cubic equation. We refer to Wei et al. (2021, Appendix B.1) for details of the closed-form solution, as it is not the focus of the present paper. Replacing an iterative optimization with a closed-form solution greatly reduces the computational cost of FST. The improved FST runs on the order of 1-2 minutes for the Jigsaw dataset and in seconds for HateXplain.
+Equally important, it also eliminates instances of the iterative optimization failing to converge.
+
+Figure 8: BERT (panels: religion, race, gender): Balanced accuracy versus equalized odds on the Jigsaw dataset when applying the FST and HPS methods for group bias mitigation and threshold post-processing (TPP) alone.
+
+# A.4 Bias mitigation through post-processing methods
+
+In this section, we present additional results on applying post-processing methods for group bias mitigation. We first discuss the results of parameter tuning for Fair Score Transformer (FST) (Wei et al., 2020). More details about FST itself can be found in Appendix A.3. The FST method has one tunable parameter, $\epsilon$. Using the transformed scores from FST, we also investigate tuning the threshold used in the binary classifier, instead of using the default value of 0.5, as explained in Section 4. Figure 7 depicts the data points obtained by varying $\epsilon$ and, for each $\epsilon$ value, varying the classification threshold. When choosing an operating point, the points on the black Pareto frontier are the most interesting: highest balanced accuracy and lowest equalized odds. For reference, we also show the baseline points without bias mitigation for the dev and test sets. All data points are plotted for fine-tuned BERT. Similar trends are observed for the rest of the models considered in this study and for the HateXplain dataset.
+
+We also experimented with calibrating the scores using logistic regression before post-processing. In Figure 8, we plot the Pareto frontiers of bias mitigation when applying FST, with and without calibration, along with the threshold post-processing (TPP) method. We also show the result of HPS, which yields a single operating point, as well as the baselines without bias mitigation. In general, on the Jigsaw dataset, FST reduces EO with varying degrees of success depending on the model/group. It thus offers an interesting set of points with different accuracy-EO trade-offs. For reference, we show the equivalent point for the test set (orange $x$) for the operating point in dev that achieves an equalized odds of at most 0.05 (orange square). In certain cases, FST manages to lower the equalized odds with minimal or no decrease in accuracy, as seen in the religion and gender columns in Figure 8. Note that all points in the plots, except for the $x$ points, are plotted using the dev dataset split; the $x$ points are test points corresponding to dev points that obtain an EO of at most 0.05.
+
+In comparison, HPS seems particularly effective in lowering the equalized odds and thus improving the fairness of the model, with some penalty on the accuracy. For Jigsaw, applying only TPP (i.e., tuning the threshold used in the binary classification) also offers some interesting operating points. TPP has a small search space compared to FST, and sometimes its Pareto frontier is reduced to a single point, as is the case for the religion group. In general, FST has superior Pareto frontiers compared to TPP alone. In addition, as we will discuss shortly, TPP proved ineffective for the HateXplain dataset. Last, using score calibration before feeding the scores to FST does not seem to offer significant improvements. Similar trends can be observed for the rest of the models.
+
+In Figure 9, we show the results of applying bias mitigation techniques for a few LMs, one for each size category, on the HateXplain dataset with religion as the sensitive group. Unlike on Jigsaw, the results of the bias mitigation techniques follow different trends. HPS still manages to substantially reduce the EO for all models, but with a considerable decrease in balanced accuracy (in some cases, more than six percentage points).
+
+Figure 9 (panels: DistilBERT, BERT, DeBERTa-large): Balanced accuracy versus equalized odds for fine-tuned LMs (religion) on the HateXplain dataset when applying the FST and HPS methods for group bias mitigation and threshold post-processing (TPP) alone.
+
+For FST, tuning $\epsilon$ and the classification threshold does not lead to as large a search space as observed in the Jigsaw case. Moreover, the reduction in EO is more limited, and sometimes the improvement observed on the dev set disappears in test. There are cases, though, such as BERT, where FST successfully reduces EO and the reduction is maintained or even improved in test. Across the board, tuning only the threshold used in classification (TPP) did not lead to improved results, and we omit these from the plots.
+
+Overall, we find the post-processing methods for bias mitigation worth considering. They are straightforward to apply, run on the order of seconds or minutes on the CPU of a laptop, and offer interesting operating points, whereas other methods for bias elimination, such as pre-processing or in-processing techniques, would incur a significant computational cost. Obtaining the Pareto frontiers is essentially instantaneous since the FST search space is small.
+
+# A.5 Other post-processing methods for bias mitigation
+
+In addition to the two post-processing methods that we considered in our study, other post-processing methods for bias mitigation include assigning favorable labels to unprivileged groups in regions of high classifier uncertainty (Kamiran et al., 2012), minimizing error disparity while maintaining classifier calibration (Pleiss et al., 2017), a relaxed nearly-optimal procedure for optimizing equalized odds (Woodworth et al., 2017), shifting the decision boundary for the protected group (Fish et al., 2016), iterative post-processing to achieve unbiased predictions on every identifiable subpopulation (Kim et al., 2019), recalibrating a classifier using a group-dependent threshold to optimize equality of opportunity (defined as the difference between the group-wise true positive rates) (Chzhen et al., 2019), using optimal transport to ensure similarity in group-wise predicted score distributions (Jiang et al., 2020), and a plug-in approach for transforming the predicted probabilities to satisfy fairness constraints (Yang et al., 2020).
+
+# A.6 Reproducibility statement
+
+The data processing we performed for the datasets we used is briefly explained in Appendix A.1. In all our experiments, we used unmodified versions of the model implementations from the Hugging Face transformers library (version 4.3.3), and the main scripts to tune the models are modified versions of the sequence text classification examples accompanying the library. The hyper-parameter tuning we performed was minimal (varying the number of epochs from 1 to 3, two values for the learning rate, 2e-6 and 2e-5, and 11 random seeds). More details on the experimental infrastructure can be found in Section 3.2. The main limiting factor in reproducing the results presented in this study is access to GPUs such as the NVIDIA V100 and A100 and to generous, parallel compute time. At the time of this writing, the implementation of FST that we used is evolving proprietary code that may become available for external use. More details are provided in Appendix A.3. For HPS, we used the open-source implementation of the "equalized odds post-processing" method from the AIF360 toolkit.
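+
+The sweep above amounts to a small grid; a schematic sketch follows, where the `fine_tune_and_evaluate` helper is hypothetical and stands in for the modified Hugging Face example script.
+
+```python
+from transformers import set_seed
+
+LEARNING_RATES = [2e-6, 2e-5]
+EPOCHS = [1, 2, 3]
+SEEDS = range(11)
+
+results = []
+for lr in LEARNING_RATES:
+    for n_epochs in EPOCHS:
+        for seed in SEEDS:
+            set_seed(seed)  # seeds Python, NumPy and PyTorch
+            metrics = fine_tune_and_evaluate(    # hypothetical wrapper around the
+                model_name="bert-base-uncased",  # HF text classification example
+                learning_rate=lr,
+                num_train_epochs=n_epochs,
+                seed=seed)                       # assumed to return a metrics dict
+            results.append({"lr": lr, "epochs": n_epochs, "seed": seed, **metrics})
+```
+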
+# Zero-Shot Dense Retrieval with Momentum Adversarial Domain Invariant Representations
+
+Ji Xin $^{1*}$ , Chenyan Xiong $^{2}$ , Ashwin Srinivasan $^{2}$ , Ankita Sharma $^{2}$ , Damien Jose $^{2}$ , Paul N. Bennett $^{2}$
+
+$^1$ University of Waterloo, $^2$ Microsoft
+ji.xin@uwaterloo.ca, {chenyan.xiong, ashwinsr, ankita.sharma, jose, paul.n.bennett}@microsoft.com
+
+# Abstract
+
+Dense retrieval (DR) methods conduct text retrieval by first encoding texts in the embedding space and then matching them by nearest neighbor search. This requires strong locality properties from the representation space, e.g., close allocations of each small group of relevant texts, which are hard to generalize to domains without sufficient training data. In this paper, we aim to improve the generalization ability of DR models from source training domains with rich supervision signals to target domains without any relevance label, in the zero-shot setting. To achieve that, we propose Momentum adversarial Domain Invariant Representation learning (MoDIR), which introduces a momentum method to train a domain classifier that distinguishes source versus target domains, and then adversarially updates the DR encoder to learn domain invariant representations. Our experiments show that MoDIR robustly outperforms its baselines on $10+$ ranking datasets collected in the BEIR benchmark in the zero-shot setup, with more than $10\%$ relative gains on datasets with enough sensitivity for DR models' evaluation. Source code is available at https://github.com/ji-xin/modir.
+
+# 1 Introduction
+
+Rather than matching texts in the bag-of-words space, Dense Retrieval (DR) methods first encode texts into a dense embedding space (Lee et al., 2019; Karpukhin et al., 2020; Xiong et al., 2021) and then conduct text retrieval using efficient nearest neighbor search (Chen et al., 2018; Guo et al., 2020; Johnson et al., 2021). With pre-trained language models and dedicated fine-tuning techniques, the learned representation space has significantly advanced the first stage retrieval accuracy of many language systems, including web search (Xiong et al.,
+2021), grounded generation (Lewis et al., 2020), open domain question answering (Karpukhin et al., 2020; Izacard and Grave, 2020), etc.
+
+Figure 1: T-SNE plots of embedding space of a BERT reranker for q-d pairs (left) and ANCE dense retriever for queries/documents (right). Both models are trained on web search and transferred to medical search.
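+
+Concretely, the retrieval step described above reduces to encoding followed by nearest neighbor search; a minimal illustration is shown below, with random stand-in embeddings in place of an actual DR encoder.
+
+```python
+import numpy as np
+
+def retrieve(query_emb, doc_embs, k=10):
+    # Inner-product nearest neighbor search over the encoded corpus.
+    scores = doc_embs @ query_emb
+    top = np.argsort(-scores)[:k]
+    return top, scores[top]
+
+rng = np.random.default_rng(0)
+doc_embs = rng.normal(size=(1000, 768))   # stand-in corpus embeddings
+query_emb = rng.normal(size=768)          # stand-in query embedding
+top_ids, top_scores = retrieve(query_emb, doc_embs)
+```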
+
+Purely using the learned embedding space for retrieval has raised concerns on the generalization ability, especially in scenarios without dedicated supervision signals. Many have observed diminishing advantages of DR models in various datasets if they are not fine-tuned with task-specific labels, i.e., in the zero-shot setup (Thakur et al., 2021). However, in many scenarios outside commercial web search, zero-shot is the norm. Obtaining training labels is difficult, expensive, and sometimes infeasible, especially in special domains (e.g., medical) where annotation requires strong expertise or is even prohibited because of privacy constraints. The lack of zero-shot ability hinders the democratization of advancements in dense retrieval from data-rich domains to everywhere else. Many equally, if not more important, real-world search scenarios still rely on unsupervised exact match methods that have been around for decades, e.g., BM25 (Robertson and Jones, 1976).
+
+Within the search pipeline, the generalization of first stage DR models is notably worse than
+subsequent reranking models (Thakur et al., 2021). Reranking models, similar to many classification models, only require a decision boundary between relevant and irrelevant query-document pairs (q-d pairs) in the representation space. In comparison, DR needs good local alignments across the entire space to support nearest neighbor matching, which is much harder to learn.
+
+In Figure 1, we use t-SNE (van der Maaten and Hinton, 2008) to illustrate this difference. We show learned representations of a BERT-based reranker (Nogueira and Cho, 2019) and a BERT-based dense retriever (Xiong et al., 2021), in zero-shot transfer from web (Bajaj et al., 2016) to medical domain (Voorhees et al., 2021). The representation space learned for reranking yields two manifolds with a clear decision boundary; data points in the target domain naturally cluster with their corresponding classes (relevant or irrelevant) from the source domain, leading to good generalization. In comparison, the representation space learned for DR is more scattered. Target domain data points are grouped separately from those of the source domain; it is much harder for the learned nearest neighbor locality to generalize from source to the isolated target domain region.
+
+In this paper, we present Momentum Adversarial Domain Invariant Representations learning (MoDIR), to improve the accuracy of zero-shot dense retrieval (ZeroDR). We first introduce an auxiliary domain classifier that is trained to discriminate source embeddings from target ones. Then the DR encoder is not only updated to encode queries and relevant documents together in the source domain, but also trained adversarially to confuse the domain classifier and to push for a more domain invariant embedding space. To ensure stable and efficient adversarial learning, we propose a momentum method that trains the domain classifier with a momentum queue of embeddings saved from previous iterations.
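+
+The PyTorch sketch below illustrates the kind of training step this describes: the domain classifier is fit on a momentum queue of detached embeddings from past iterations, and the encoder then receives an adversarial signal that ascends the classifier's loss. Dimensions, the queue size, the loss weight, and the `encoder`/`dr_loss_fn` interfaces are assumptions for illustration; this is not the released MoDIR implementation.
+
+```python
+import collections
+import torch
+import torch.nn.functional as F
+
+domain_clf = torch.nn.Sequential(            # source-vs-target domain classifier
+    torch.nn.Linear(768, 256), torch.nn.ReLU(), torch.nn.Linear(256, 2))
+clf_opt = torch.optim.Adam(domain_clf.parameters(), lr=1e-4)
+queue = collections.deque(maxlen=4096)       # momentum queue of past embeddings
+
+def training_step(encoder, enc_opt, src_batch, tgt_batch, dr_loss_fn, alpha=0.1):
+    src_emb = encoder(src_batch)             # (B, 768) source embeddings
+    tgt_emb = encoder(tgt_batch)             # (B, 768) target embeddings
+
+    # 1) Update the domain classifier on the momentum queue plus fresh embeddings.
+    for e in src_emb:
+        queue.append((e.detach(), 0))
+    for e in tgt_emb:
+        queue.append((e.detach(), 1))
+    embs, domains = zip(*queue)
+    clf_loss = F.cross_entropy(domain_clf(torch.stack(embs)), torch.tensor(domains))
+    clf_opt.zero_grad()
+    clf_loss.backward()
+    clf_opt.step()
+
+    # 2) Update the encoder: retrieval loss on the source domain plus an
+    #    adversarial term that maximizes the domain classifier's loss,
+    #    pushing the embedding space toward domain invariance.
+    adv_logits = domain_clf(torch.cat([src_emb, tgt_emb]))
+    adv_targets = torch.tensor([0] * len(src_emb) + [1] * len(tgt_emb))
+    adv_loss = -F.cross_entropy(adv_logits, adv_targets)
+    loss = dr_loss_fn(src_emb) + alpha * adv_loss
+    enc_opt.zero_grad()
+    loss.backward()
+    enc_opt.step()
+```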
+
+Our experiments evaluate the generalization ability of dense retrieval with MoDIR using 15 retrieval tasks from the BEIR benchmark (Thakur et al., 2021). On these retrieval tasks from various domains including biomedical, finance, scientific, etc., MoDIR improves the zero-shot accuracy of two standard models, DPR (Karpukhin et al., 2020) and ANCE (Xiong et al., 2021). On tasks where evaluation labels have sufficient coverage for DR (Thakur et al., 2021), MoDIR's improvements are robust and significant, despite not using any target domain training labels. We also verify the necessity of the proposed momentum approach, without which the domain classifier fails to capture the domain gaps, and the adversarial training does not learn domain invariant representations, resulting in little improvement in ZeroDR.
+
+We conduct further analyses to reveal interesting properties of MoDIR and its learned embedding space. During the adversarial training process, the target domain embeddings are gradually pushed towards the source domain and eventually absorbed as a subgroup of the source. In the learned representation space, our manual examinations find various cases where a target domain query is located close to source queries with similar information needs. This indicates that ZeroDR's generalization ability comes from the combination of information overlaps of source/target domains, and MoDIR's ability to identify the right correspondence between them.
+
+# 2 Related Work
+
+In this section, we recap related work in dense retrieval and adversarial domain adaptation.
+
+Dense Retrieval Different from sparse first stage retrieval models, dense retrieval with Transformer-based models (Vaswani et al., 2017) such as BERT (Devlin et al., 2019) conducts retrieval in the dense embedding space (Lee et al., 2019; Chang et al., 2020; Guu et al., 2020; Karpukhin et al., 2020; Luan et al., 2021). Compared with its sparse counterparts, DR improves retrieval efficiency and also provides comparable or even superior effectiveness for in-domain datasets.
+
+One important research question for DR is how to obtain meaningful negative training instances. DPR (Karpukhin et al., 2020) uses BM25 to find stronger negatives in addition to in-batch random negatives. RocketQA (Qu et al., 2021) uses cross-batch negatives and also filters them with a strong reranking model. ANCE (Xiong et al., 2021) uses an asynchronously updated negative index built from the being-trained DR model to retrieve global hard negatives.
+
+Recently, challenges of ZeroDR have attracted much attention (Thakur et al., 2021; Zhang et al., 2021; Li and Lin, 2021). One way to improve ZeroDR is query generation (Liang et al., 2020; Ma et al., 2021), which first trains a doc2query model in the source domain and then applies the NLG model on target domain documents to generate queries.
+
+The target domain documents and generated queries form weak supervision labels for DR models. Our method differs from them and focuses on directly improving the generalization ability of the learned representation space.
+
+Adversarial Domain Adaptation Unsupervised domain adaptation (UDA) has been studied extensively for computer vision applications. For example, maximum mean discrepancy (Long et al., 2013; Tzeng et al., 2014; Sun and Saenko, 2016) measures domain difference with a pre-defined metric and explicitly minimizes the difference. Following the advent of GAN (Goodfellow et al., 2014), adversarial training for UDA is proposed: an auxiliary domain classifier learns to discriminate source and target domains, while the main classifier model is adversarially trained to confuse the domain classifier (Ganin and Lempitsky, 2015; Bousmalis et al., 2016; Tzeng et al., 2017; Luo et al., 2017; Vu et al., 2020; Vernikos et al., 2020; Tang and Jia, 2020). The adversarial method does not require pre-defining the domain difference metric, allowing more flexible domain adaptation. MoDIR builds upon the success of UDA methods and introduces a new momentum learning technique that is necessary to learn domain invariant representations in the ZeroDR setting.
+
+# 3 Training Domain Invariant Representations for Dense Retrieval
+
+In this work, we aim to improve generalization in ZeroDR under the unsupervised domain adaptation setting (UDA) (Long et al., 2016). Given a source domain with sufficient training signals, the goal is to transfer the DR model to a target domain, with access to its queries and documents, but without any relevance label. This is the common case when applying DR in real-world scenarios: in target domains (e.g., medical), example queries and documents are available but annotating relevance is expensive and may require domain expertise; on the other hand, in the source domain (e.g., web search), training signals are available at large scale (Ma et al., 2020; Thakur et al., 2021).
+
+Our method, MoDIR, improves ZeroDR in the UDA setup by encouraging the DR models to learn a domain invariant representation space that facilitates the generalization from source to target. In this section, we describe (1) how to train a vanilla dense retrieval model, (2) how to train a momentum domain classifier to distinguish the two domains, and (3) how to adversarially train the DR model for domain invariant representations.
+
+# 3.1 Training the Dense Retrieval Model
+
+The standard design of DR is to use a dual-encoder model (Lee et al., 2019; Karpukhin et al., 2020), where an encoder $g$ takes as input a query/document and encodes it into a dense vector. The relevance score of a q-d pair $x = (q, d)$ is computed using a simple similarity function:
+
+$$
+r (x) = \operatorname {s i m} \left(g \left(q; \theta_ {g}\right), g \left(d; \theta_ {g}\right)\right), \tag {1}
+$$
+
+where $\theta_{g}$ is the collection of parameters of $g$ and sim is a vector similarity function.
+
+The training of DR uses labeled q-d pairs in the source domain $x^{s} = (q^{s},d^{s})$ . With relevant q-d pair as $x^{s + }$ and irrelevant pair as $x^{s - }$ , the encoder $g$ is trained to minimize the ranking loss $L_{R}$ :
+
+$$
+\min _ {\theta_ {g}} \sum_ {x ^ {s +}, x ^ {s -}} L _ {R} \left(r \left(x ^ {s +}\right), r \left(x ^ {s -}\right)\right), \tag {2}
+$$
+
+where $L_{R}$ is a ranking loss function. Following its base model (DPR or ANCE), our model samples irrelevant documents using BM25 negatives or global hard negatives. Without loss of generality, other modeling designs are kept the same as ANCE: $g$ is fine-tuned from RoBERTa-base (Liu et al., 2019); the output query/document embeddings are the hidden states of the last layer's [CLS] token; $L_{R}$ is the Negative Log Likelihood (NLL) loss; sim is the dot product.
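+
+To make the dual-encoder setup concrete, a minimal PyTorch sketch of the scoring function in Equation (1) and the NLL ranking loss in Equation (2) is shown below. This is an illustrative outline rather than the authors' released code; the tokenizer/encoder names and the single-negative batching are assumptions.
+
+```python
+import torch
+import torch.nn.functional as F
+from transformers import AutoModel, AutoTokenizer
+
+# Shared encoder g for queries and documents (RoBERTa-base, [CLS] pooling).
+tokenizer = AutoTokenizer.from_pretrained("roberta-base")
+encoder = AutoModel.from_pretrained("roberta-base")
+
+def encode(texts, max_len):
+    batch = tokenizer(texts, padding=True, truncation=True,
+                      max_length=max_len, return_tensors="pt")
+    return encoder(**batch).last_hidden_state[:, 0]  # [CLS] embeddings
+
+def nll_ranking_loss(queries, pos_docs, neg_docs):
+    """Eq. (1)-(2): dot-product relevance, NLL over one positive vs. one hard negative."""
+    q = encode(queries, max_len=64)          # (B, H)
+    d_pos = encode(pos_docs, max_len=512)    # (B, H)
+    d_neg = encode(neg_docs, max_len=512)    # (B, H), e.g. BM25 or ANCE hard negatives
+    pos_scores = (q * d_pos).sum(-1, keepdim=True)   # r(x^{s+})
+    neg_scores = (q * d_neg).sum(-1, keepdim=True)   # r(x^{s-})
+    logits = torch.cat([pos_scores, neg_scores], dim=-1)
+    labels = torch.zeros(q.size(0), dtype=torch.long)  # the positive sits at index 0
+    return F.cross_entropy(logits, labels)
+```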
+
+# 3.2 Estimating the Domain Boundary with Momentum Domain Classifier
+
+To capture domain differences and enable adversarial learning for domain invariance, MoDIR introduces a domain classifier $f$ to predict the probability of a query/document embedding $e$ being source or target, and we use a linear classifier as $f$ :
+
+$$
+f (\mathbf {e}) = \operatorname {s o f t m a x} \left(W _ {f} \mathbf {e}\right). \tag {3}
+$$
+
+The linear classifier has sufficient capacity to distinguish the two domains in the high-dimensional representation space; the main challenge lies in training it. As illustrated in Figure 1, DR's representation space focuses more on locality than on forming manifolds, which makes the domain boundary more difficult to learn. If we simply update $f$ using the same amount of data points as $g$, $f$ fails to accurately estimate the domain boundary; on the other hand, if we naively feed in more data points for $f$, all of them need to be encoded by the expensive encoder $g$, which makes the training process infeasibly slow.
+
+Figure 2: Momentum adversarial training provides a more accurate and robust estimation of the domain boundary in dense retrieval's embedding space.
+
+To achieve the balance between accuracy and efficiency, we introduce the momentum method for the domain classifier, as shown in Figure 2. We maintain a momentum queue $Q$ that records embeddings from multiple previous batches as the additional training data for $f$ . Specifically, at each step, in addition to source domain training data $x^{s}$ , we sample q-d pairs $x^{t}$ from the target domain, and add embeddings of $x^{s}$ and $x^{t}$ to $Q$ . The momentum queue $Q$ at step $k$ includes embeddings $\mathbf{e}_q / \mathbf{e}_d$ from source and target queries/documents for all recent $n$ batches:
+
+$$
+Q _ {k} = \left\{\mathbf {e} _ {q}, \mathbf {e} _ {d} | (q, d) \in B _ {k - n + 1: k} \right\}, \tag {4}
+$$
+
+where $B_{k - n + 1:k}$ is the collection of all data points from the past $n$ batches, including both source and target ones, and $n$ is the momentum step. For simplicity of sampling, we use the 1:1 ratio between source/target data and also between positive/negative source data.
+
+To ensure efficiency of the momentum method, all embeddings $\mathbf{e}$ from $Q$ are detached from the encoder $g$ . Take the query $q^s$ as an example,
+
+$$
+\mathbf {e} _ {q ^ {s}} = \Phi (g (q ^ {s}; \theta_ {g})), \tag {5}
+$$
+
+where $\Phi$ is the stop-gradient operator, i.e., gradients of $\mathbf{e}_{q^s}$ are not back propagated to $\theta_g$ . Since the linear classifier $f$ is significantly smaller and faster than the transformer-based encoder $g$ , this enables efficient training for $f$ .
+
+At each iteration, $f$ is updated by repetitively minimizing the following discrimination loss $L_{D}$, computed with all embeddings from $Q$:
+
+$$
+\min _ {W _ {f}} L _ {D} (\mathbf {e}; f), \quad \mathbf {e} \in Q, \tag {6}
+$$
+
+$$
+L _ {D} (\mathbf {e}; f) = \left\{ \begin{array}{l l} - \log f (\mathbf {e}), & \mathbf {e} \text { from source}, \\ - \log (1 - f (\mathbf {e})), & \mathbf {e} \text { from target}, \end{array} \right. \tag {7}
+$$
+
+where $L_{D}$ is a standard classification loss. In this way, at each iteration, the domain classifier $f$ is trained with more signals than the encoder $g$ (the entire $Q$ versus only one batch), ensuring an accurate estimation of the domain boundary. The detached embeddings from $Q$ also ensure training efficiency.
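+
+A minimal sketch of the momentum queue and the repeated classifier update might look as follows; the queue size handling, optimizer, and number of inner update steps are illustrative assumptions rather than the exact training recipe.
+
+```python
+from collections import deque
+import torch
+import torch.nn.functional as F
+
+class MomentumQueue:
+    """Stores detached query/document embeddings from the last n batches (Eq. 4-5)."""
+    def __init__(self, momentum_step_n, batch_size):
+        self.buffer = deque(maxlen=momentum_step_n * batch_size)
+
+    def push(self, embeddings, domain_labels):
+        # Stop-gradient Phi in Eq. (5): stored embeddings never back-propagate into g.
+        for e, y in zip(embeddings.detach(), domain_labels):
+            self.buffer.append((e, y))
+
+    def all(self):
+        embs, labels = zip(*self.buffer)
+        return torch.stack(list(embs)), torch.tensor(labels)
+
+# Linear domain classifier f (Eq. 3): a 2-way softmax over source/target.
+domain_clf = torch.nn.Linear(768, 2)
+clf_opt = torch.optim.Adam(domain_clf.parameters(), lr=5e-6)
+
+def update_domain_classifier(queue, inner_steps=1):
+    """Eq. (6)-(7): repeatedly minimize L_D on every embedding stored in Q."""
+    embs, labels = queue.all()       # labels: 0 = source, 1 = target
+    for _ in range(inner_steps):
+        loss = F.cross_entropy(domain_clf(embs), labels)
+        clf_opt.zero_grad()
+        loss.backward()
+        clf_opt.step()
+```
+
+Because the stored embeddings are detached, the repeated inner updates only touch the small linear layer, which is what keeps this step cheap.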
+
+# 3.3 Adversarial Learning for Domain Invariant Representations
+
+MoDIR adversarially trains the encoder $g$ to generate domain invariant representations that are hard for $f$ to distinguish. This is done by minimizing the adversarial loss $L_{M}$ . Here we choose the widely used Confusion loss (Tzeng et al., 2017):
+
+$$
+\begin{array}{l} L _ {M} (x; g, f) = - \frac {1}{2} \left(\log f (g (q)) + \log f (g (d)) \right. \\ \left. + \log (1 - f (g (q))) + \log (1 - f (g (d)))\right), \tag {8} \\ \end{array}
+$$
+
+where $x \in \{x^s, x^t\}$ is a q-d pair from either the source or the target domain. It reaches its minimum when the embeddings are domain invariant, so that the domain classifier predicts a 50%-50% probability for all data. In order for the encoder to learn domain invariance, we freeze the domain classifier and update only the encoder when minimizing $L_M$:
+
+$$
+\min _ {\theta_ {g}} \lambda \sum_ {x \in \left\{x ^ {s}, x ^ {t} \right\}} L _ {M} (x; g, f). \tag {9}
+$$
+
+The hyperparameter $\lambda$ balances the learning of DR ranking in the source domain (Equation (2)) and the learning of domain invariance (Equation (9)).
+
+| Dataset | BM25 Hole@10 | DPR Hole@10 | ANCE Hole@10 | BM25 nDCG@10 | DPR nDCG@10 | DPR+MoDIR nDCG@10 | ANCE nDCG@10 | ANCE+MoDIR nDCG@10 |
+| TREC-COVID | 10.6% | 33.0% | 22.4% | 0.616 | 0.561 | 0.591 (+5.3%) | 0.654 | 0.676 (+3.4%) |
+| Touché | 29.8% | 63.3% | 56.9% | 0.605 | 0.243 | 0.258 (+6.2%) | 0.284 | 0.315 (+10.9%) |
+| DBPedia | 41.3% | 73.2% | 65.8% | 0.288 | 0.236 | 0.240 (+1.7%) | 0.281 | 0.284 (+1.1%) |
+| NFCorpus | 74.1% | 85.2% | 83.1% | 0.297 | 0.208 | 0.212 (+1.9%) | 0.237 | 0.244 (+3.0%) |
+| Quora | 88.7% | 87.3% | 87.1% | 0.742 | 0.842 | 0.848 (+0.7%) | 0.852 | 0.856 (+0.5%) |
+| BioASQ | 80.7% | 92.0% | 89.5% | 0.514 | 0.232 | 0.247 (+6.5%) | 0.306 | 0.320 (+4.6%) |
+| HotpotQA | 87.7% | 92.3% | 90.9% | 0.601 | 0.371 | 0.387 (+4.3%) | 0.456 | 0.462 (+1.3%) |
+| FEVER | 92.6% | 92.1% | 91.2% | 0.648 | 0.589 | 0.607 (+3.1%) | 0.669 | 0.680 (+1.6%) |
+| FiQA | 93.4% | 91.9% | 91.5% | 0.239 | 0.275 | 0.276 (+0.4%) | 0.295 | 0.296 (+0.3%) |
+| ArguAna | 92.7% | 92.6% | 92.6% | 0.441 | 0.414 | 0.413 (-0.2%) | 0.415 | 0.418 (+0.7%) |
+| NQ | 94.9% | 93.2% | 92.6% | 0.310 | 0.398 | 0.402 (+1.0%) | 0.446 | 0.442 (-0.9%) |
+| SciFact | 91.5% | 93.2% | 92.8% | 0.620 | 0.478 | 0.476 (-0.4%) | 0.507 | 0.502 (-1.0%) |
+| SCIDOCS | 92.2% | 94.4% | 93.8% | 0.156 | 0.108 | 0.108 (+0.0%) | 0.122 | 0.124 (+1.6%) |
+| Climate-FEVER | 95.7% | 94.7% | 94.1% | 0.179 | 0.176 | 0.175 (-0.6%) | 0.198 | 0.206 (+4.0%) |
+| CQADupStack | 94.8% | 95.2% | 94.9% | 0.316 | 0.281 | 0.280 (-0.4%) | 0.296 | 0.297 (+0.3%) |
+
+Table 1: Overall performance and label coverage (Hole rate) on tasks from BEIR. Relative improvements of MoDIR over its base DR model DPR/ANCE are shown in percentages. Datasets are ordered by ANCE's Hole rates, and datasets with lower Hole rates provide more accurate evaluation.
+
+To summarize, for each training batch in the source domain, the domain classifier $f$ and the encoder $g$ are optimized by:
+
+$$
+\min _ {W _ {f}} L _ {D} (\mathbf {e}; f), \quad \mathbf {e} \in Q, \tag {10}
+$$
+
+$$
+\min _ {\theta_ {g}} \sum_ {x ^ {s +}, x ^ {s -}} L _ {R} \left(r \left(x ^ {s +}\right), r \left(x ^ {s -}\right)\right) + \lambda \sum_ {x \in \{x ^ {s}, x ^ {t} \}} L _ {M} (x; g, f), \tag {11}
+$$
+
+where $f$ is trained to estimate the boundary between source/target and $g$ is trained to provide domain invariant representations that also capture relevance matching in the source domain.
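+
+Putting Equations (8)-(11) together, one training iteration could be sketched as below. It reuses the helpers from the earlier sketches (encode, nll_ranking_loss, domain_clf, update_domain_classifier, and the momentum queue), and the batch layout is an illustrative assumption.
+
+```python
+import torch
+
+def confusion_loss(embeddings):
+    """Eq. (8): push f's source probability towards 0.5 for every embedding."""
+    p_source = torch.softmax(domain_clf(embeddings), dim=-1)[:, 0]
+    return -0.5 * (torch.log(p_source) + torch.log(1.0 - p_source)).mean()
+
+def train_step(src_batch, tgt_batch, lam, enc_opt, queue):
+    """Eq. (10)-(11): update f on the momentum queue, then g on ranking + adversarial loss."""
+    # (1) Encode the current source/target batch and refresh the momentum queue.
+    src_emb = encode(src_batch["queries"] + src_batch["docs"], max_len=512)
+    tgt_emb = encode(tgt_batch["queries"] + tgt_batch["docs"], max_len=512)
+    queue.push(src_emb, [0] * len(src_emb))
+    queue.push(tgt_emb, [1] * len(tgt_emb))
+    update_domain_classifier(queue)
+
+    # (2) Update the encoder only; enc_opt holds encoder parameters, so gradients
+    #     flowing through the (effectively frozen) classifier never change f.
+    rank_loss = nll_ranking_loss(src_batch["queries"],
+                                 src_batch["pos_docs"], src_batch["neg_docs"])
+    adv_loss = confusion_loss(torch.cat([src_emb, tgt_emb], dim=0))
+    loss = rank_loss + lam * adv_loss
+    enc_opt.zero_grad()
+    loss.backward()
+    enc_opt.step()
+```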
+
+# 4 Experiments
+
+This section describes the experimental setups and evaluates the effectiveness of MoDIR. Furthermore, we dive deep into the importance of momentum training and the properties of the domain invariant embedding space, which provide new insights for ZeroDR.
+
+# 4.1 Datasets
+
+We choose the MS MARCO passage dataset (Bajaj et al., 2016) as the source domain dataset and choose the 15 publicly available datasets from the BEIR benchmark (Thakur et al., 2021) as target domain datasets (details in Appendix A). These datasets cover a large number of various domains, including biomedical, finance, scientific, etc. We treat each target domain dataset separately and produce an individual model for each of them, following the ZeroDR setting described in Section 3.
+
+# 4.2 Effectiveness of MoDIR
+
+We build MoDIR on top of DPR and ANCE, but it can also be applied to other DR frameworks similarly. Table 1 shows the Hole rates and nDCG scores on the BEIR benchmark; we omit the Hole rates of MoDIR since they are very similar to its baseline DPR/ANCE's. We first discuss Hole rates and baseline selection, and then discuss effectiveness of each model.
+
+Hole Rates and DR Evaluation A hole is an unlabeled q-d pair retrieved by a model, and the percentage of holes among all retrieved q-d pairs is the Hole rate. Datasets with high Hole rates for dense models are less sensitive to dense models' effectiveness (Xiong et al., 2021), and we therefore consider datasets with low Hole rates more important, since they provide more accurate measurements for ZeroDR. On the other hand, many of BEIR's datasets are annotated with candidates generated by sparse retrieval models at the time of dataset construction, so their evaluation is biased towards sparse models. Take TREC-COVID as an example: ANCE underperforms BM25 under the original annotation, but it achieves the state of the art (SOTA) after extra labels based on ANCE's predictions are added (Thakur et al., 2021).
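+
+As a concrete illustration of the metric, the Hole rate of a retrieval run can be computed as below; the data structures are hypothetical and only meant to make the definition precise.
+
+```python
+def hole_rate(retrieved, qrels, k=10):
+    """Fraction of retrieved q-d pairs in the top-k that have no relevance judgment.
+
+    retrieved: dict mapping query_id -> ranked list of doc_ids
+    qrels:     dict mapping query_id -> set of judged doc_ids (relevant or not)
+    """
+    holes, total = 0, 0
+    for qid, docs in retrieved.items():
+        judged = qrels.get(qid, set())
+        for doc_id in docs[:k]:
+            total += 1
+            if doc_id not in judged:
+                holes += 1
+    return holes / max(total, 1)
+```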
+
+Baselines Our baselines include BM25 (Robertson and Jones, 1976), DPR (Karpukhin et al., 2020), and ANCE (Xiong et al., 2021). The original DPR is trained on NQ (Kwiatkowski et al., 2019), but we instead train DPR on MARCO, which not only eliminates training dataset differences but also provides better overall results.
+
+| Method | $L_M$ | $n$ | TREC-COVID | Touché |
+| Single | Confusion | 1 | 0.650 | 0.294 |
+| Repeat | Confusion | 1k | 0.664 | 0.309 |
+| Momentum | Confusion | 100 | 0.649 | 0.294 |
+| Momentum (default) | Confusion | 1k | **0.676** | 0.315 |
+| Momentum | Minimax | 1k | 0.666 | 0.322 |
+| Momentum | GAN | 1k | 0.641 | **0.325** |
+| Vanilla ANCE | - | - | 0.654 | 0.284 |
+
+Table 2: Ablation studies show that momentum is critical for learning domain invariant representations. The default setting is marked in the Method column and the best scores are in bold.
+
+BEIR also reports results of other methods, such as docT5query (Nogueira et al., 2020), TAS-B (Hofstätter et al., 2021), GenQ (Ma et al., 2021), ColBERT (Khattab and Zaharia, 2020), etc. However, they are not directly comparable with MoDIR since they involve stronger supervision signals from rerankers (TAS-B), data augmentation from expensive sequence-to-sequence models (docT5query and GenQ), or high-latency late interaction (ColBERT). MoDIR instead directly improves the generalization ability of the representation space; it is orthogonal to these methods and can be combined with them for better performance.
+
+Effectiveness Comparison From Table 1 we can see that MoDIR improves DPR and ANCE's overall effectiveness in the ZeroDR setting. On datasets with low Hole rates, where evaluation is more stable, the gains are significant; on datasets with high Hole rates, the gains are smaller but still stable. Moreover, to present a fair comparison in the realistic ZeroDR setting, results of MoDIR are obtained without hyperparameter tuning or checkpoint selection: in the ZeroDR setting, there is no access to relevance labels in the target domain during training/validation. For all target domain datasets, we keep most of the experimental settings the same with ANCE and evaluate checkpoints after the same number of training steps (details in Appendix B). This evaluation setup is the closest to ZeroDR in the real world, but it may not show the full potential and the best empirical results for MoDIR. We further study this in Section 4.5.
+
+# 4.3 Effectiveness of Momentum Training and Ablation Studies
+
+Our ablation studies evaluate the importance of the momentum method and the effects of other experimental setups. We compare different training setups against vanilla ANCE, using TREC-COVID and Touché, which have the best label coverage (lowest Hole rates), and show the results in Table 2.
+
+
+Figure 3: Global and Local Domain-Acc at different training steps with/without momentum (top/bottom). Panels (a)/(b) show documents/queries with momentum; panels (c)/(d) show documents/queries without momentum.
+
+
+Firstly, we evaluate the effect of not using the momentum queue: at each iteration, the domain classifier is trained either with a single batch ($n = 1$) or by repeating the current batch $n = 1\mathrm{k}$ times. We can see that using a single batch fails to improve over ANCE, indicating the necessity of using more data to train the domain classifier; repeating the current batch also provides smaller improvements than using different batches from the queue. Secondly, we use a smaller momentum step $n = 100$ for momentum training, which also yields little improvement. This shows that $n$ has to be sufficiently large for the momentum method to work, demonstrating the necessity of our efficiency technique of detaching embeddings before storing them in the queue. Thirdly, we train MoDIR with two other choices of $L_{M}$ from Equation (9): Minimax and GAN. The GAN loss is less stable, as described by Tzeng et al. (2017), while Minimax performs comparably to Confusion. This shows that MoDIR can also be applied with other domain adaptation training methods.
+
+# 4.4 Convergence of Adversarial Training with Momentum
+
+In this experiment, we study how our momentum method helps adversarial training converge to a domain invariant embedding space.
+
+| Checkpoint | KNN-Source% @0 | KNN-Source% @10k | KNN-Source% @30k | KNN-Source% @50k | nDCG@10 @0 | nDCG@10 @10k | nDCG@10 @30k | nDCG@10 @50k |
+| w/ Momentum | 5.2% | 6.2% | 14.0% | 17.2% | 0.654 | 0.676 | 0.689 | 0.724 |
+| w/o Momentum | 5.2% | 5.4% | 5.6% | 5.6% | 0.654 | 0.650 | 0.673 | 0.668 |
+
+Table 3: K-Nearest Neighbor Source Percentage (KNN-Source%) and nDCG@10 scores after different number of training steps of ANCE with/without momentum, on TREC-COVID.
+
+To quantify domain invariance, we use Domain Classification Accuracy (Domain-Acc), which includes two measurements based on the choice of domain classifier: (1) directly take the domain classifier used in MoDIR's training ($f$ in Section 3.2) and record its accuracy when applied to a new batch, which leads to Local Domain-Acc; (2) randomly initialize a new domain classifier and train it globally on source and target embeddings, which leads to Global Domain-Acc. Global Domain-Acc measures the real degree of domain invariance: it is lower when embeddings of the two domains are not easily separable. Local Domain-Acc is an efficient approximation provided by the domain classifier $f$.
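+
+One simple way to obtain Global Domain-Acc is to fit a fresh linear probe on the embeddings and report held-out accuracy; the sketch below uses scikit-learn and is our reading of the procedure, not the exact evaluation script.
+
+```python
+import numpy as np
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import train_test_split
+
+def global_domain_acc(src_embs, tgt_embs):
+    """Train a new linear domain classifier from scratch; lower held-out accuracy
+    means the source and target embeddings are harder to separate."""
+    X = np.concatenate([src_embs, tgt_embs])              # (N, H) encoder outputs
+    y = np.concatenate([np.zeros(len(src_embs)), np.ones(len(tgt_embs))])
+    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
+    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
+    return probe.score(X_te, y_te)
+```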
+
+In Figure 3, we compare Global and Local Domain-Acc on the TREC-COVID dataset when training ANCE with/without momentum (without momentum is the single setting described in Section 4.3). With momentum, Local Domain-Acc quickly increases to be comparable with Global Domain-Acc. The domain classifier $f$ (used in MoDIR's training) converges quickly and Global Domain-Acc starts to decrease, showing that embeddings from the two domains become less separable. Note that Local Domain-Acc does not decrease because $f$ has seen and memorized almost all data, while Global Domain-Acc's domain classifier is always tested on unseen data for accurate results. This shows that momentum helps with the balance of adversarial training, ensuring its convergence towards a domain invariant representation space.
+
+On the other hand, when momentum is not used, there exists a long-lasting gap between Local and Global Domain-Acc, showing that $f$ does not capture the domain boundary well. As a result, the two domains remain (almost) linearly separable in the embedding space, as shown by the fact that Global Domain-Acc does not decrease, and the model fails to produce domain invariant representations.
+
+# 4.5 Impact of Domain Invariance
+
+In this subsection, we study the behavior and benefits of ANCE+MoDIR in learning domain invariance. We focus on TREC-COVID as it provides the most robust evaluation for ZeroDR.
+
+Learning Domain Invariance with Momentum We show how the momentum method gradually pushes for a domain invariant representation space. To measure how much the two domains are mixed together, we use the $K$-Nearest Neighbor Source Percentage (KNN-Source%): we index source and target documents together; given a target domain query in the embedding space, we retrieve its top-100 nearest documents from the index and calculate the percentage of source documents among these nearest neighbors; the average percentage over all target domain queries is reported. A higher KNN-Source% means that the target domain embeddings are surrounded by more source domain ones, indicating a more domain invariant representation space.
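+
+KNN-Source% can be sketched with a brute-force nearest-neighbor search as below; in practice an approximate nearest neighbor index would be used, and the function is only an illustrative outline.
+
+```python
+import numpy as np
+
+def knn_source_percentage(tgt_query_embs, src_doc_embs, tgt_doc_embs, k=100):
+    """Average fraction of source documents among each target query's top-k neighbors."""
+    doc_embs = np.concatenate([src_doc_embs, tgt_doc_embs])   # joint index
+    is_source = np.array([True] * len(src_doc_embs) + [False] * len(tgt_doc_embs))
+    fractions = []
+    for q in tgt_query_embs:
+        scores = doc_embs @ q                    # dot-product relevance, as in Eq. (1)
+        top_k = np.argsort(-scores)[:k]
+        fractions.append(is_source[top_k].mean())
+    return float(np.mean(fractions))
+```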
+
+The results are in Table 3. With momentum, both KNN-Source% and nDCG gradually increase as training proceeds. This shows that when target domain embeddings are pushed towards the source domain, the ranking performance of the target domain also improves. On TREC-COVID, MoDIR eventually reaches 0.724, which is the SOTA for first stage retrievers. On the other hand, without momentum (the single setting in Section 4.3), KNN-Source% and nDCG scores hardly increase.
+
+We also use t-SNE (van der Maaten and Hinton, 2008) to visualize the learned representation space at different training steps in Figure 4. Before training with MoDIR, the two domains are well separated in the representation space learned by ANCE. With more MoDIR training steps, the target domain is pushed towards the source domain and gradually becomes a subset of it. Without momentum, the two domains remain separated, which is consistent with the observations from Table 3.
+
+ZeroDR Effectiveness VS Domain Invariance We study the correlation between ZeroDR ranking effectiveness and domain invariance. We use Global Domain-Acc as the indicator of domain invariance and plot it with the corresponding ZeroDR nDCG scores during training in Figure 5.
+
+Figure 4: T-SNE of the representation space after different training steps (in parentheses), with/without momentum. Blue: source (MARCO); orange: target (TREC-COVID). Panels (a)-(d): MoDIR at 0, 10k, 30k, and 50k steps; panels (e)-(h): without momentum at the same steps.
+
+Figure 5: Global Domain-Acc and target domain ZeroDR nDCG scores at different training steps: TREC-COVID (left two) and Touché (right two). Panels (a)/(c): Global Domain-Acc; panels (b)/(d): nDCG@10.
+
+Global Domain-Acc starts at near $100\%$ and decreases as training proceeds, showing that source and target embeddings are almost linearly separable at the beginning but are gradually pushed together. ZeroDR accuracy improves as Global Domain-Acc decreases, showing that domain invariance is the source of ZeroDR's improvements. We also record that the DR accuracy on the source domain (MARCO) decreases by no more than $0.5\%$ . This indicates that the high dimensional embedding space has sufficient capacity to learn domain invariant representations while maintaining relevance matching in the source domain.
+
+# 4.6 Case Study
+
+We show two cases of queries from TREC-COVID and their nearest MARCO queries before and after MoDIR training in Table 4. In the first case, MoDIR pays more attention to "transmission", and potentially retrieves more documents about the transmission of diseases, thereby improving the nDCG score; documents about "coronavirus" are also likely to be retrieved by MoDIR since it is a very noticeable word. In the second case, it focuses on "mRNA" more than "vaccine". However, since the mRNA vaccine is relatively new with few appearances in the MARCO dataset, the shift in focus fails to improve MoDIR for this query.
+
+These examples help reveal the source of generalization ability in ZeroDR. For DR models to be able to generalize, the source domain itself needs to include relevance information that resembles the target domain's needs; if there is no such information, as in the second example, generalization becomes a hard challenge.
+
+| Target | what are the transmission routes of coronavirus? | nDCG@10 gain: +0.23 |
+| Source Before | • what is the coronavirus • incubation period for coronavirus • what are symptoms of coronavirus | |
+| Source After | • countries where guinea worm is transmitted • what is the most common method of hiv transmission • through which body system are cancer cells able to travel to different locations in the body? | |
+| Target | what is known about an mRNA vaccine for the SARS-CoV-2 virus? | nDCG@10 gain: -0.12 |
+| Source Before | • is there a vaccine for hepatitis • is there a vaccine for tuberculosis • shingles vaccination needed for those without chickenpox | |
+| Source After | • what makes rna • what is used to make mrna • what is the mmr vaccine called | |
+
+Table 4: Case study: nearest source queries of a target query before and after MoDIR training.
+
+When the source domain has such coverage, MoDIR is able to align target queries to source queries with similar information needs in its domain invariant representation space, and these alignments enable DR models to generalize.
+
+# 5 Conclusion and Future Work
+
+In this paper, we present MoDIR, a new representation learning method that improves the zero-shot generalization ability of dense retrieval models. We first show that dense retrieval models differ from classification models in that they emphasize locality properties in the representation space. Then we present a momentum-based adversarial training method that robustly pushes text encoders to provide a more domain invariant representation space for dense retrieval. Our experiments demonstrate that, compared with ANCE, a recent SOTA DR model, MoDIR's improvements are robust overall and significant on datasets where ZeroDR's evaluation is more accurate.
+
+We conduct a series of studies to show the effects of our momentum method in learning domain invariant representations. Without momentum, the adversarial learning is unstable. The inherent variance of the DR embedding space hinders the convergence of the domain classifier. With momentum training, the model fuses the target domain data into the source domain representation space and discovers related information from the source domain, thus improving generalization of ZeroDR.
+
+We view MoDIR as an initial step of zero-shot dense retrieval, an area that democratizes the rapid advancements in search technologies to many real-world scenarios. Our approach inherits the success of domain adaptation techniques and upgrades them by addressing the unique challenges of ZeroDR. Understanding the dynamics of dense retrieval is an important future direction for not only representation learning research but also real-world applications.
+
+# Acknowledgments
+
+We thank anonymous reviewers for their constructive feedback.
+
+# References
+
+Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268.
+Alexander Bondarenko, Maik Frobe, Meriem Beloucif, Lukas Gienapp, Yamen Ajjour, Alexander Panchenko, Chris Biemann, Benno Stein, Henning Wachsmuth, Martin Potthast, and Matthias Hagen. 2020. Overview of Touché 2020: Argument Retrieval. In Working Notes Papers of the CLEF 2020 Evaluation Labs, volume 2696 of CEUR Workshop Proceedings.
+Vera Boteva, Demian Gholipour, Artem Sokolov, and Stefan Riezler. 2016. A full-text learning to rank dataset for medical information retrieval. In European Conference on Information Retrieval, pages 716-722. Springer.
+Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.
+Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In International Conference on Learning Representations.
+Qi Chen, Haidong Wang, Mingqin Li, Gang Ren, Scarlett Li, Jeffery Zhu, Jason Li, Chuanjie Liu, Lintao Zhang, and Jingdong Wang. 2018. SPTAG: A library for fast approximate nearest neighbor search.
+
+Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, and Daniel Weld. 2020. SPECTER: Document-level representation learning using citation-informed transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2270-2282, Online. Association for Computational Linguistics.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+Thomas Diggelmann, Jordan Boyd-Graber, Jannis Bulian, Massimiliano Ciaramita, and Markus Leippold. 2020. CLIMATE-FEVER: A dataset for verification of real-world climate claims. arXiv preprint arXiv:2012.00614.
+Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1180-1189, Lille, France. PMLR.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
+Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020. Accelerating large-scale inference with anisotropic vector quantization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3887-3896. PMLR.
+Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrievalaugmented language model pre-training. arXiv preprint arXiv:2002.08909.
+Faegheh Hasibi, Fedor Nikolaev, Chenyan Xiong, Krisztian Balog, Svein Erik Bratsberg, Alexander Kotov, and Jamie Callan. 2017. DBpedia-Entity v2: A test collection for entity search. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '17, page 1265-1268, New York, NY, USA. Association for Computing Machinery.
+Sebastian Hofstätter, Sheng-Chieh Lin, Zheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In Proceedings of the 44th International ACM SIGIR Conference on
+
+Research and Development in Information Retrieval, SIGIR '21, page 113-122, New York, NY, USA. Association for Computing Machinery.
+Doris Hoogeveen, Karin M. Verspoor, and Timothy Baldwin. 2015. CQADupStack: A benchmark data set for community question-answering research. In Proceedings of the 20th Australasian Document Computing Symposium, ADCS '15, New York, NY, USA. Association for Computing Machinery.
+Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.
+Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 7(3):535-547.
+Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.
+Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, page 39-48, New York, NY, USA. Association for Computing Machinery.
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.
+Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096, Florence, Italy. Association for Computational Linguistics.
+Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459-9474. Curran Associates, Inc.
+Minghan Li and Jimmy Lin. 2021. Encoder adaptation of dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2110.01599.
+
+Davis Liang, Peng Xu, Siamak Shakeri, Cicero Nogueira dos Santos, Ramesh Nallapati, Zhiheng Huang, and Bing Xiang. 2020. Embedding-based zero-shot retrieval through query generation. arXiv preprint arXiv:2009.10270.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
+Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip S. Yu. 2013. Transfer feature learning with joint distribution adaptation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
+Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. 2016. Unsupervised domain adaptation with residual transfer networks. arXiv preprint arXiv:1602.04433.
+Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, dense, and attentional representations for text retrieval. Transactions of the Association for Computational Linguistics, 9:329-345.
+Zelun Luo, Yuliang Zou, Judy Hoffman, and Li F Fei-Fei. 2017. Label efficient learning of transferable representations across domains and tasks. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
+Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. 2020. Zero-shot neural retrieval via domain-targeted synthetic query generation. arXiv preprint arXiv:2004.14503.
+Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. 2021. Zero-shot neural passage retrieval via domain-targeted synthetic question generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1075-1088, Online. Association for Computational Linguistics.
+Macedo Maia, Siegfried Handschuh, André Freitas, Brian Davis, Ross McDermott, Manel Zarrouk, and Alexandra Balahur. 2018. WWW'18 open challenge: Financial opinion mining and question answering. In Companion Proceedings of the The Web Conference 2018, WWW '18, page 1941-1942, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
+Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv preprint arXiv:1901.04085.
+Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 708-718, Online. Association for Computational Linguistics.
+
+Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835-5847, Online. Association for Computational Linguistics.
+Stephen E. Robertson and Karen Spärck Jones. 1976. Relevance weighting of search terms. JASIS, 27(3):129-146.
+Baochen Sun and Kate Saenko. 2016. Deep coral: Correlation alignment for deep domain adaptation. In European conference on computer vision, pages 443-450. Springer.
+Hui Tang and Kui Jia. 2020. Discriminative adversarial domain adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):5940-5947.
+Nandan Thakur, Nils Reimers, Andreas Rückle, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663.
+James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERIFICATION. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.
+George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, et al. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. BMC bioinformatics, 16(1):1-28.
+Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
+Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. 2014. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474.
+Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579-2605.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all
+
+you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
+Giorgos Vernikos, Katerina Margatina, Alexandra Chronopoulou, and Ion Androutsopoulos. 2020. Domain Adversarial Fine-Tuning as an Effective Regularizer. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3103-3112, Online. Association for Computational Linguistics.
+Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R. Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2021. TREC-COVID: Constructing a pandemic information retrieval test collection. SIGIR Forum, 54(1).
+Thuy-Trang Vu, Dinh Phung, and Gholamreza Haffari. 2020. Effective unsupervised domain adaptation with adversarially trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6163-6173, Online. Association for Computational Linguistics.
+Henning Wachsmuth, Shahbaz Syed, and Benno Stein. 2018. Retrieval of the best counterargument without prior topic knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 241-251, Melbourne, Australia. Association for Computational Linguistics.
+David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7534-7550, Online. Association for Computational Linguistics.
+Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In International Conference on Learning Representations.
+Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369-2380, Brussels, Belgium. Association for Computational Linguistics.
+Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval. arXiv preprint arXiv:2108.08787.
+
+# A Datasets Details
+
+Target domain datasets used in our experiments are collected in the BEIR benchmark (Thakur et al., 2021) and include the following domains:
+
+- General-domain (Wikipedia): DBPedia (Hasibi et al., 2017), HotpotQA (Yang et al., 2018), FEVER (Thorne et al., 2018), and NQ (Kwiatkowski et al., 2019).
+- Bio-medical: TREC-COVID (Voorhees et al., 2021), NFCorpus (Boteva et al., 2016), and BioASQ (Tsatsaronis et al., 2015).
+- Finance: FiQA (Maia et al., 2018).
+- Controversial arguments: Touché (Bondarenko et al., 2020) and ArguAna (Wachsmuth et al., 2018).
+- Duplicate questions: Quora (Thakur et al., 2021) and CQADupStack (Hoogeveen et al., 2015).
+- Scientific: SciFact (Wadden et al., 2020), SCIDOCS (Cohan et al., 2020), and Climate-FEVER (Diggelmann et al., 2020).
+
+# B Detailed Experimental Settings
+
+We follow the design of ANCE for the DR encoder's modeling and training. We initialize the encoder with the publicly released checkpoints: "ANCEwarmup" for DPR+MoDIR and "ANCE-passage" for ANCE+MoDIR. We randomly initialize the domain classifier. Detailed hyperparameter choices are shown in Table 5. We also use an exponential decay routine for the hyperparameter $\lambda$ to improve training stability, where its value is reduced continuously, halving every $10\mathrm{k}$ steps.
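+
+For concreteness, one continuous halving schedule consistent with this description is the following; the exact functional form is our reading of the decay routine rather than a specification given in the paper:
+
+$$
+\lambda_{\mathrm{step}} = \lambda_{0} \cdot 0.5^{\,\mathrm{step} / 10000}, \qquad \lambda_{0} = 1.0.
+$$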
+
+| Hyperparameter | Value |
+| *Same as ANCE* | |
+| Learning rate for $\theta_g$ | 1e-6 |
+| Effective batch size | 16 |
+| Maximum query length | 64 |
+| Maximum document length | 512 |
+| *New for MoDIR* | |
+| Learning rate for $W_f$ | 5e-6 |
+| Early stopping steps | 10k |
+| Momentum step $n$ | 1k |
+| Initial $\lambda$ | 1.0 |
+
+Table 5: Detailed hyperparameter choices of MoDIR.
\ No newline at end of file
diff --git a/zeroshotdenseretrievalwithmomentumadversarialdomaininvariantrepresentations/images.zip b/zeroshotdenseretrievalwithmomentumadversarialdomaininvariantrepresentations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..566c38db1edbcbab13b191f5ed623f6c0736192f
--- /dev/null
+++ b/zeroshotdenseretrievalwithmomentumadversarialdomaininvariantrepresentations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe20f65c1540eaae63bea21ad24d709194b72bfc2200e916e3dda344c6a41e9f
+size 699483
diff --git a/zeroshotdenseretrievalwithmomentumadversarialdomaininvariantrepresentations/layout.json b/zeroshotdenseretrievalwithmomentumadversarialdomaininvariantrepresentations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8cfcdeab0515b173c372f239cc5127b6fe81bceb
--- /dev/null
+++ b/zeroshotdenseretrievalwithmomentumadversarialdomaininvariantrepresentations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3006089937aaaee7bbc7e2df8813f623379d9002144b83f8349390822cbb10a6
+size 440285
diff --git a/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/d41ec8e1-5940-4bae-8a77-496ac894e3e5_content_list.json b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/d41ec8e1-5940-4bae-8a77-496ac894e3e5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b284707da0dd9b4ee24b221c6c0f7f8158ec0907
--- /dev/null
+++ b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/d41ec8e1-5940-4bae-8a77-496ac894e3e5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03539d91fec8c16dfb8780a051f57ad814021aef8efb110660ca7037de7ce3e3
+size 63225
diff --git a/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/d41ec8e1-5940-4bae-8a77-496ac894e3e5_model.json b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/d41ec8e1-5940-4bae-8a77-496ac894e3e5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..cff833ef657f7741150499de0ffd7f58f1b8cd67
--- /dev/null
+++ b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/d41ec8e1-5940-4bae-8a77-496ac894e3e5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5bd336866758ce0ec4c810f839114a746ff005247e7e4ca7b61411b95eefbc3e
+size 76030
diff --git a/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/d41ec8e1-5940-4bae-8a77-496ac894e3e5_origin.pdf b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/d41ec8e1-5940-4bae-8a77-496ac894e3e5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4b95c8ab7ba0ca4108181dc838a6cbad74869637
--- /dev/null
+++ b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/d41ec8e1-5940-4bae-8a77-496ac894e3e5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1c9118edc09717ed9f98031bffee886818ec83d4580dbeea613f01c18304d45
+size 313504
diff --git a/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/full.md b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..00b1f0cd3b9512f45147ae8c2c2aa2fd1db0ae1f
--- /dev/null
+++ b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/full.md
@@ -0,0 +1,233 @@
+# Zero-shot Learning for Grapheme to Phoneme Conversion with Language Ensemble
+
+Xinjian Li and Florian Metze and David R Mortensen and Shinji Watanabe and Alan W Black
+
+Language Technologies Institute, Carnegie Mellon University
+
+{xinjianl, fmetze, dmortens, swatanab, awb}@cs.cmu.edu
+
+# Abstract
+
+Grapheme-to-Phoneme (G2P) has many applications in NLP and speech fields. Most existing work focuses heavily on languages with abundant training datasets, which limits the scope of target languages to less than 100 languages. This work attempts to apply zero-shot learning to approximate G2P models for all low-resource and endangered languages in Glottolog (about 8k languages). For any unseen target language, we first build the phylogenetic tree (i.e. language family tree) to identify top- $k$ -nearest languages for which we have training sets. Then we run models of those languages to obtain a hypothesis set, which we combine into a confusion network to propose a most likely hypothesis as an approximation to the target language. We test our approach on over 600 unseen languages and demonstrate it significantly outperforms baselines. $^{1}$
+
+# 1 Introduction
+
+Grapheme-to-Phoneme (G2P) plays a crucial role in many NLP tasks. In particular, it is used heavily in many speech-related tasks such as speech recognition and speech synthesis (Arik et al., 2017; Miao et al., 2015). Even in the latest end-to-end systems, it still has a strong impact on speech performance (Hayashi et al., 2021). Typically, the G2P task is language-dependent: many language-specific factors affect the G2P process, such as the general characteristics of scripts (Ager, 2008), phonotactic constraints (Hayes and Wilson, 2008) and other orthography factors (Frost and Katz, 1992). For example, in Table 1, Mandarin and Japanese do not use the Latin script, so they cannot share their G2P models with English. As a consequence, to develop a G2P model, we need either to create a training set for the target language, like (CMU, 2000), or to ask linguists to explicitly define a set of orthographic rules to map from graphemes to phonemes (Mortensen et al., 2018).
+
+| Language | Grapheme | Phoneme |
| English | hello | /hələʊ/ |
| Mandarin | 你好 | /nixaʊ/ |
| French | bonjour | /bɔʒuːr/ |
| German | hallo | /halo/ |
| Japanese | こんにちは | /konnichiwa/ |
| Spanish | hola | /ola/ |
+
+Table 1: A small sample of G2P examples from high-resource languages in our training set.
+
+Both approaches have achieved success for high-resource languages; however, they can only account for a small number of the world's languages. The majority still do not have access to G2P due to limited training resources. A good G2P model would be beneficial to many speech tasks in low-resource languages (Li et al., 2020a,b; Yan et al., 2021).
+
+In this work, we attempt to tackle this challenging problem by using the language ensemble approach. Our approach allows us to propose an approximated G2P baseline to all languages present in the GlottoLog database: around 8000 of them (Nordhoff and Hammarstrom, 2011). The main insight of our approach is that we can approximate the G2P model of an unseen language using those of related languages because languages related to the target language should have similar orthographic rules (of both the context-free and context-dependent type). For example, a native speaker of English (a Germanic language) is likely to make accurate guesses about how a text in German (another Germanic language) would be pronounced. In Table 1, both German and English pronounce the "h" grapheme explicitly, but Spanish (a Romance language) does not share the same property.
+
+We define the similarity between languages as the shortest distance between two languages in the phylogenetic tree (i.e. language family tree). We first build models for the subset of languages (training languages) where we have a large enough training set (e.g., Italian, Spanish, etc.). Then, for each unseen language (e.g., Catalan), we first find the top-$k$ nearest training languages (like Italian, Spanish, etc.) and use those languages' G2P models to generate $k$ hypotheses. Finally, we ensemble the G2P outputs by building a confusion network and discovering the most likely sequence as an approximation to the target language.
+
+In our experiments, we build a large dataset from Wiktionary in which we use 260 languages as the training languages and test our approach on 600 unseen languages. We apply our approach to 3 different architectures: a joint-sequence n-gram model (Novak et al., 2016), an LSTM sequence-to-sequence model (Rao et al., 2015), and a transformer-based sequence-to-sequence model (Peters et al., 2017). Using any of the architectures, our approach outperforms all baselines by more than $5\%$ PER (phoneme error rate).
+
+The main contributions of this work are as follows:
+
+1. A novel approach to approximate target language G2P models using the nearest languages in a phylogenetic tree
+2. An approach to ensemble predictions from multiple outputs using confusion networks.
+3. A demonstration that our approach achieves significantly better performance than baselines when testing on 600 unseen languages.
+
+# 2 Related Work
+
+Traditionally, a G2P component is built using rule-based models. For example, the phonological constraints can be incorporated into context-sensitive grammars and implemented using finite-state transducers (Kaplan and Kay, 1994). However, designing the rules requires many hours from linguists and can be prohibitive for low-resource languages if they have deep orthographies.
+
+Statistical models overcome this problem by learning the rules automatically. Typically, there are two steps in building such a model: first, the sequences of phonemes and graphemes are aligned to each other, then another prediction model is built on top of the alignment. The alignment model is typically done using Expectation and Maximization (Ristad and Yianilos, 1998; Jiampojamarn and Kondrak, 2010). The prediction model can be done using neural networks (Sejnowski and Rosenberg, 1987), decision trees (Black et al., 1998), joint-sequence models (Bisani and Ney, 2008) and WFST-based n-gram models (Novak et al., 2016). More recently, deep neural networks have been applied to the G2P task. Various architectures have been explored, for example, RNNs (Rao et al., 2015; Yao and Zweig, 2015; Lee et al., 2020), CNNs (Yolchuyeva et al., 2019) and Transformers (Yolchuyeva et al., 2020).
+
+Traditionally, each G2P model was built for one high-resource language. Recently, many researchers have started to focus on low-resource G2P models. One related work adapts high-resource language models to low-resource language models by measuring similarity between languages and phonemes (Deri and Knight, 2016). This previous work creates a new training set for every low-resource language by adapting the training set from the top-3 nearest languages. However, there are several issues with this approach. First, it has to prepare separate training sets and n-gram models for every testing language, which is quite computationally expensive. It also suffers from the limited training set problem even after merging the top-3 languages, because the vocabulary size of most training languages is less than 100, which is insufficient to train any stable neural models. In contrast, we only prepare one unified training set and one unified model in our neural approach, which circumvents these problems. Additionally, the testing languages and training languages are mixed in that work, so the performance on unseen languages is not clear. Only a limited number of papers so far focus on developing G2P models for unseen languages. The most common strategy is to drop the target language information and make predictions using a shared multilingual model (Peters et al., 2017; Bleyan et al., 2019). This is one of our baselines (the global language model) in this work.
+
+# 3 Approach
+
+In this section, we describe our zero-shot learning approach. We first introduce three G2P models to be used for supervised learning and covering high-resource languages. Next, we define the language similarity and language families. Finally, we explain how to ensemble the nearest languages' models to predict G2P for an unseen language.
+
+# 3.1 Monolingual Model
+
+In this subsection, we introduce our monolingual G2P models: a joint n-gram model based on WFSTs and two neural sequence-to-sequence models, an LSTM and a transformer. We select these models because they are the three baseline models used in the SIGMORPHON multilingual G2P task (Gorman et al., 2020). These models are trained for every training language and then used as building blocks to approximate G2P models for unseen test languages.
+
+The joint n-gram model is a standard monolingual G2P model (Novak et al., 2016). For each training language, the dataset is first aligned using Expectation Maximization, and an n-gram model is then built using a WFST$^{3}$. The neural model is a standard sequence-to-sequence model; we tried two common architectures, a bidirectional LSTM and a transformer. Unlike the n-gram model, the neural model is trained by combining all training sets into one large dataset. To distinguish languages, an ISO 639-3 language ID token is attached to the input sequence; for example, the English ID token is prepended to "hello", so the input becomes the ID token followed by " h e l l o". This approach was explored in previous work (Peters et al., 2017). It allows parameters to be shared across languages, so even a language with a limited training set can benefit from the high-resource languages.
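+The following minimal sketch (not the released code; the `<eng>`-style tag format and the data layout are assumptions made for illustration) shows how such a language-tagged training example can be constructed.
+
+```python
+# Build a (source, target) pair for the shared multilingual seq2seq G2P model:
+# the ISO 639-3 language ID is prepended as an extra token on the source side.
+def make_example(word: str, iso639_3: str, phonemes: list[str]):
+    src = [f"<{iso639_3}>"] + list(word)   # e.g. ["<eng>", "h", "e", "l", "l", "o"]
+    tgt = list(phonemes)                   # e.g. ["h", "ə", "l", "oʊ"]
+    return src, tgt
+
+if __name__ == "__main__":
+    print(make_example("hello", "eng", ["h", "ə", "l", "oʊ"]))
+```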
+
+# 3.2 Phylogenetic Tree and Nearest Languages
+
+The models discussed in the previous subsection can predict phonemes for any training language, but they cannot deal with unseen languages. Our main contribution in this work is to select the most closely related training languages and then effectively combine their models to approximate the target language. In this subsection, we introduce the notion of nearest languages in terms of the phylogenetic tree (i.e., the language family tree), and then explain how we ensemble the nearest languages.
+
+There are many metrics that measure the distance between languages from different perspectives (Dryer and Haspelmath, 2013; Littell et al., 2017). In this work, we only consider the phylogenetic tree (i.e., the language family tree), because phylogenetic information is available for a larger portion of languages than any other basis of linguistic distance or similarity: Glottolog provides language family information for around 8000 languages (Nordhoff and Hammarström, 2011).
+
+Figure 1 shows a subtree of the entire phylogenetic tree; in particular, it illustrates two major branches of the linguistic Stammbaum, Germanic and Italic, both children of the Proto-Indo-European (PIE) node. The tree indicates, for instance, that English and Dutch are closely related languages, as are Norwegian and Icelandic. To measure the distance between any pair of languages, we compute the length of the shortest path between them. In our example, the English/Dutch pair has a distance of 2, and the English/Norwegian pair a distance of 4. The shortest path can be computed efficiently using the Lowest Common Ancestor (LCA):
+
+$$
+d(l_1, l_2) = H(l_1) + H(l_2) - H(\mathrm{LCA}(l_1, l_2)) \tag{1}
+$$
+
+where $d(l_1, l_2)$ is the distance between languages $l_1$ and $l_2$, and $H$ computes the height of a node in the tree. The time complexity is $O(\log(M))$, where $M$ is the maximum height of the phylogenetic tree (Cormen et al., 2009). Given the entire language set $L$ and the training languages $T \subset L$, we can compute the $k$ nearest training languages for every language $l \in L$; these nearest languages are what we ensemble.
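+The distance can be computed from parent pointers alone. The sketch below (assumed data structures, not the released code) returns the length of the shortest path through the LCA and reproduces the English/Dutch and English/Norwegian distances from the example above.
+
+```python
+# Phylogenetic distance as the shortest path through the lowest common ancestor.
+# `parent` maps each tree node to its parent; a synthetic "ROOT" joins families.
+def ancestors(node: str, parent: dict[str, str]) -> list[str]:
+    """Path from `node` up to the root, inclusive."""
+    path = [node]
+    while node in parent:
+        node = parent[node]
+        path.append(node)
+    return path
+
+def tree_distance(l1: str, l2: str, parent: dict[str, str]) -> int:
+    up1, up2 = ancestors(l1, parent), ancestors(l2, parent)
+    steps_from_l2 = {n: d for d, n in enumerate(up2)}
+    for d1, n in enumerate(up1):
+        if n in steps_from_l2:            # first shared ancestor = LCA
+            return d1 + steps_from_l2[n]
+    raise ValueError("nodes are not connected")
+
+if __name__ == "__main__":
+    parent = {"English": "West Germanic", "Dutch": "West Germanic",
+              "Norwegian": "North Germanic", "West Germanic": "Germanic",
+              "North Germanic": "Germanic", "Germanic": "ROOT"}
+    print(tree_distance("English", "Dutch", parent))      # 2
+    print(tree_distance("English", "Norwegian", parent))  # 4
+```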
+
+Note that the original tree structure in Glottolog groups languages into separate top-level families, so languages belonging to different top-level families have no direct path between them. To connect all languages, we add a root node and make all top-level families its direct children. Our approach also relies on several assumptions that do not always hold: for example, we assume that languages belonging to the same family share similar orthographies, but this is not always the case, since orthographies are also shaped by non-linguistic factors such as politics and culture. Additionally, we assume each language uses a single script, but some languages are written in multiple scripts; for example, Uzbek is written in Perso-Arabic, Cyrillic, and Latin scripts.
+
+
+Figure 1: Illustration of a partial phylogenetic tree (i.e., language family tree). The subtree has Proto-Indo-European as the root of the family (many other top-level language families also exist). The Germanic and Italic branches descend (not directly) from Proto-Indo-European and are further divided into the modern languages used today. This information helps us compute the similarity between languages.
+
+Despite these limitations, language family information provides a reasonable starting point.
+
+# 3.3 Model Ensemble
+
+After obtaining the nearest languages and a monolingual model for each training language, we can use those models to approximate the target model. In particular, we are interested in combining the prediction outputs from different models into a single output. If the models are local prediction models (i.e., for each grapheme we decide whether to generate a phoneme and which phoneme to generate) (Sejnowski and Rosenberg, 1987; Black et al., 1998), the ensemble task is simple: since one phoneme prediction is made at every grapheme position, we can use voting to decide the most likely phoneme.
+
+$$
+[\hat{p}] = \operatorname{argmax}_{[p]} \sum_{i} \mathbf{1}([p] = [p]_{i}) \tag{2}
+$$
+
+However, combining general sequence-to-sequence neural models is more complicated. Different models predict output sequences of different lengths, so voting at each position would be meaningless. For example, suppose the two phoneme sequences "/helo/" and "/elo/" are generated from "hello" by two different language models; it is difficult to average /h/ and /e/ because they correspond to different graphemes. To solve this problem, we use a robust approach to ensemble outputs of variable length. Our approach is similar to the ROVER system (Fiscus, 1997), a commonly used approach for combining multiple speech recognition outputs into one. It has been applied to combine phoneme sequences (Schlippe et al., 2014), but only in a monolingual scenario where different models are combined to improve performance. This work focuses on combining multilingual outputs and on modifying the standard word-based network to account for phonological structure.
+
+One actual example from our dataset is illustrated in Figure 2. First, we build one confusion network (or lattice) per language in our nearest-language set. A raw confusion network represents a single hypothesis as a directed graph in which each edge corresponds to a single phoneme from the hypothesis$^{4}$. When we compose multiple confusion networks into one, there is typically more than one edge connecting two nodes. The set of edges connecting two contiguous nodes is typically referred to as a confusion set (or correspondence set) (Fiscus, 1997; Mangu et al., 2000). For example, the first confusion set of the right network in Figure 2 is $\{/t/, /s/\}$. The goal of our ensemble approach is to compose all confusion networks into a single network and then pick the best hypothesis from the composed network.
+
+
+Figure 2: An illustration of an actual ensemble example from our dataset. The input is 'that' from Old Dutch (odt), whose top-2 nearest languages in our training set are Dutch (nld) and Middle Dutch (dum). The left-hand side shows the two hypotheses generated from those languages, which we compose into a confusion network. The composed network has three confusion sets and votes '/t a t/' as the final prediction.
+
+
+Unlike the original work, in which hypotheses are composed in no specific order, we iteratively compose the network in nearest-first order: we first compose the nearest and second-nearest confusion networks into a single network, then merge the third-nearest network into it, and so on. In each composition step, we align two networks by computing the similarity between pairs of confusion sets. While the standard network uses exact matching in this step, we relax the exact-matching scheme and use a coarser matching strategy that takes phonological structure into account. In particular, we use phonologically-equivalent classes, which collapse similar sounds into a small number of classes (Mortensen et al., 2016). This means we can more easily match $/a/$ and $/o/$ (a vowel pair) than $/a/$ and $/s/$ (a vowel-consonant pair). After composing all confusion networks into one network, the most likely phoneme sequence is generated from the final network: we pick one phoneme per confusion set and concatenate them. The phoneme in each confusion set is selected by voting; when multiple candidates have equal votes, we break the tie by selecting the candidate generated from the nearest language. Algorithm 1 summarizes the full procedure.
+
+# 4 Experiments
+
+In this section, we present the experimental results of our G2P models. We first introduce the main dataset used to build our models, then describe the baseline models and G2P architectures used in our experiments, and finally demonstrate that the proposed ensemble approach outperforms the baseline models across all architectures.
+
+Algorithm 1: G2P algorithm
+
+    Data: input, lang (grapheme sequence and its language)
+    Result: output (ensembled phoneme sequence)
+    klangs $\leftarrow$ KNearestLanguage(lang)
+    hyps $\leftarrow []$
+    for klang $\in$ klangs do
+        hyp $\leftarrow$ G2P(input, klang)   /* generate a hypothesis for every nearest language */
+        hyps.append(hyp)
+    end
+    x $\leftarrow$ ConfusionNetwork()
+    for hyp $\in$ hyps do
+        n $\leftarrow$ ConfusionNetwork(hyp)
+        a $\leftarrow$ align(x, n)
+        x $\leftarrow$ composite(x, n, a)
+    end
+    output $\leftarrow []$
+    for cs $\in$ x do
+        p $\leftarrow$ vote(cs)   /* vote 1 phoneme per confusion set */
+        output.append(p)
+    end
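+The following Python sketch is a simplified, illustrative version of Algorithm 1, not the authors' implementation: hypotheses from the nearest languages are iteratively aligned into a list of confusion sets, and one phoneme is voted per set, with ties broken by the nearest language. The `same_class` function is a crude stand-in for the PanPhon-based phonological equivalence classes described above.
+
+```python
+# Simplified language-ensemble sketch: confusion sets are lists of (phoneme, rank)
+# pairs, where rank 0 is the nearest language.
+def same_class(a: str, b: str) -> bool:
+    vowels = set("aeiouyɛɔəæ")
+    return a == b or (a in vowels) == (b in vowels)
+
+def align(network, hyp):
+    """Edit-distance DP between existing confusion sets and a phoneme sequence."""
+    n, m = len(network), len(hyp)
+    cost = [[0] * (m + 1) for _ in range(n + 1)]
+    for i in range(n + 1):
+        for j in range(m + 1):
+            if i == 0 or j == 0:
+                cost[i][j] = i + j
+                continue
+            match = 0 if any(same_class(p, hyp[j - 1]) for p, _ in network[i - 1]) else 1
+            cost[i][j] = min(cost[i - 1][j - 1] + match,  # align set with phoneme
+                             cost[i - 1][j] + 1,           # set gets no phoneme
+                             cost[i][j - 1] + 1)           # phoneme opens a new set
+    ops, i, j = [], n, m
+    while i > 0 or j > 0:
+        if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + (
+                0 if any(same_class(p, hyp[j - 1]) for p, _ in network[i - 1]) else 1):
+            ops.append((i - 1, j - 1)); i, j = i - 1, j - 1
+        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
+            ops.append((i - 1, None)); i -= 1
+        else:
+            ops.append((None, j - 1)); j -= 1
+    return list(reversed(ops))
+
+def compose(network, hyp, rank):
+    merged = []
+    for si, hj in align(network, hyp):
+        cs = list(network[si]) if si is not None else []
+        if hj is not None:
+            cs.append((hyp[hj], rank))
+        merged.append(cs)
+    return merged
+
+def ensemble(hyps):
+    """`hyps` are phoneme sequences ordered from nearest to farthest language."""
+    network = [[(p, 0)] for p in hyps[0]]
+    for rank, hyp in enumerate(hyps[1:], start=1):
+        network = compose(network, hyp, rank)
+    output = []
+    for cs in network:
+        # Most frequent phoneme wins; ties are broken by the nearest language.
+        best = max(cs, key=lambda pr: (sum(p == pr[0] for p, _ in cs), -pr[1]))
+        output.append(best[0])
+    return output
+
+if __name__ == "__main__":
+    print(ensemble([list("tat"), list("dat"), list("tad")]))  # -> ['t', 'a', 't']
+```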
+
+
+# 4.1 Data
+
+Our training and testing data are derived from Wiktionary, a large multilingual website containing lexical information for many languages, including many low-resource languages. A previous work prepared a dataset from Wiktionary (Deri and Knight, 2016), but its training and testing languages are mixed: many testing languages also appear as training languages.
+
+
+Figure 3: Log-scaled histogram of the number of languages grouped by vocabulary size in Wiktionary. The language with over $400\mathrm{k}$ vocabulary items is English; most languages are low-resource languages for which we have fewer than 100 Wiktionary entries.
+
+To demonstrate our approach on unseen languages, we create a new dataset from the latest Wiktionary dump. First, we download the dump and extract all words with pronunciation information$^{5}$. We group all words by language, which gives us 972 languages in total. However, not all languages provide a similar amount of training data. Figure 3 shows the log-scaled histogram of language counts for different vocabulary sizes. Only one language, English, has more than 400k vocabulary items; most languages are concentrated in the lowest histogram bar. In our dataset, the majority of languages have fewer than 100 vocabulary items, so the model needs to handle low-resource training scenarios.
+
+Next, most languages from Wiktionary can be assigned an ISO 639-3 ID, which can be located in our phylogenetic tree. As mentioned in the previous section, our phylogenetic tree is built from the Glottolog database (Nordhoff and Hammarström, 2011), which contains phylogenetic information for 7915 languages. We split all languages into training and testing languages depending on vocabulary size: a language is a training language if its vocabulary size is above a predefined threshold; otherwise, it is classified as a testing language.
+
+| Dataset | # Languages | # Vocabulary |
+| --- | --- | --- |
+| Training set | 269 | 1,672,444 |
+| Testing set | 605 | 4,796 |
+| All | 874 | 1,677,240 |
+
+Table 2: Statistics of the Wiktionary dataset we used in the experiment. 269 languages are used for training and 605 languages are used for testing.
+
+Typically, there is a trade-off when selecting the threshold: a lower threshold increases the number of training languages and makes it easier to find nearest languages, but it also makes training more difficult because of the limited vocabularies and reduces the number of testing languages. In our experiments, the threshold is set to 50, following previous work (Deri and Knight, 2016); the statistics of the training and test sets are shown in Table 2. We have 269 training languages and 605 testing languages. Most training languages have a large vocabulary, while the testing languages have only 8 vocabulary items per language on average. There are 9082 distinct graphemes and 416 distinct phonemes; the grapheme inventory is much larger because many languages use non-Latin scripts, for example, our grapheme set contains around 4000 distinct Chinese characters. We train both the n-gram and neural models using only the training languages and then test them on the testing languages, which are never seen during training. Evaluation uses the average PER (phoneme error rate) across all testing languages.
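+A small sketch (with an assumed data layout) of this split: a language becomes a training language only if its Wiktionary vocabulary exceeds the threshold of 50 entries.
+
+```python
+# Partition Wiktionary languages into training and testing sets by vocabulary size.
+from collections import defaultdict
+
+def split_languages(entries, threshold=50):
+    """`entries` is an iterable of (iso639_3, word, phonemes) tuples."""
+    vocab = defaultdict(set)
+    for lang, word, _ in entries:
+        vocab[lang].add(word)
+    train = {l for l, words in vocab.items() if len(words) > threshold}
+    test = set(vocab) - train
+    return train, test
+```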
+
+# 4.2 Baselines
+
+In our experiments, we consider three baseline models. The fixed language model is trained only on the English dataset. The global language model is a shared model trained on a mixture of all training sets; it ignores the target language ID during inference and was explored in previous work (Peters et al., 2017). The nearest language model can be seen as a special case of our proposed model: we find the training language most similar to the target language and run inference with that language's model. For each baseline, we investigate three different architectures:
+
+| Model | PER (n-gram) | Add (n-gram) | Del (n-gram) | Sub (n-gram) | PER (LSTM) | Add (LSTM) | Del (LSTM) | Sub (LSTM) | PER (Transformer) | Add (Transformer) | Del (Transformer) | Sub (Transformer) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Fixed Model | 76.0 | 4.52 | 9.39 | 62.1 | 78.1 | 4.53 | 20.4 | 53.2 | 78.5 | 3.2 | 19.0 | 56.2 |
+| Global Model | 70.4 | 6.89 | 9.86 | 53.6 | 72.8 | 3.4 | 29.0 | 43.4 | 74.2 | 2.9 | 20.6 | 50.8 |
+| Nearest Model | 68.4 | 4.51 | 12.4 | 51.5 | 43.8 | 12.1 | 4.0 | 27.6 | 45.4 | 15.8 | 3.6 | 26.1 |
+| Ensemble Model | 55.0 | 0.56 | 23.6 | 30.9 | 35.7 | 10.0 | 3.4 | 22.2 | 39.8 | 13.9 | 3.1 | 22.8 |
+
+Table 3: Experimental results of our approach, comparing the ensemble model with three baselines (Fixed Model, Global Model, Nearest Model) under three architectures (n-gram, LSTM, transformer). In all settings, the proposed model outperforms the baselines.
+
+the n-gram, LSTM, and transformer architectures. We use OpenNMT-py$^{6}$ for the neural models. The LSTM model uses the framework's default configuration: 2 standard LSTM layers for both the encoder and the attention-based decoder, each with a hidden size of 500; it is optimized with SGD at a learning rate of 1.0. The transformer model uses the framework's WMT sample configuration$^{7}$: 6 layers for both the encoder and decoder with attention and feed-forward sizes of 500, a positional encoding layer, and 8 self-attention heads; the optimizer is Adam with a learning rate of 2.0 and 8000 warmup steps. Both neural models are trained for 20k steps. In the ensemble model, we use the top-10 nearest languages ($k = 10$) in our main experiment.
+
+# 4.3 Results
+
+Table 3 shows our experimental results. For each G2P architecture (n-gram, LSTM, transformer), we report the ensemble model's results as well as the three baselines. For the n-gram architecture, the fixed language model reaches $76\%$ PER, the global language model $70\%$, and the nearest language model further improves this to $68\%$. All of these models perform poorly, but for different reasons: the fixed language model is trained only on English data and therefore cannot handle the orthographic rules of other languages, while the global language model suffers from the inconsistency of the mixed training set: the same grapheme may map to different phonemes in different languages, so it cannot learn rules that are consistent across languages.
+
+Recall that the grapheme "h" has different pronunciations in English and Spanish. Finally, the nearest language model suffers from the fact that the nearest language might itself be low-resource: as mentioned in the previous section, even though we restrict training languages to those with more than 50 vocabulary items, a large proportion still have only 50 to 100 items, which may be insufficient to train a good model. Moreover, relying on a single language leads to high variance. The proposed ensemble model alleviates these issues: it relies on more than one language when predicting for the target language, so even if one of them is low-resource, the others can compensate, and using more languages also reduces variance. The proposed model significantly improves the PER to $55.0\%$.
+
+Table 3 also shows the performance of the two neural models, the LSTM and the transformer. Interestingly, in the fixed-language setting the neural models do not outperform the n-gram model; they are even slightly worse, because they overfit the English dataset even more and cannot capture the orthographic rules of other languages. The global model shows the same trend and again fails to fit each language. However, the nearest language model reduces the error rate by almost $30\%$. Unlike the n-gram architecture, where each language's model is trained on a separate dataset, the neural models share one architecture and distinguish languages only by a language tag, which allows efficient parameter sharing with low-resource languages. Ensembling further reduces the error rate by more than $5\%$.
+
+Figure 4: The effect of the number of nearest languages used when ensembling models. The best performance is reached with the top-10 languages.
+
+In our experiments, the LSTM and transformer models show similar trends, but the LSTM performs better than the transformer. The reason might be that the transformer has far more hyperparameters to tune, and the framework's default sample configuration may not be optimal. As the main contribution of this work is a general approach to ensembling languages rather than an exploration of neural architectures, we focus only on how to ensemble models of different languages.
+
+# 4.4 Ensemble Analysis
+
+It is interesting to examine how the number of ensembled languages affects performance. Figure 4 shows the influence of the number of languages for the LSTM model. PER drops quickly once we start ensembling, reaches its minimum at 10 nearest languages, and then increases very slowly. We observe a bias-variance trade-off in the number of languages: when the number is small, the prediction relies heavily on each individual language, causing high variance for the target language; increasing the number of languages alleviates this variance, but using too many languages decreases accuracy because the selected languages are no longer close to the target language, which introduces more bias.
+
+| Model | Errors | Most Common Errors |
+| --- | --- | --- |
+| Nearest | Add | /a/, /k/, /u/, /i/, /n/, /o/ |
+| Nearest | Del | /a/, /i/, /ʔ/, /e/, /j/, /u/ |
+| Nearest | Sub | (/a/, /o/), (/o/, /u/), (/r/, /l/), (/t/, /d/) |
+| Top-10 ensemble | Add | /a/, /i/, /k/, /u/, /s/, /o/ |
+| Top-10 ensemble | Del | /a/, /ʔ/, /i/, /e/, /u/, /j/ |
+| Top-10 ensemble | Sub | (/r/, /l/), (/a/, /aː/), (/i/, /iː/), (/ɛ/, /e/) |
+
+Table 4: Most frequent errors of the LSTM model. The first group shows the errors of the nearest-language model; the second group shows the errors of the top-10 ensemble model.
+
+To further understand the behavior of the model, we also show the Addition, Deletion, and Substitution curves in Figure 4. After we start ensembling (from 2 languages), additions generally increase while deletions decrease, and substitutions decrease first and then remain relatively flat. The opposite trends of additions and deletions can be explained by the ensembling procedure: when we introduce a new hypothesis, some of its phonemes may not align to any existing confusion set in the network; to incorporate them, we must create new confusion sets, which leads to more emitted phonemes. Emitting more phonemes in turn lowers the deletion rate. Since additions and deletions largely cancel each other, the PER curve closely follows the substitution curve. The ensemble model not only improves the substitution error quantitatively, it also improves the errors qualitatively: Table 4 shows the most frequent errors made by the nearest language model and by the top-10 ensemble model. The most frequent substitution errors (/a/, /o/) and (/o/, /u/) are replaced by (/a/, /aː/) and (/i/, /iː/). The latter pairs are much closer to each other (they have phonological distances of 1, while the former have larger distances), so they are qualitatively less severe errors.
+
+# 5 Limitations
+
+While we achieve reasonable performance on our testing languages, our approach has several limitations. First, both the training and testing languages are limited to those available in Wiktionary. The full Glottolog phylogenetic tree has 110 top-level branches, but our dataset only spans 40 of them; to apply our approach to unseen languages in the remaining 70 branches, we would have to rely on unrelated languages to build the ensemble, which would likely degrade performance. Second, since our approach depends heavily on Glottolog and Wiktionary, it cannot be applied to a language that is missing from the Glottolog database or whose Wiktionary entries are of poor quality. Finally, many of the 8k languages do not have orthographies, so it may be difficult or meaningless to evaluate G2P performance for them.
+
+# 6 Conclusion
+
+In this work, we propose a zero-shot learning method to approximate G2P models for 8k languages. We use the phylogenetic tree to measure the distance between languages and to combine multilingual outputs. We test our approach on 600 unseen languages and show that it significantly outperforms the baselines. We hope the proposed model can be used in speech tasks such as phone recognition for low-resource languages (Li et al., 2021). We will release our datasets and models for 8k languages to allow more researchers to explore this direction.$^{8}$
+
+# References
+
+Simon Ager. 2008. Omniglot: writing systems and languages of the world. Retrieved January 27, 2008.
+Sercan Ö Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, et al. 2017. Deep voice: Real-time neural text-to-speech. In International Conference on Machine Learning, pages 195-204. PMLR.
+Maximilian Bisani and Hermann Ney. 2008. Joint-sequence models for grapheme-to-phoneme conversion. Speech communication, 50(5):434-451.
+Alan W Black, Kevin Lenzo, and Vincent Pagel. 1998. Issues in building general letter to sound rules. In The third ESCA/COCOSDA workshop (ETRW) on speech synthesis.
+Harry Bleyan, Sandy Ritchie, Jonas Fromseier Mortensen, and Daan van Esch. 2019. Developing pronunciation models in new languages faster by exploiting common grapheme-to-phoneme correspondences across languages. In *INTERSPEECH*, pages 2100–2104.
+CMU. 2000. The cmu pronunciation dictionary.
+Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. 2009. Introduction to algorithms. MIT press.
+Aliya Deri and Kevin Knight. 2016. Grapheme-to-phoneme models for (almost) any language. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 399–408.
+Matthew S Dryer and Martin Haspelmath. 2013. The world atlas of language structures online.
+Jonathan G Fiscus. 1997. A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (rover). In 1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings, pages 347-354. IEEE.
+Ram Frost and Marian Katz. 1992. Orthography, phonology, morphology and meaning. Elsevier.
+Kyle Gorman, Lucas FE Ashby, Aaron Goyzueta, Arya D McCarthy, Shijie Wu, and Daniel You. 2020. The SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 40-50.
+Tomoki Hayashi, Ryuichi Yamamoto, Takenori Yoshimura, Peter Wu, Jiatong Shi, Takaaki Saeki, Yooncheol Ju, Yusuke Yasuda, Shinnosuke Takamichi, and Shinji Watanabe. 2021. Espnet2-tts: Extending the edge of tts research. arXiv preprint arXiv:2110.07840.
+
+Bruce Hayes and Colin Wilson. 2008. A maximum entropy model of phonotactics and phonotactic learning. Linguistic inquiry, 39(3):379-440.
+Sittichai Jiampojamarn and Grzegorz Kondrak. 2010. Phoneme alignment: An exploration. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 780-788.
+Ronald M Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational linguistics, 20(3):331-378.
+Jackson L Lee, Lucas FE Ashby, M Elizabeth Garza, Yeonju Lee-Sikka, Sean Miller, Alan Wong, Arya D McCarthy, and Kyle Gorman. 2020. Massively multilingual pronunciation modeling with wikipron. In Proceedings of the 12th language resources and evaluation conference, pages 4223-4228.
+Xinjian Li, Siddharth Dalmia, Juncheng Li, Matthew Lee, Patrick Littell, Jiali Yao, Antonios Anastasopoulos, David R Mortensen, Graham Neubig, Alan W Black, et al. 2020a. Universal phone recognition with a multilingual allophone system. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8249-8253. IEEE.
+Xinjian Li, Siddharth Dalmia, David Mortensen, Juncheng Li, Alan Black, and Florian Metze. 2020b. Towards zero-shot learning for automatic phonemic transcription. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8261-8268.
+Xinjian Li, Juncheng Li, Florian Metze, and Alan W Black. 2021. Hierarchical phone recognition with compositional phonetics. Proc. Interspeech 2021, pages 2461-2465.
+Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8-14.
+Lidia Mangu, Eric Brill, and Andreas Stolcke. 2000. Finding consensus in speech recognition: word error minimization and other applications of confusion networks. Computer Speech & Language, 14(4):373-400.
+Yajie Miao, Mohammad Gowayyed, and Florian Metze. 2015. Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 167-174. IEEE.
+David R Mortensen, Siddharth Dalmia, and Patrick Littell. 2018. Epitran: Precision g2p for many languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
+
+David R. Mortensen, Patrick Littell, Akash Bharadwaj, Kartik Goyal, Chris Dyer, and Lori S. Levin. 2016. Panphon: A resource for mapping IPA segments to articulatory feature vectors. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3475-3484. ACL.
+Sebastian Nordhoff and Harald Hammarström. 2011. Glottolog/langdoc: Defining dialects, languages, and language families as collections of resources. In First International Workshop on Linked Science 2011-In conjunction with the International Semantic Web Conference (ISWC 2011).
+Josef Robert Novak, Nobuaki Minematsu, and Keikichi Hirose. 2016. Phonetisaurus: Exploring grapheme-to-phoneme conversion with joint n-gram models in the WFST framework. Natural Language Engineering, 22(6):907-938.
+Ben Peters, Jon Dehdari, and Josef van Genabith. 2017. Massively multilingual neural grapheme-to-phoneme conversion. EMNLP 2017, page 19.
+Kanishka Rao, Fuchun Peng, Hasim Sak, and Françoise Beaufays. 2015. Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4225-4229. IEEE.
+Eric Sven Ristad and Peter N Yianilos. 1998. Learning string-edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(5):522-532.
+Tim Schlippe, Wolf Quaschningk, and Tanja Schultz. 2014. Combining grapheme-to-phoneme converter outputs for enhanced pronunciation generation in low-resource scenarios. In Spoken Language Technologies for Under-Resourced Languages.
+Terrence J Sejnowski and Charles R Rosenberg. 1987. Parallel networks that learn to pronounce english text. Complex systems, 1(1):145-168.
+Brian Yan, Siddharth Dalmia, David Mortensen, Florian Metze, and Shinji Watanabe. 2021. Differentiable allophone graphs for language-universal speech recognition. Proc. Interspeech 2021, pages 2471-2475.
+Kaisheng Yao and Geoffrey Zweig. 2015. Sequence-to-sequence neural net models for grapheme-to-phoneme conversion.
+Sevinj Yolchuyeva, Géza Németh, and Balint Gyires-Tóth. 2019. Grapheme-to-phoneme conversion with convolutional neural networks. Applied Sciences, 9(6):1143.
+Sevinj Yolchuyeva, Géza Németh, and Bálint Gyires-Tóth. 2020. Transformer based grapheme-to-phoneme conversion. arXiv preprint arXiv:2004.06338.
\ No newline at end of file
diff --git a/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/images.zip b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c7b44f67bb56ed498d68d9d58c33a6f4becaa5c4
--- /dev/null
+++ b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11043460fd5fe095aa49902b0d9d831868aeeff0ebbd24c551d09a2fbfd891bd
+size 223369
diff --git a/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/layout.json b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a0a0f2e9cf672d64e7ad3f6e3309781a9ce05f5d
--- /dev/null
+++ b/zeroshotlearningforgraphemetophonemeconversionwithlanguageensemble/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45e05f9325319739d037d50d12650b06afa73eec065bffa8d22fbd77823a1420
+size 268794
diff --git a/zinetlinkingchinesecharactersspanningthreethousandyears/ca6f350e-e720-44ea-be19-726f0fea1660_content_list.json b/zinetlinkingchinesecharactersspanningthreethousandyears/ca6f350e-e720-44ea-be19-726f0fea1660_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..030217811f4bda4079249901dfc940614dd361da
--- /dev/null
+++ b/zinetlinkingchinesecharactersspanningthreethousandyears/ca6f350e-e720-44ea-be19-726f0fea1660_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ffbb9b6a991a54eb7461d4d63a6b2f906dd282660b4e5ea6e5191575a2f0cde5
+size 69990
diff --git a/zinetlinkingchinesecharactersspanningthreethousandyears/ca6f350e-e720-44ea-be19-726f0fea1660_model.json b/zinetlinkingchinesecharactersspanningthreethousandyears/ca6f350e-e720-44ea-be19-726f0fea1660_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b0f94ba6ce8302e80cbd1e1bfc2857188a95eff9
--- /dev/null
+++ b/zinetlinkingchinesecharactersspanningthreethousandyears/ca6f350e-e720-44ea-be19-726f0fea1660_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c06f7d81ccbb001ad52948f1c115c10df42fcd03fb7af8145b3f01dded3d026c
+size 84026
diff --git a/zinetlinkingchinesecharactersspanningthreethousandyears/ca6f350e-e720-44ea-be19-726f0fea1660_origin.pdf b/zinetlinkingchinesecharactersspanningthreethousandyears/ca6f350e-e720-44ea-be19-726f0fea1660_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7b8bf4eec512cd2d97911515281588266a218e68
--- /dev/null
+++ b/zinetlinkingchinesecharactersspanningthreethousandyears/ca6f350e-e720-44ea-be19-726f0fea1660_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:609756dcfcca9c9c342e734a12dc1fa5a1a5cb9062ef48c9c8fd3cbef5a475e7
+size 1303244
diff --git a/zinetlinkingchinesecharactersspanningthreethousandyears/full.md b/zinetlinkingchinesecharactersspanningthreethousandyears/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..23f13568a77e14b4507ba1e3547517992db419ae
--- /dev/null
+++ b/zinetlinkingchinesecharactersspanningthreethousandyears/full.md
@@ -0,0 +1,305 @@
+# ZiNet: Linking Chinese Characters Spanning Three Thousand Years
+
+Yang Chi $^{1}$ , Fausto Giunchiglia $^{1,2,3}$ , Daqian Shi $^{3}$ , Xiaolei Dao $^{3}$
+
+Chuntao Li $^{4}$ , Hao Xu $^{1,2,*}$
+
+1School of Artificial Intelligence, Jilin University, Changchun, China
+
+$^{2}$ College of Computer Science and Technology, Jilin University, Changchun, China
+
+$^{3}$ DISI, University of Trento, Trento, Italy
+
+$^{4}$ School of Archaeology, Jilin University, Changchun, China
+
+yangchi19@mails.jlu.edu.cn, {xuhao,lct33}@jlu.edu.cn
+
+{fausto.giunchiglia, daqian.shi, xiaolei.diao}@unitn.it
+
+# Abstract
+
+Modern Chinese characters evolved from scripts used 3,000 years ago. To date, tens of thousands of glyphs of ancient characters have been discovered, and they must be deciphered by experts in order to interpret unearthed documents. Experts usually need to compare each character under examination with similar known characters across all historical periods. However, this process is inevitably limited by human memory and experience: it is time-consuming, and the associations an expert can draw cover only a small scope. To help researchers discover characters with similar glyphs, this paper introduces ZiNet, the first diachronic knowledge base describing the relationships and evolution of Chinese characters and words. In addition, powered by the knowledge of radical systems in ZiNet, this paper introduces a glyph similarity measurement between ancient Chinese characters, which can capture glyph-similar pairs that are potentially related in origin or semantics. Results show strong positive correlations between the scores of our methods and those of human experts. Finally, qualitative analysis and potential future applications are presented.
+
+# 1 Introduction
+
+The evolution of Chinese characters can be divided into two stages: the ancient stage (before the Han dynasty, 202 BC) and the clerical and standard script stage (after the Han dynasty) (Qiu et al., 2000). In the former stage, ancient characters did not have a fixed shape, and their glyphs show several differences with respect to modern characters. Representative scripts include the Oracle bone script (Oracle) of the Shang dynasty (about 1300 BC), which appears on animal bones and turtle shells (Boltz, 1986); the Chinese bronze script (Bronze, about 1000 BC), which appears on bronze wares (Shaughnessy, 1991); and the script of the Warring States period (States), mainly recorded on wooden slips (about 400 BC) (Qiu, 2014).
+
+
+Figure 1: Examples of the historical evolution of Chinese characters with unfixed glyphs and radical compositions (the pictures at the top show unearthed ancient characters written on a turtle shell, a bronze ware and wooden slips, respectively).
+
+The evolution of the glyphs of Chinese characters can be observed in Figure 1.
+
+Unearthed ancient documents contain a wealth of information about their historical periods (Boltz, 1986; Shaughnessy, 1991; Qiu, 2014), which is of great significance for understanding the culture and history of China as well as the wider world. Nevertheless, nearly half of the ancient characters have not yet been deciphered. The purpose of deciphering an ancient character is to find the modern Chinese character that evolved from it and to give sufficient interpretation and evidence in terms of glyphs, phonetics and semantics. According to the systematic nature and evolution laws of Chinese characters, experts need to compare the character under examination with similar known characters throughout history. However, tens of thousands of glyphs have appeared in history, and discovering similar characters relies heavily on expert experience, which is inevitably limited by human memory and reduces both comprehensiveness and efficiency.
+
+To measure similarity between ancient characters, automatic methods face two challenges: (1) Available resources for ancient Chinese are scarce, which means existing algorithms, especially supervised algorithms, cannot be directly applied. Moreover, ancient characters do not have features such as standard codes, pinyin and strokes, which are widely used to describe modern characters. (2) It is complicated to represent and measure ancient characters. For instance, edit distance is widely used to measure orthographic similarity between words in Romance languages; however, it is not suitable for measuring glyph similarity between pictographic Chinese characters.
+
+Based on the above considerations, the main contributions of this paper are: (1) ZiNet, the first diachronic knowledge base linking Chinese characters and words across historical periods; (2) as the first application of ZiNet, methods for glyph similarity measurement, which produce glyph similarity scores for pairs of ancient Chinese characters.
+
+Compared to existing lexical resources, ZiNet has two main characteristics: (1) it is designed around the systematic nature of Chinese characters; its smallest unit is the radical, the component of a character, which is significant for analyzing the semantics or phonetics of characters (details are discussed in Section 3.1). (2) ZiNet is diachronic: it integrates characters and words across historical periods and aims to portray their evolution. Powered by the knowledge in ZiNet, our glyph similarity measurement can capture glyphs that are potentially related in origin or semantics, which is meaningful for research on Chinese characters. Results show a strong positive correlation between the methods' scores and those of human experts.
+
+The paper is organized as follows: Section 2 presents the state of the art; Section 3 describes the key aspects of ZiNet; Section 4 describes the glyph similarity measurement; results are presented in Section 5; Section 6 discusses implications and future work; Section 7 concludes the paper and Section 8 discusses ethics.
+
+# 2 State of the Art
+
+# 2.1 Processing Ancient Chinese Character
+
+Zhang et al. (2020) built a real-world dataset, OBRejoin, and proposed an effective algorithm to rejoin Oracle fragments. Han et al. (2020) proposed an Oracle information system, known as IsOBS, which records Oracle rubbings, documents, Oracle characters and all their variants. Jiao et al. (2021) generated a network of Oracle characters according to their structures and documents, and classified semantically similar Oracle characters by analyzing the network modules.
+
+# 2.2 Lexical Resources and Cognate Discovery
+
+WordNet-oriented (George, 1995) lexical resources are widely used in NLP tasks. Their architecture uses synsets as basic semantic units that integrate word senses; synsets are related to each other, forming a conceptual semantic network. Multilingual resources such as the Open Multilingual WordNet (Bond and Foster, 2013), BabelNet (Navigli and Ponzetto, 2012) and the Universal Knowledge Core (Giunchiglia et al., 2017), which integrate words and concepts from all over the world, can support NLP tasks in languages that lack resources.
+
+According to historical linguistics, cognate identification needs to consider three dimensions: semantic, phonetic and orthographic similarity (Arnaud et al., 2017), which are also relevant when researching ancient Chinese. Hauer and Kondrak (2011) designed a rich set of features to capture similarity. Batsuren et al. (2020) considered evidence in the form of combined orthographic and geographic relatedness. Snyder et al. (2010) designed a Bayesian model that incorporates linguistic constraints, including customized priors for alphabet matching and morphological structure. Luo et al. (2019) automatically deciphered ancient languages by evaluating the accuracy of aligning words from a lost language to their counterparts in a known language. According to these works, orthographic similarity is an important indicator; however, measurements such as edit distance cannot be directly applied to Chinese characters.
+
+# 3 ZiNet
+
+# 3.1 Motivation
+
+ZiNet was created to link Chinese characters and words throughout history according to their glyphs, semantics and phonetics, in order to support knowledge-powered algorithms for processing Chinese and ancient Chinese. Here we give a general outline of the key knowledge needed to understand the structure of ZiNet and the reasons why it was developed.
+
+Relation between word and character: Chinese words are composed of one or more characters; a character can also be regarded as a monosyllabic word when expressing semantics. For example, the character (or monosyllabic word) "宿" (stay overnight) in Figure 1 can participate in forming the polysyllabic word "住宿" (get accommodation).
+
+
+Figure 2: Structure of the ZiNet.
+
+
+Relation between character and radical: a radical is the conventional structural unit that participates in the composition of characters; radicals are themselves characters, or variants of characters. For instance, in ZiNet, the radical of the single-component modern character "刀" (knife) is "刀" itself, and the radicals of the compound modern character "宿" (stay overnight) are "宀" (house), "亻" (person) and "百" (hundred).
+
+Radical and deciphering: knowledge of radicals is crucial for related research, because a radical is related to the phonetics or semantics of its character. For instance, "宀" (house) and "亻" (person) are related to the semantics of "宿" (stay overnight). Thus, through radicals and the relationships between them, experts can discover further phonetically or semantically related characters that may carry implicit clues for deciphering.
+
+Evolution: the glyph of a character evolves across historical periods. For instance, Figure 1 shows the radicals of the Oracle character "宿". In that ancient period, the bottom-right radical of "宿" was not "百" (hundred), but another similar character meaning "mat". These objects should be represented within a diachronic network in order to explore their implicit evolution rules.
+
+# 3.2 Structure of ZiNet
+
+At the current stage, ZiNet is composed of seven layers with relations between them (Figure 2); in the future, an eighth Ontology layer is planned, in order to describe human life through the various historical periods by linking synsets to concepts and topics.
+
+- Glyph: Character writing shapes. ZiNet integrates rubbing images from unearthed artifacts for each glyph.
+- Radical: The components of character. In ZiNet, all glyphs are associated with corresponding radicals at two levels of granularity (Compound radicals can be further split into finer-grained units. For instance, in Figure 2, $r_4$ is a compound radical, consisting of $r_1$ and $r_2$ ).
+- Ancient Character: Chinese characters in ancient historical periods. All ancient glyphs should be associated with the corresponding ancient character.
+- Character: Includes deciphered and undeciphered characters; the former are further divided into modern and dead characters. Ancient characters from different periods that represent the same character are linked. If an ancient character has been deciphered and is currently in use, it is linked to the corresponding modern character; if it has been deciphered but is no longer used, it is linked to the corresponding dead character. Finally, undeciphered ancient characters are linked to the corresponding undeciphered character.
+- Word: Monosyllabic (single-character) and multisyllabic words across Chinese history.
+- Sense: Meaning of word. All words should be associated with their corresponding senses.
+- Synset: A set of at least one synonym. All senses should be associated with the corresponding synset.
+
+The organization of the Word, Sense and Synset layers is based on WordNet. One word may have several senses; senses with the same meaning are linked to the same synset.
+
+The lower layers differ from existing lexical resources; they are designed following the systematic nature of Chinese characters. In order to study ancient Chinese characters, knowledge about glyphs and radicals must be explicitly provided.
+
+The other key characteristic is diachronism, which is reflected in two ways: (1) at the glyph level, ZiNet aims to cover the critical periods in the evolution of Chinese characters; Oracle, Bronze, States and modern characters have been integrated so far. (2) At the sense level, for each sense, the earliest and latest dynasties in which it appeared are annotated, according to the records provided by dictionaries.
+
+Here we introduce two relations inside the ancient character layer, which are used to measure glyph similarity:
+
+- Derivation (分化): a proliferation phenomenon of Chinese characters: based on a certain glyph of a mother character, one or several new characters are created whose glyphs are consistent with, and whose meanings are related to, the mother character. In ZiNet, if character $B$ (e.g., "束" (bag)) is derived from character $A$ (e.g., "束" (tie)), there is a Derivation relation between them.
+
+ancient_char(B) $\xrightarrow{D}$ ancient_char(A).
+
+- Indication (指事): an abstract method to create a new Chinese character by directly adding an indicative symbol at a specific position on the glyph of the mother character; the new character's meaning is related to the position indicated by the symbol. If a new character $B$ (e.g., "刃" (knife edge)) is created by adding a symbol at a specific position (e.g., the edge) of a pictographic character $A$ (e.g., "刀" (knife)), there is an Indication relation between them.
+
+ancient_char(B) $\xrightarrow{I}$ ancient_char(A).
+
+# 3.3 Statistics of ZiNet
+
+ZiNet is under constant development. All characters, glyphs and rubbing images were provided by experts on Chinese characters. The radicals of each ancient character, as well as the relations, were also segmented and proofread by experts, who consulted dozens of authoritative publications, the most representative being (Chinese Academy of Social Science (CASS), 1984) and (Guo and Hu, 1978). Most of the words and senses in ZiNet were acquired from authoritative ancient dictionaries, such as the Shuowenjiezi (Shen Xu, 1963), and a few original senses from the far ancient periods were provided by experts. Synsets were associated automatically according to the definitions of the senses.
+
+| Object | Statistics |
+| --- | --- |
+| Rubbing image | 15175=Oracle; 14289=Bronze; 28421=States |
+| Glyph | 2913=Oracle; 3225=Bronze; 7232=States |
+| Radical | 584=Oracle; 853=Bronze; 868=States |
+| Ancient character | 2543=Oracle; 2319=Bronze; 5632=States |
+| Character | Deciphered character: 1283=Oracle; 2466≤Bronze; 4478≤States; 18966≤Present. Undeciphered character: 1260=Oracle; 1714≤Bronze; 4118≤States |
+| Word | 423997 |
+| Sense | 69825≤206BC; 177570≤618AD; 315181≤1368AD; 386949≤1840AD; 570764≤Present |
+| Synset | 366544 |
+
+Table 1: ZiNet statistics ("=" means an object existed in that historical period; "≤" means an object had appeared before, or during that period).
+
+Up to now, ZiNet covers three historical Chinese periods: Oracle, Bronze, and States. Table 1 lists statistical information. ZiNet is extensible: as Figure 2 shows, the Glyph and Ancient Character layers are independent for each historical period, which allows ZiNet to be conveniently extended to other historical periods in the future.
+
+# 4 Glyph Similarity Measurement
+
+# 4.1 Key Points of Glyph Similarity
+
+The task is to assign glyph similarity scores to ancient character pairs: this includes not only the pictographic similarity of character shapes, but also the similarity between their radical systems. In this paper, we consider the following four points:
+
+(1) Similar character shape: Two pictographic characters have similar shapes. For example, the pictographic character "刀" (knife) in Figure 1 is drawn in the form of a knife. If the shape of another character also resembles a knife, the two are defined as glyph similar.
+(2) Sharing radicals: Two characters share radicals. For instance, the character "宿" (stay overnight) in Figure 2 is formed by combining radicals according to their respective meanings, rather than by directly drawing the object. If another character shares radicals with it, the two are defined as glyph similar.
+
+
+Figure 3: Procedure to generate glyph embedding for pictographic similarity.
+
+
+(3) There are Derivation or Indication relations between their radicals: In general, two characters may not share radicals, but their radicals may be related through Derivation or Indication (Section 3.2). If two characters contain such related radicals, they are defined as glyph similar.
+
+(4) Their radicals are universal when composing a character: In other cases, the radicals of two characters have no relations; however, when composing a character, they are universally used to express the same semantics. Universal radical pairs can be discovered automatically by exploring radical pairs that are mutually substituted in synchronically or diachronically different glyphs of the same character in ZiNet. For example, in Figure 1, the character "牢" (animal pen) has two different Oracle glyphs: the first contains the radicals "宀" (house) and "牛" (cow), whereas the second contains "宀" (house) and "羊" (sheep). In this case, "牛" (cow) and "羊" (sheep) are a pair of synchronically substitutable radicals, and we consider characters containing them to be glyph similar.
+
+Pictographic Similarity (PicSim), Radical LCS Similarity (RLCSSim) and Graph Similarity (GraphSim) are introduced in Sections 4.2, 4.3 and 4.4, respectively. While the former measures the similarity between character shapes, RLCSSim and GraphSim focus on measuring similarities between radical systems.
+
+# 4.2 Pictographic Similarity
+
+The intuition for measuring the similarity of pictographic characters is to treat them as pictures. A Deep Residual Network (ResNet) (He et al., 2016) is used to obtain a high-dimensional vector for each image of an ancient character, as shown in Figure 3. There are $n$ ancient characters and $m$ character images in total; the set of images is $X(x_{1},x_{2},\ldots x_{m})$ and the set of characters is $C(c_{1},c_{2},\ldots c_{n})$. The network's task is to classify each image $x$ into its corresponding character $c$; $p(c|x,\varphi)$ denotes the probability that image $x$ belongs to character $c$, where $\varphi$ are the parameters to be trained. The network input is the image $x$ and the output is a $|C|$-dimensional vector in which each dimension represents the probability $p$ of a character label $c$. At the training step, images and their associated character labels are provided, and we minimize the cross-entropy loss to obtain the optimal parameters $\varphi$.
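+The following PyTorch sketch illustrates this classification step; it is not the released code, and the ResNet variant, data loader and hyperparameters are assumptions made for illustration only.
+
+```python
+# Train a ResNet to classify rubbing images into character labels and reuse the
+# |C|-dimensional probability vector as the image embedding I.
+import torch
+import torch.nn.functional as F
+from torchvision.models import resnet18
+
+def train_classifier(dataloader, num_chars, epochs=10, device="cpu"):
+    model = resnet18(num_classes=num_chars).to(device)
+    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
+    for _ in range(epochs):
+        for images, labels in dataloader:          # (image batch, character ids)
+            logits = model(images.to(device))
+            loss = F.cross_entropy(logits, labels.to(device))  # p(c | x, phi)
+            optim.zero_grad(); loss.backward(); optim.step()
+    return model
+
+@torch.no_grad()
+def image_embedding(model, image):
+    """Return the |C|-dimensional probability vector for one image tensor."""
+    return F.softmax(model(image.unsqueeze(0)), dim=-1).squeeze(0)
+```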
+
+The $|C|$-dimensional output vector is then directly used as the image embedding $\vec{I}$. Next, given the set $ImageSet$ containing all images belonging to a glyph $g$, the glyph embedding $\vec{G}$ of $g$ is set to the average of the embeddings of the images in $ImageSet$:
+
+$$
+\vec{G}_{i} = \frac{1}{|ImageSet_{i}|} \sum_{x_{j} \in ImageSet_{i}} \vec{I}_{j} \tag{1}
+$$
+
+After obtaining the glyph embeddings $\vec{G}$, cosine similarity is used to compare glyph pairs, scaled by a hyper-parameter $\alpha$: when two glyphs share the same or related radicals, $\alpha = 1$; otherwise, $\alpha$ is set to a value greater than 0 and less than 1. $RSet_{i}$ is the collection of the radicals of $g_{i}$ together with their related radicals (via the derivative, indicative or universal relations introduced in Section 4.1).
+
+$$
+\operatorname{Sim}(g_{i}, g_{j}) = \alpha \operatorname{Cosine}(g_{i}, g_{j}), \tag{2}
+$$
+
+$$
+\left\{ \begin{array}{ll} \alpha = 1, & RSet_{i} \cap RSet_{j} \neq \emptyset \\ 0 < \alpha < 1, & \text{otherwise} \end{array} \right.
+$$
+
+Finally, given the $GlyphSet$ containing all glyphs belonging to a character $c$, the PicSim between two characters is the maximum similarity over all pairs of their glyphs:
+
+$$
+\operatorname{PicSim}(c_{k}, c_{g}) = \operatorname{Max} \left\{ \operatorname{Sim}(g_{i}, g_{j}) \right\}, \tag{3}
+$$
+
+$$
+\left(g_{i} \in \text{GlyphSet}_{k},\; g_{j} \in \text{GlyphSet}_{g}\right)
+$$
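+A minimal sketch (with assumed inputs, not the released code) of Eqs. (1)-(3): image embeddings are averaged into glyph embeddings, glyph pairs are compared with an $\alpha$-scaled cosine similarity, and PicSim takes the maximum over all glyph pairs of the two characters.
+
+```python
+import numpy as np
+
+def glyph_embedding(image_embeddings):                       # Eq. (1)
+    return np.mean(image_embeddings, axis=0)
+
+def glyph_sim(g_i, g_j, rset_i, rset_j, alpha=0.5):          # Eq. (2)
+    cos = float(np.dot(g_i, g_j) / (np.linalg.norm(g_i) * np.linalg.norm(g_j)))
+    scale = 1.0 if rset_i & rset_j else alpha                # shared/related radicals?
+    return scale * cos
+
+def pic_sim(glyphs_k, glyphs_g, rsets_k, rsets_g):           # Eq. (3)
+    return max(glyph_sim(gi, gj, ri, rj)
+               for gi, ri in zip(glyphs_k, rsets_k)
+               for gj, rj in zip(glyphs_g, rsets_g))
+```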
+
+
+Figure 4: Procedure to generate glyph embedding for Graph similarity.
+
+# 4.3 Radical LCS Similarity
+
+RLCSSim measures the similarity between radical systems. We represent a character as a sequence of radicals and use the longest common subsequence (LCS) to measure glyph similarity between characters. Each glyph is represented as a sequence of its smallest radical units, $Seq(r_1, r_2, \ldots, r_k)$, where $k$ is the number of radicals of that glyph. The order of the radicals $r$ is determined by their positions within the character, following the rules first left then right, first up then down, and first inside then outside.
+
+Eq. 4 gives the RLCSSim between glyphs, where RLCS is the longest common subsequence of identical or related radicals between $Seq_{i}$ and $Seq_{j}$. When computing the RLCS, we consider not only identical radical pairs but also radical pairs related through derivative, indicative, or universal relations (Section 4.1). If the two corresponding radicals are identical, the RLCS score is incremented by 1; if they are only related, it is incremented by a hyper-parameter $\theta$, $0 < \theta < 1$. After obtaining the glyph similarities, the similarity between characters is computed as in Eq. 3.
+
+$$
+\operatorname{RLCSSim}(g_{i}, g_{j}) = \frac{2 \times \left| \operatorname{RLCS}(Seq_{i}, Seq_{j}) \right|}{\left| Seq_{i} \right| + \left| Seq_{j} \right|} \tag{4}
+$$
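+A sketch of Eq. (4) under our reading of the text (not the released code): a weighted longest-common-subsequence score over radical sequences in which identical radicals contribute 1 and merely related radicals (derivative, indicative or universal pairs) contribute $\theta$.
+
+```python
+# Weighted LCS over two radical sequences; `related` is a set of related pairs.
+def rlcs_sim(seq_i, seq_j, related, theta=0.5):
+    n, m = len(seq_i), len(seq_j)
+    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
+    for a in range(1, n + 1):
+        for b in range(1, m + 1):
+            ri, rj = seq_i[a - 1], seq_j[b - 1]
+            if ri == rj:
+                gain = 1.0                 # identical radicals
+            elif (ri, rj) in related or (rj, ri) in related:
+                gain = theta               # related radicals
+            else:
+                gain = 0.0
+            dp[a][b] = max(dp[a - 1][b], dp[a][b - 1], dp[a - 1][b - 1] + gain)
+    return 2 * dp[n][m] / (n + m)
+
+# Example: rlcs_sim(["宀", "牛"], ["宀", "羊"], related={("牛", "羊")})
+```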
+
+# 4.4 Graph Similarity
+
+RLCSSim is discrete and only covers character pairs that share related radicals. In order to represent glyphs as high-dimensional vectors and to obtain similarities among all character pairs, we introduce GraphSim. We construct an undirected graph $Graph$ based on ZiNet with the purpose of associating all Chinese glyphs through radicals. As shown in Figure 4, the set of nodes $N$ includes characters $c$, glyphs $g$ and radicals $r$. There are three types of relations in the graph: $R_{1}(c,g)$, $R_{2}(g,r)$ and $R_{3}(r,r)$; $R_{1}$ describes the inclusion relationship between characters and glyphs; $R_{2}$ describes the inclusion relationship between glyphs and radicals; $R_{3}$ contains the derivative, indicative and universal relationships (Section 4.1) between radicals.
+
+As the next step, based on $Graph$, the random-walk algorithm node2vec (Grover and Leskovec, 2016) is used to generate the glyph embedding $\vec{G}$ for each glyph node, and cosine similarity is used to obtain the similarity between glyphs. Finally, the GraphSim between characters is obtained in the same way as Eq.3.
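+
+A rough sketch of this step is given below. It builds the graph with networkx and, since the OpenNE tool used in the paper is not shown here, substitutes the `node2vec` Python package as a stand-in; the dictionaries describing $R_{1}$, $R_{2}$ and $R_{3}$ are assumed inputs.
+
+```python
+import networkx as nx
+import numpy as np
+from node2vec import Node2Vec  # stand-in for the OpenNE implementation used in the paper
+
+def build_graph(char_glyphs, glyph_radicals, radical_relations):
+    """Undirected graph over character, glyph and radical nodes (relations R1, R2, R3)."""
+    g = nx.Graph()
+    g.add_edges_from((c, gl) for c, gls in char_glyphs.items() for gl in gls)   # R1
+    g.add_edges_from((gl, r) for gl, rs in glyph_radicals.items() for r in rs)  # R2
+    g.add_edges_from(radical_relations)                                         # R3
+    return g
+
+def glyph_vectors(graph, dim=50):
+    """Run node2vec and return one embedding per node (50 dimensions, as in Section 5.2)."""
+    model = Node2Vec(graph, dimensions=dim, workers=2).fit(window=5, min_count=1)
+    return {node: model.wv[str(node)] for node in graph.nodes}
+
+def cosine(u, v):
+    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
+```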
+
+# 5 Evaluation
+
+# 5.1 Design of Evaluation
+
+We used Oracle data as the sample for evaluation, which contains 2543 Oracle characters, 2912 glyphs, 586 radicals and 15,175 character images; among them, 1283 characters remain undeciphered. The meanings of the characters cover every domain of that ancient age.
+
+Experts were invited to further manually annotate the dataset: (1) 5400 Oracle character pairs were randomly selected from the 2543 characters, and experts were asked to score them for glyph similarity. The score ranges from 0 to 10; the most similar character pairs should be scored 10. Three experts participated in this work, and we took the median as the final score for each pair of characters. (2) Experts were asked to provide up to five of the most similar characters for each Oracle character in the sample. One expert first annotated similar characters; another expert then verified the annotations and removed the characters they judged incorrect. In total, we obtained 6405 similar pairs; on average, 2.5 similar characters were provided for each Oracle character, which we represent as $HSimSet\{(c_1, c_{11}), \dots, (c_i, c_{in}), \dots\}, i \leq 2543, n \leq 5$.
+
+There are three quantitative and qualitative evaluation indicators:
+
+- Correlation: Spearman's correlation was used to evaluate the correlation between the similarity scores annotated by experts and those produced by our methods over the 5400 pairs of Oracle characters. The closer the value is to 1, the stronger the positive correlation; conversely, the closer it is to -1, the stronger the negative correlation.
+
+| Method | Top-5 | Top-10 | Top-20 | Top-50 | Top-100 | Top-200 |
+| --- | --- | --- | --- | --- | --- | --- |
+| PicSim | 19.53% | 24.03% | 29.74% | 41.25% | 50.27% | 59.25% |
+| RLCSSim | 52.63% | 65.21% | 74.91% | 86.15% | 91.83% | 95.93% |
+| GraphSim | 53.90% | 64.84% | 74.96% | 85.92% | 91.69% | 96.03% |
+| RLCSSim+PicSim | 42.39% | 52.51% | 64.59% | 78.61% | 87.63% | 94.53% |
+| RLCSSim+GraphSim | 59.75% | 70.37% | 78.86% | 88.70% | 93.99% | 97.38% |
+| RLCSSim+PicSim+GraphSim | 57.13% | 69.49% | 79.75% | 89.41% | 95.08% | 97.86% |
+
+Table 3: Results of coverage of the six methods for Top-5 to Top-200 recommendations (the recommendation size $k$ was set to 5-200 according to the application scenarios in research).
+
+- Coverage: the proportion of the 6405 expert-annotated similar character pairs that appear in the top-$k$ similar character pairs provided by our methods (Eq.5). This indicator evaluates how much information users need to browse to find the relevant character. We use $MSimSet\{(c_1, c_{11}), \dots, (c_i, c_{ik}), \dots\}, i \leq 2543, k \leq 2543$, to represent the top-$k$ set of character pairs given by our methods; a minimal code sketch of both quantitative indicators is given after this list.
+
+$$
+\operatorname{Coverage} = \frac{\left| HSimSet \cap MSimSet \right|}{\left| HSimSet \right|} \tag{5}
+$$
+
+- Qualitative analysis: we show top-5 recommendation examples to evaluate the performance and to illustrate the potential semantic relations at the radical level captured by the method.
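+
+Under the assumption that both the expert annotations and the method outputs are available as plain Python structures, the two quantitative indicators can be computed as in the sketch below (SciPy's `spearmanr` for the correlation, and Eq.5 for the coverage).
+
+```python
+from scipy.stats import spearmanr
+
+def correlation(expert_scores, method_scores):
+    """Spearman's correlation between expert and method scores over the same ordered pairs."""
+    rho, p_value = spearmanr(expert_scores, method_scores)
+    return rho, p_value
+
+def coverage(h_sim_set, m_sim_set):
+    """Eq.5: fraction of expert-annotated similar pairs found among the method's top-k pairs."""
+    return len(h_sim_set & m_sim_set) / len(h_sim_set)
+
+# Illustrative usage with sets of (character, similar_character) tuples:
+# coverage({("c1", "c11"), ("c2", "c21")}, top_k_pairs_from_method)
+```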
+
+# 5.2 Configuration
+
+In the experiment, the ResNet network had 18 layers, the batch size was 64 and the learning rate was 0.001; the network was trained for 90 epochs. The hyper-parameter $\alpha$ was 0.4 and $\theta$ in RLCSSim was 0.7. The node2vec algorithm for GraphSim was implemented with the OpenNE tool; the dimension of the output glyph vector was 50.
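+
+For reference, the training setup described above could be reproduced roughly as follows; the sketch uses torchvision's ResNet-18 and plain SGD, since the optimizer is not specified in the paper, and the data loader (batch size 64) is assumed to exist.
+
+```python
+import torch
+import torch.nn as nn
+from torchvision.models import resnet18
+
+NUM_CHARACTERS = 2543  # |C| for the Oracle sample described in Section 5.1
+
+model = resnet18(num_classes=NUM_CHARACTERS)   # 18-layer ResNet
+criterion = nn.CrossEntropyLoss()
+optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # optimizer choice is an assumption
+
+def train(loader, epochs=90):                  # 90 epochs; batch size 64 is set in the loader
+    model.train()
+    for _ in range(epochs):
+        for images, labels in loader:
+            optimizer.zero_grad()
+            loss = criterion(model(images), labels)
+            loss.backward()
+            optimizer.step()
+```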
+
+| Method | Correlation score | p-value |
+| --- | --- | --- |
+| PicSim | 0.3241 | <.001 |
+| RLCSSim | 0.8188 | <.001 |
+| GraphSim | 0.7763 | <.001 |
+| RLCSSim+PicSim | 0.7614 | <.001 |
+| RLCSSim+GraphSim | 0.8391 | <.001 |
+| RLCSSim+PicSim+GraphSim | 0.8422 | <.001 |
+
+Table 2: Results of Spearman's correlation.
+
+In addition, this paper designed three combinations of the basic methods: RLCSSim+PicSim, RLCSSim+GraphSim, and RLCSSim+PicSim+GraphSim. Their scores are weighted sums of the scores of the basic methods. For the first two combinations, the weight of each basic method was 0.5; for RLCSSim+PicSim+GraphSim, the weights were set to 0.4, 0.3 and 0.4, respectively. We additionally annotated 100 ancient character pairs to set these hyper-parameters. The code of the experiment can be acquired here2.
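+
+Interpreted as a weighted sum, the three-way combination can be written as a one-line helper; the weights below are the ones reported above, and the two-way combinations simply use 0.5 for each component.
+
+```python
+WEIGHTS = {"rlcs": 0.4, "pic": 0.3, "graph": 0.4}  # weights for RLCSSim+PicSim+GraphSim
+
+def combined_sim(rlcs_score, pic_score, graph_score, w=WEIGHTS):
+    """Weighted combination of the three basic similarity scores for one character pair."""
+    return w["rlcs"] * rlcs_score + w["pic"] * pic_score + w["graph"] * graph_score
+```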
+
+# 5.3 Results and Discussions
+
+Spearman's correlations for the six methods are shown in Table 2; all of them show positive correlations with respect to the scores from experts. In more detail, RLCSSim+PicSim+GraphSim has the strongest positive correlation, at 0.8422, while the performance of PicSim is not as good, with a value of 0.3241.
+
+Table 3 shows the results of the coverage indicator. RLCSSim+GraphSim achieved the best performance for top-5 and top-10 recommendations, while for larger recommendation sizes (top-20 to top-200) RLCSSim+PicSim+GraphSim performs best. For top-5 recommendations, four methods cover more than half of the similar characters, while for top-200 recommendations the coverage rises to more than $97\%$ for RLCSSim+PicSim+GraphSim.
+
+As the results show, RLCSSim and GraphSim, which are powered by knowledge of radical systems, perform better than PicSim in terms of both correlation and coverage. PicSim is suited to comparing the shapes of single pictographic characters; however, some character pairs with similar shapes are not similar at the radical-system level. Thus, PicSim reduced the coverage of RLCSSim+PicSim and RLCSSim+PicSim+GraphSim for small recommendation sizes. Nevertheless, PicSim is useful for discovering new similarities as a supplement to the knowledge-powered methods, and for larger recommendation sizes RLCSSim+PicSim+GraphSim performs better than RLCSSim+GraphSim alone. Overall, the results show that radical systems are the crucial indicator of glyph similarity considered by human experts; it is necessary to represent and compute the potential relationships between the radical systems of character pairs, rather than treating characters only as pictures. For application scenarios with small recommendation sizes, RLCSSim+GraphSim is the best choice; for larger recommendation sizes, the combination of all three methods is best.
+
+Figure 5: Cases of top-5 characters by glyph similarity. For each character, the image, the Title in modern Chinese (e.g., "刀"), the English Annotation (e.g., "knife") and the Similarity Score (e.g., $78.37\%$) are shown. If the character is undeciphered, its English Annotation is "-" and its Title is written as the combination of the Titles of its radicals (e.g., "一" (house;person)).
+
+# 5.4 Qualitative Analysis
+
+Figure 5 shows five top-5 recommendations produced by the RLCSSim+PicSim+GraphSim method; the first three examples are single pictographic characters, while the other two are compound characters formed from more than one radical.
+
+The examples show that many glyph-similar character pairs are also semantically related.
+
+The first reason is that glyph-similar pictographic characters are always semantically related, which can be captured by the PicSim method. As the figure shows, the characters similar to "刀" (knife) are related to the knife edge and to cutting behavior, while the characters similar to "鼎" (tripod) are mostly related to vessels for sacrifices and food. These characters have similar shapes, so they can be recognized by PicSim. Another significant reason is that our method is also knowledge-powered and can capture potential relations at the radical level. Regarding the compound character "宿" (stay overnight), at the radical level the meanings of all the recommended characters deal with a person doing activities in a house. Analogously, "牢" (animal pen) is formed by the radicals "宀" (house) and "牛" (cow), and three of its similar characters also combine animals and houses: for instance, the most similar character "廐" (horse stable) is formed by "宀" (house) and "馬" (horse), whose meaning is also related to "animal pen". This character gets a higher similarity score because RLCSSim and GraphSim captured the semantic similarity between the radicals "馬" (horse) and "牛" (cow).
+
+In addition, this method tends to give higher scores to character pairs with potential relations. For instance, among the recommendations for the character "月" (moon), the recommended character "夕" (dusk) and "月" (moon) were derived from the same character. Another recommended character, "舟" (boat), is the diachronically substitutable radical of "月": some Chinese characters (e.g., "前" (forward)) were formed with "舟" (boat) in the ancient age; however, today their radicals have been changed to "月" (moon).
+
+# 6 Implications and Future Work
+
+This work first put forward a diachronic Chinese lexical resource, which expands the architecture of Princeton WordNet by adding several layers to describe diachronic characters under the lexical layer. Words are regarded as the basic unit in most existing semantic lexical databases (George, 1995; Bond and Foster, 2013; Navigli and Ponzetto, 2012; Giunchiglia et al., 2017); however, based on our investigations, the glyphs and radicals of Chinese characters can also convey semantics, and they have been used to enrich the input information in several NLP tasks (Meng et al., 2019; Tao et al., 2019; Sun et al., 2021; Tao et al., 2021). Besides, in interdisciplinary research involving historical linguistics, Chinese history, paleography, etc., diachronic characters and words, glyphs and semantics are always discussed together because of their close links; yet, in this low-resource setting, existing NLP algorithms have not been widely applied in these fields.
+
+The significance of ZiNet is that it provides a more complete architecture to support diverse NLP tasks: it introduces not only lexical information but also glyph and character information, and it covers not only modern Chinese but also ancient Chinese, placing them in the same diachronic space. We hope this work can inspire more diverse architectures for language resources and promote the development of more NLP tasks in interdisciplinary research.
+
+At the application level, ZiNet holds potential for knowledge-powered Chinese NLP and image-processing algorithms, especially in interdisciplinary research such as cognate discovery, word-sense tracking and rubbing character recognition. ZiNet can also support platforms that provide experts in related fields with domain knowledge and quick information suggestions, for instance retrieval of the evolution timeline of characters and words, annotated corpora of unearthed documents, and recommendations of similar characters at various historical times.
+
+In future work, ZiNet will be further expanded to other historical periods, and synsets will be linked into a conceptual ontology layer to describe the topics of Chinese in various historical periods. At the application level, we will apply ZiNet to other knowledge-powered tasks, for instance using radical knowledge to enhance the performance of ancient character image recognition, and we will further explore how it can help researchers study and decipher ancient characters. Meanwhile, we are developing a platform to support ZiNet services, which will be opened in the near future.
+
+# 7 Conclusion
+
+This paper proposed ZiNet, a diachronic Chinese knowledge base, which is the first structured resource dedicated to describing the relations and evolution of Chinese characters and words. Based on ZiNet, we demonstrated methods for calculating glyph similarity between ancient Chinese characters. The results show a strong positive correlation between the scores obtained from our methods and those from experts. We hope this work can serve experts in Chinese linguistics, history and related fields.
+
+# 8 Ethics
+
+The data of ZiNet came mainly from the School of Archaeology, Jilin University, and we obtained permission for further development. Other data was processed from ancient dictionaries, which are open for access and research. ZiNet also has limitations. Since the ancient characters are thousands of years removed from the present, much information has been lost, and there are disputes in existing academic theories, for instance about the identity and meaning of a certain character or about the character to which a glyph belongs. As a result, ZiNet is inevitably incomplete and tends to follow the "mainstream" theories, which may themselves be proved incorrect in the future. Therefore, in some cases, glyph similarity measurement and other applications based on ZiNet may produce misleading results or omissions. Users can rely on ZiNet and its applications to obtain suggestions efficiently; however, they need to rely on their own professional knowledge for judgment. All the same, we believe the positive impact of our work far outweighs these limitations.
+
+# Acknowledgements
+
+This work is partially supported by the National Natural Science Foundation of China through grant No. 62077027.
+
+# References
+
+Adam St Arnaud, David Beck, and Grzegorz Kondrak. 2017. Identifying cognate sets across dictionaries of related languages. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2519-2528.
+Khuyagbaatar Batsuren, Gábor Bella, and Fausto Giunchiglia. 2020. CogNet: A large-scale cognate database. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3136-3145.
+William G. Boltz. 1986. Early Chinese writing. World Archaeology, 17(3):420-436.
+Francis Bond and Ryan Foster. 2013. Linking and extending an open multilingual wordnet. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1352-1362.
+Chinese Academy of Social Science (CASS). 1984. Yin Zhou Jinwen Ji Cheng (Jinwen integration in Shang and Zhou Dynasty). Zhonghua Book Company, Beijing.
+
+Miller A. George. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41.
+Fausto Giunchiglia, Khuyagbaatar Batsuren, and Gabor Bella. 2017. Understanding and exploiting language diversity. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), pages 4009-4017.
+Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855-864.
+Moruo Guo and Houxuan Hu. 1978. Jiaguwen He Ji (The Comprehensive Dictionary of Oracle Characters). Zhonghua Book Company, Beijing.
+Xu Han, Yuzhuo Bai, Keyue Qiu, Zhiyuan Liu, and Maosong Sun. 2020. IsOBS: An information system for oracle bone script. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 227-233.
+Bradley Hauer and Grzegorz Kondrak. 2011. Clustering semantically equivalent words into cognate sets in multilingual lists. In Proceedings of 5th international joint conference on natural language processing, pages 865-873.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778.
+Qingju Jiao, Yuanyuan Jin, Yongge Liu, Shengwei Han, Guoying Liu, Nan Wang, Bang Li, and Feng Gao. 2021. Module structure detection of oracle characters with similar semantics. Alexandria Engineering Journal, 60(5):4819-4828.
+Jiaming Luo, Yuan Cao, and Regina Barzilay. 2019. Neural decipherment via minimum-cost flow: From Ugaritic to Linear B. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3146-3155.
+Yuxian Meng, Wei Wu, Fei Wang, Xiaoya Li, Ping Nie, Fan Yin, Muyu Li, Qinghong Han, Xiaofei Sun, and Jiwei Li. 2019. Glyce: Glyph-vectors for Chinese character representations. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), pages 2742-2753.
+Roberto Navigli and Simone P. Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217-250.
+Jane Qiu. 2014. Ancient times table hidden in Chinese bamboo strips. Nature.
+
+Xigui Qiu, Gilbert L. Mattos, and Jerry Norman. 2000. Chinese Writing. The Society for the Study of Early China and The Institute of East Asian Studies, University of California, Berkeley, California.
+Edward L. Shaughnessy. 1991. Sources of Western Zhou History: Inscribed Bronze Vessels. University of California Press, Berkeley, Los Angeles, Oxford.
+Shen Xu. 1963. Shuo Wen Jie Zi. Zhonghua Book Company, Beijing.
+Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1048-1057.
+Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu, and Jiwei Li. 2021. ChineseBERT: Chinese pretraining enhanced by glyph and pinyin information. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 2065-2075.
+Hanqing Tao, Shiwei Tong, Kun Zhang, Tong Xu, Qi Liu, Enhong Chen, and Min Hou. 2021. Ideography leads us to the field of cognition: A radical-guided associative model for Chinese text classification. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), pages 13898-13906.
+Hanqing Tao, Shiwei Tong, Hongke Zhao, Tong Xu, Binbin Jin, and Qi Liu. 2019. A radical-aware attention-based model for Chinese text classification. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), pages 5125-5132.
+Chongsheng Zhang, Ruixing Zong, Shuang Cao, Yi Men, and Bofeng Mo. 2020. AI-powered oracle bone inscriptions recognition and fragments rejoining. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), pages 5309-5311.
\ No newline at end of file